Storage Soup

February 3, 2009  8:18 PM

NetApp V-Series supports Texas Memory SSDs; users yawn

Beth Pariseau

NetApp blogger and chief technical architect Val Bercovici leaked the news yesterday that NetApp’s V-Series storage gateways can now front Texas Memory Systems’ RamSan-500 solid-state storage arrays.

This is the follow-on to NetApp’s announcement last month that it planned to offer Flash-as-disk to go along with its Flash-as-Cache and DRAM-based Performance Acceleration Module (PAM).

A common issue with deploying solid-state drives, analysts have said since EMC first announced support for STEC Inc. SSDs in Symmetrix last year, is integrating them with storage management software tools. Until recently, provisioning SSDs could be as complex, slow and rigid as provisioning hard disks was before storage virtualization.

According to Bercovici, while the RamSan acts as the high IOPS storage behind the V-Series, the V-Series gives it storage management features through NetApp’s WAFL operating system:

WAFL’s log-structured architecture implements native load-balancing of write operations via write-aggregation to solid state NVRAM. This includes an innovative data layout engine which enables WAFL to “write anywhere” in order to optimize the placement of data across the appropriate media. For flash, that means native built-in wear-leveling optimized to spread writes over as many flash cells as possible in parallel, with minimum wear to each individual flash cell.
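As a rough illustration of the wear-leveling idea described above — not NetApp’s actual WAFL implementation, and all names here are invented — a log-structured layout aggregates buffered writes and then places each block on the least-worn cell, which spreads wear evenly across the media:

```python
# Hypothetical sketch of log-structured "write anywhere" placement with
# wear-leveling. Illustrative only; this is not WAFL code.
from collections import deque

class LogStructuredLayout:
    def __init__(self, num_cells):
        self.num_cells = num_cells
        self.wear = [0] * num_cells   # write count per flash cell
        self.buffer = deque()         # NVRAM-style write aggregation

    def write(self, block):
        """Writes land in the aggregation buffer first, not on media."""
        self.buffer.append(block)

    def flush(self):
        """Place each buffered block on the currently least-worn cell."""
        placements = []
        while self.buffer:
            cell = min(range(self.num_cells), key=lambda c: self.wear[c])
            self.wear[cell] += 1
            placements.append((self.buffer.popleft(), cell))
        return placements

layout = LogStructuredLayout(num_cells=8)
for i in range(16):
    layout.write(f"block{i}")
placed = layout.flush()
# Wear ends up even: 16 writes over 8 cells means 2 per cell.
```

Because the layout is free to "write anywhere," no single cell absorbs a hot spot — the same property the quote attributes to WAFL on flash.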

According to NetApp chief marketing officer Jay Kidd:

[The V-Series and RamSan] effectively [create] the industry’s only Enterprise Flash storage system that supports thin provisioning, fast snapshots, remote mirroring, and data deduplication

So far, though, this approach to Flash-as-disk isn’t really flying with storage admins. “This would be the most expensive way of doing SSD,” Tom Becchetti, storage admin for a manufacturing company and NetApp customer, wrote today in an email. “What I would like to see is just how EMC implemented their SSD. They have SSD that is physically and logically the same form factor as the hard drive. It would give you the most flexibility, and as more SSD vendors show up on the scene, the cost will dramatically fall.”

Denizens of the storage blogosphere were even more outspoken. “Is that it??” was the title of a post on U.K. storage end user Martin Glassborow’s blog, Storagebod. “I expected more, I expected something which was going to force EMC to raise the bar on their SSD implementation.”

February 3, 2009  4:10 PM

Archive migration company makes software generally available

Beth Pariseau

We reported on an archive migration software startup, Procedo, late last year while it was still in the early stages of delivering product (usually attached to services). Today, the company came out with its first generally available software offering for migrating archive data between repositories while maintaining chain of custody. This GA offering also comes with some more features folded in, including storage resource discovery and reporting.

According to founder and CEO Joe Kvidera, the Procedo Archive Migration Manager (PAMM) Suite 3.0 bundles what had been separate pieces of software and migration tools into one product whose features are unlocked with license keys. During services engagements, by contrast, Procedo staffers (often brought in by a bigger company like Symantec Corp.) bring the appropriate sets of tools to conduct the migration.

Pricing for those license keys depends on the applications and capacity to be migrated, according to Kvidera, from $5,000 per TB for a simple file-system migration up to $45,000 per TB for migrations involving complex applications with proprietary APIs, such as EMC Corp.’s Centera. Kvidera said the average price would be somewhere in the range of $25,000 per TB.

PAMM is made up of a cluster of at least three servers, one for ingesting data from the old archive, another for writing to the new archive and a SQL server that tracks each message according to the object ID assigned to it at either end. The SQL database uses snapshots temporarily stored on storage area network (SAN) storage to validate that object IDs match on both ends, while the data is converted and migrated. The SQL instance then becomes a chain-of-custody log in case the validity of the data being migrated is questioned. The database tracking also means that failed or incomplete migrations can be corrected.
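The tracking scheme described above can be sketched in a few lines — a ledger that records each object’s ID on ingest, records the destination ID on write, and flags anything that never completed. This is a minimal sketch, not Procedo’s software; the table layout and function names are invented for illustration:

```python
# Hypothetical chain-of-custody ledger for an archive migration, loosely
# modeled on the PAMM description above. Names are illustrative only.
import sqlite3

def open_custody_log():
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE custody (
        src_id TEXT PRIMARY KEY,  -- object ID assigned by the old archive
        dst_id TEXT,              -- object ID assigned by the new archive
        status TEXT)""")
    return db

def record_ingest(db, src_id):
    db.execute("INSERT INTO custody VALUES (?, NULL, 'ingested')", (src_id,))

def record_write(db, src_id, dst_id):
    db.execute("UPDATE custody SET dst_id=?, status='written' WHERE src_id=?",
               (dst_id, src_id))

def incomplete_migrations(db):
    """Objects ingested but never confirmed on the destination —
    candidates for a corrective re-run."""
    return [row[0] for row in
            db.execute("SELECT src_id FROM custody WHERE dst_id IS NULL")]

db = open_custody_log()
record_ingest(db, "msg-001")
record_ingest(db, "msg-002")
record_write(db, "msg-001", "obj-9001")
# msg-002 was ingested but never written, so it needs a corrective pass.
```

The same table doubles as the chain-of-custody log: every object’s source and destination IDs are preserved in case the migration’s validity is later questioned.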

With PAMM 3.0, users can load-balance migrations across multiple migration servers, and destination archives for migration now include the cloud services LiveOffice and MessageOne. “We have closed a couple of deals with those services already this quarter,” Kvidera said.

Finally, PAMM 3.0 adds a new user interface with wizards to aid in migrations and new storage discovery tools that help users assess what they have in an archive before beginning the migration. “Customers often don’t have a clue what they have,” Kvidera said, describing the new PAMM feature as a kind of “mini SRM” useful for planning archive migration projects. Reporting and analysis can also be performed on that data, but today canned reports are limited to storage reporting and trending analysis on the archive. In the second quarter, he said, more reports will become available.

February 2, 2009  7:29 PM

EMC officials mum on Israeli corruption story

Beth Pariseau

EMC was prominently featured in an Israeli news story about a recent secret investigation into corruption in the bidding process for Israeli government contracts, but company officials declined to comment today.

According to Haaretz,

A secret seven-year investigation at the Defense Ministry has raised concerns that senior ministry officials used inside information to help certain American companies win more than $100 million in security-equipment tenders advertised in the United States.

Cisco, Juniper and Hewlett-Packard were also mentioned in the article, but EMC was named in the only specific example of a corrupt bidding process outlined by the piece:

The first deal that raised concerns related to a tender issued at the start of the decade for digital storage for the Israel Defense Forces. Three U.S. firms made bids: EMC, HP and Hitachi Data Systems. EMC won the tender. Haim Adar was in charge of the defense procurement office in New York at the time of the tender. Since his retirement from the ministry several years ago, he has served as external adviser to EMC and other firms who do business with the Defense Ministry.

“As early as the next day [after EMC had won the tender], I knew that our competitors had known everything about our price bid,” Yehuda Cohen, who at the time was in charge of procurement at HP, told Haaretz.

The article goes on to say:

Shortly after the deal with EMC, during Operation Defensive Shield in 2002, various problems were found with the system the company was providing. The technical problems made it difficult to analyze intelligence during the West Bank operation. It took 24 hours to correct the problems and restore the intelligence systems to working order.

Nonetheless, the IDF continued to work with EMC, and over the next few years the firm won several other contracts for data storage systems worth tens of millions of dollars.

According to the article, which was first brought to my attention today by Storage Monkeys, the probe that unearthed these alleged instances of corruption was shut down by the Israeli Defense Ministry in 2007, “citing insufficient evidence, after the ministry stalled the probe due to fears it would harm Israel-U.S. ties.”

January 30, 2009  1:59 PM

Storage Headlines for 01-29-09

Beth Pariseau

Here are some stories you may have missed this week:

Stories referenced:

  • EMC braces for IT spending declines
  • Editor’s note: The Symmetrix numbers in this story have been corrected since this podcast was posted; Symmetrix revenue was down 9% in Q4 year-over-year, but was up 2% for the year overall, not down 2% as was originally reported.

  • Focus on storage efficiency grows as budgets shrink
  • The Internet cries foul over Carbonite Amazon reviews
  • Brocade, Cisco expand data center platforms
  • Hifn offers NIC with compression and encryption

  • Last week’s headlines

    As always, you can find the latest storage news, trends and analysis at

    January 29, 2009  9:46 PM

    Analyst: e-Discovery to get worse in 2009

    Beth Pariseau

    IT pros may find themselves in a Catch-22 this year when it comes to e-Discovery and data management for compliance, according to a new report released this week by Forrester Research analyst Brian Hill. The economic downturn is likely to increase litigation while making it more difficult for IT organizations to keep up with e-Discovery requests and synchronize information management across different repositories.

    Hill predicts an increase in litigation and regulation due to the economic crisis because “To promote confidence and greater macroeconomic stability, we expect governing agencies to institute new regulations, and we anticipate litigation following job losses, broken contracts, and other economic hardships.”

    Meanwhile, one year after the new Federal Rules of Civil Procedure created a mandate for companies to systematically preserve electronic information, users were telling us that before they could evaluate specific products or services for archiving and litigation review of data, organizational structures within their companies had to realign to create new data management policies.

    Another year has passed and Hill’s report states “Effective alignment between the information management phase and other steps outlined in the Electronic Discovery Reference Model (EDRM) remains out of reach for most enterprises.”

    The good news is that companies will at least make steps toward that vision in 2009, Hill told Storage Soup. “There’s broad recognition that it needs to happen,” he said. “We’re starting to see new liaison roles being created, designated intersection points of IT with legal.” However, the two remain sharply different disciplines, with a historically separate reporting structure and few other common objectives. “The reality is it’s a long way out before [a major shift] happens.”

    In the meantime, Hill said Forrester advises clients to focus on two of the steps in the EDRM: information preservation and review. Administrators should strive to apply policies to as broad a range of content as possible rather than focusing on particular content types. Organizing data according to potential legal relevance rather than by application means users may start a litigation request with less information to wade through; it also cuts down on the capacity growth threatening to bust storage budgets this year.

    The second phase of EDRM Hill advised users to focus on for the time being is the review phase. “This is where the most spending is,” Hill said, recommending that particularly cash-strapped organizations look to the clouds for their content repositories. “Hosted review platforms can make some difference–something internal can require a lot of capital.”

    January 28, 2009  7:51 PM

    The Internet cries foul over Carbonite Amazon reviews

    Beth Pariseau

    A story from a New York Times blog by David Pogue has ignited the tinderbox that is the Internet, and the flames are being directed at online backup service Carbonite. The conflagration is over glowing reviews of the service on Amazon by insiders at the company who did not divulge that they worked for Carbonite.

    The reviews, written in December 2006, were first brought to the attention of the New York Times by a Carbonite customer identifying himself as Bruce Goldensteinberg, who has also posted screenshots of the original reviews on a Picasa blog.

    Carbonite has posted an official response to the issue on its website, claiming policies were not in place at the time but have since been updated. Carbonite CEO David Friend has also responded directly to Pogue’s blog with a claim that Carbonite’s uppermost management was not aware of the bogus postings until they were brought to public attention.

    This is where things really get interesting–Pogue also disputes that claim, referring to the comments section of another post about Carbonite where one of the first comments discusses the bogus reviews. David Friend posts comment #29 on that same thread, leading Pogue to argue that Carbonite was aware of the reviews at least since September and is only “cleaning up its act—now, after it’s been caught.”

    I followed up with Carbonite myself about this, and received this response from a spokesperson:

    In 2006 a few reviews were posted by employees who did not disclose their employment affiliation. That was a mistake and we apologize. This has long since ceased and will not happen again.

    Pogue’s post also can be seen as responding to this pre-emptively:

    It doesn’t matter to me that Carbonite’s fraudulent reviews are a couple of years old. These people are gaming the system, deceiving the public to enrich themselves.

    In Carbonite’s defense, I do think the level of recrimination they’re getting is a bit disproportionate to the problem of the reviews. Mr. Goldensteinberg became disgruntled when he experienced a crash, a difficult restore, and delayed customer support. That’s a more important core issue for an online backup company than marketing tricks that are not unique (Pogue’s blog points out a more recent similar incident involving Belkin).

    Slow restores may also simply be the way of online backup at this stage of its development, especially if users are looking to restore an entire system. EMC Corp.’s Mozy was hit with similar angst among its users last year over slow restores; it, too, was forced to revise its up-front disclosures to users about restore times.

    Bottom line: the Internet is all about word of mouth, but doing business oftentimes can’t be. Forget about Amazon reviews, and make sure you get an SLA from your online backup service provider in writing before you deploy the service.

    January 28, 2009  7:30 PM

    Overland puts Snap veteran Kelly in charge

    Dave Raffo

    It took him a while, but Eric Kelly is running the Snap Appliance business again.

    Kelly today became CEO of Overland Storage, which acquired the Snap business from Adaptec last June. He replaces Vern LoForti, who remains at Overland as president.

    Kelly has a long history with Snap. He put together a group of investors to buy Snap from Quantum for $10 million in 2002 and served as its CEO until selling it to Adaptec for $100.4 million in 2004. Kelly worked as GM of Adaptec’s storage business for two years and when Adaptec put the Snap business up for sale, there were rumblings in the storage industry that Kelly tried to put together a group to buy it back. That didn’t work out, but he joined Overland’s board in late 2007 and strongly recommended Overland buy the Snap business.

    When that deal was completed, LoForti said Kelly told the Overland board, “If you don’t want [Snap], I’ll buy it myself.”

    Kelly said today he’s looking forward to working again with other members of the Snap team who remain from his CEO days.

    “Some of the same team is here,” he said. “I think we do have a little advantage in terms of understanding what our customers are looking for, and how we position the product and grow the business.”

    So with Overland trying to go from a tape vendor to a storage systems company built around its new NAS products, Kelly seems like the right guy for the job. LoForti has done his best to revitalize Overland after moving from CFO to CEO of the troubled company in 2007, but he had to spend much of his time looking for financing to keep it afloat.


    Overland raised $9 million in financing last December to keep the doors open, but it will need to reverse its long history of losing money if it is to survive. The vendor last week said it would reduce its workforce by 17% by cutting 53 employees, and it slashed the salaries of executives and other salaried employees by 10%. That followed previous layoffs totaling 64 employees since last August.


    Kelly said Overland will seek more funding and will obviously strive to become profitable as soon as possible. Overland’s strategy will be to provide end-to-end data protection across disk and tape, which involves leveraging Snap’s software with Overland’s other platforms.


    LoForti said that because the Snap Guardian OS and the OS for Overland’s REO disk appliances are based on the same Linux kernel, it won’t be difficult to integrate the product lines.


    “When you sell an appliance, people look at the hardware, but our value is in the software,” Kelly said.

    January 27, 2009  5:59 PM

    WD launches 2 TB desktop drive

    Beth Pariseau

    Like their 1 TB brethren before them, 2 TB drives are showing up first at the desktop — beginning with the shipment this week of a 2 TB version of Western Digital’s Caviar Green 3.5-inch SATA drive.

    The drive follows two generations of high-capacity desktop drives in the Green line. The first was a four-platter 1 TB drive with a 16 MB cache, followed by a three-platter 1 TB drive with a 32 MB cache. The new 2 TB version uses four 500 GB platters and eight drive heads and also includes a 32 MB cache.

    In addition to cramming more data into the same drive footprint, the WD Green line, as its name indicates, targets power-conscious users with drive firmware features that manage energy efficiency. This includes “IntelliPower,” described by WD as “a finely tuned balance” of spin speed, transfer rates, and caching algorithms designed to keep the drive’s power draw just over 10 watts while reading and writing and at 10 watts or under while idle.

    Other 1 TB 3.5-inch drives, such as Seagate’s Barracuda, draw about 8 watts while idle. WD’s goal with this drive was to double the capacity without doubling the power draw, according to Caviar Green product manager Mojgan Pessian. Because energy efficiency and capacity are the focus of this product, she said, the drive’s exact RPM–somewhere between 5400 and the typical desktop 7200–is not being disclosed. “This is a low-RPM drive,” she said. “That’s how it’s [able to be] low-power.”

    According to IDC analyst John Rydning, these drives will probably find their way into what IDC calls personal storage devices, boxes from Iomega, Buffalo and others sold to consumers and prosumers for backup. “Last year, with one terabyte hard drives, you saw two terabyte solutions in the market,” he said. “This year, we should see those devices offer 4 TB in the same form factor.”

    Enterprise users are growing wary of ever-increasing drive sizes, as big drives can make failures more devastating, and double-drive failures more likely. For consumers, though, Rydning pointed out that drives like this in personal storage devices are most often used for less critical copies of data rather than for ‘primary’ storage.

    January 27, 2009  3:37 AM

    Welcome to Storage Soup’s new home on IT Knowledge Exchange

    Dave Raffo

    I’d like to take a moment to introduce you to some of our new blog features and also some of the features on IT Knowledge Exchange.

    Instead of a long list of categories, we now have a Tag Cloud. Click any topic in the Tag Cloud and you’ll see only posts on that topic. The Tag Cloud is dynamic, so the more a tag is used, the larger and darker it will appear. This helps you quickly see the most popular topics.

    You’ll also notice we’ve integrated more of our related editorial content in the right sidebar. If you’re on a post about a specific topic and wish to know more after reading the post, be sure to browse the links in the right sidebar.

    We always appreciate your sharing our content on social networking sites and we’ve increased the number of bookmarking tools from four to forty-three. If you enjoy a post, please be sure to share.

    Look near the top of the page and you’ll see a row of tabs. You can click the IT Blogs tab to find dozens of technology blogs, both user-generated and TechTarget editorial blogs. You can even request your own blog.

    There is also a tab labeled IT Answers. This is where you can ask your own IT question and have it seen by thousands of IT Knowledge Exchange members. So be sure to pose your storage question, browse thousands of storage answers or help out a fellow IT pro by answering a question.

    Thank you for stopping by and be sure to bookmark our new blog location and visit the storage section on IT Knowledge Exchange.

    January 26, 2009  9:27 PM

    Storage vendors put together ESX iSCSI cookbook

    Beth Pariseau

    Just came across a pretty interesting resource on EMC’er Chad Sakac’s Virtual Geek blog (first brought to my attention by Stephen Foskett). It’s a guide to ESX and iSCSI co-developed by, among others, Andy Banta of VMware, Vaughn Stewart of NetApp, Eric Schott of Dell/EqualLogic, Adam Carter of HP/Lefthand, and David Black of EMC.

    The post gets into nitty-gritty details and even includes what look like scanned-in napkin drawings to illustrate some of the complexities of performance management using ESX 3.x server with iSCSI. There are multiple links to further resources on everything from the fundamentals of link aggregation to the full iSCSI spec.

    But the bottom line for storage users is that “the ESX 3.x software initiator only supports a single iSCSI session with a single TCP connection for each iSCSI target…So, no matter what MPIO setup you have in ESX, it doesn’t matter how many paths show up in the storage multipathing GUI for multipathing to a single iSCSI Target, because there’s only one iSCSI initiator port.”

    There are ways around it–in short, the post states, “Use the ESX iSCSI software initiator. Use multiple iSCSI targets. Use MPIO at the ESX layer. Add Ethernet links and iSCSI targets to increase overall throughput. Set your expectation for no more than ~160MBps for a single iSCSI target.”
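The scaling rule quoted above boils down to simple arithmetic: because the ESX 3.x software initiator holds one TCP connection per iSCSI target, aggregate throughput grows with the number of targets, not with the number of MPIO paths to any one target. A back-of-the-envelope sketch (the ~160 MBps figure is the ceiling quoted in the post, not a spec):

```python
# Illustrative arithmetic only -- the per-target ceiling is the rough
# ~160 MBps figure quoted in the blog post, not an official VMware spec.
PER_TARGET_MBPS = 160

def aggregate_mbps(num_targets):
    """Throughput scales with iSCSI target count, not with extra MPIO
    paths to a single target (one TCP connection per target in ESX 3.x)."""
    return num_targets * PER_TARGET_MBPS

one_target = aggregate_mbps(1)    # extra paths to this target won't help
four_targets = aggregate_mbps(4)  # adding targets (and links) scales out
```

In other words, a LUN pinned to one target tops out around 160 MBps no matter how many paths the multipathing GUI shows, while spreading load across four targets can approach 640 MBps in aggregate.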

    There’s also a workaround for single LUNs needing more than 160 MBps, using an iSCSI initiator in the guest along with MPIO, though the post acknowledges, “It has a big downside…you need to manually configure the storage inside each guest, which doesn’t scale particularly well from a configuration standpoint – so for most customers [say] they stick with the ‘keep it simple’ method.”

    The best news out of this post for VMware and iSCSI users, though, is probably the pre-announcement that this behavior will be changing in future ESX releases.
