The Holy Grail for SSDs is SLC performance at the price of MLC, or, even better, at the price of hard disk drives. The CTO of SSD manufacturer STEC says it won’t happen, pointing out that every advance in SLC density delivers manifold benefits in MLC as well. Anyone who’s bought the newest iPhone or iPod knows older technology is almost always cheaper, so SLC or MLC at the price point of HDDs seems an even more far-fetched concept.
However, the CEO of Flexstar Technology, a manufacturer of hard drive, SSD and optical media testing equipment for storage vendors and OEMs, says SSDs do have one cost advantage over spinning media: a less complex, potentially more cost-effective quality testing process.
Currently, storage OEMs and manufacturers have to put spinning media through extensive QA tests to make sure the drives stand up to environmental factors such as rotational vibration, shock, ambient temperature and humidity, according to FlexStar president and CEO Tony Lavia. “It’s not enough to test the drives in a clinically clean environment; each vendor needs to know that the drive will work inside their unique cabinet environments.” This means each vendor has to run its own custom testing and qualification process for spinning media.
From Lavia’s perspective, SSDs, with fewer moving parts, will change that picture. “You don’t have to do custom testing,” he said. “Instead, OEMs can tell the drive manufacturers to do the testing and only send them the drives that pass.” It also opens the door, Lavia said, to third-party testing companies (like, say, FlexStar).
IDC analyst Jeff Janukowicz said Lavia’s reasoning makes sense. “There are close to 100 SSD companies right now, and they don’t all necessarily have the resources to go out and buy their own testing equipment,” he said. “If those companies don’t require extra capital for testing, they could offer cheaper solutions [to end users].”
But at least one emerging SSD vendor disagrees with the “no custom testing” idea. “Not all NAND is created equal,” said Pliant Technology’s vice president of marketing Greg Goelz. “We test rigorously using proprietary tools.”
While there was a “flash” of NetApp’s solid state strategy evident in its product launch this week, a big part of it was missing from all the talk of Data Ontap 8 and the vendor’s cloud strategy: support for solid state drives (SSDs) in NetApp disk arrays. NetApp chief marketing officer Jay Kidd says that’s coming by the end of 2009.
But SSDs in the array are part three of NetApp’s three-part flash strategy. The first part was the DRAM-based Performance Acceleration Module (PAM) cards that shipped in February. Now NetApp is rolling out PAM II, which delivers flash memory as cache. The difference between the two PAM cards is that DRAM has roughly 10 times faster access times than flash but costs about 10 times as much. The new flash-based cards provide more capacity: they come in 256 GB and 512 GB configurations, and multiple cards can be combined for up to 4 TB in one system.
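As a sanity check on those capacity figures (the card count below is an inference from the stated numbers, not a published NetApp spec), a minimal sketch:

```python
CARD_SIZES_GB = (256, 512)  # PAM II configurations cited above

# Reaching the stated 4 TB per-system maximum with 512 GB cards
# implies eight cards (inferred; actual slot limits may differ,
# as may the vendor's definition of a TB).
cards_needed = (4 * 1024) // max(CARD_SIZES_GB)
print(cards_needed)  # 8
```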
“Customers found adding more DRAM cache memory accelerated performance of their workloads, but we had requests for larger capacities,” Kidd said.
Step three will be support for SSDs in arrays, which every other major storage vendor now offers. “We’ll have certification of SSD drives in disk shelves later this year,” Kidd said. “You’ll be able to add them to existing systems, probably with additional disk shelves that support SSD.”
Kidd says NetApp will use SSD drives with native SAS interfaces.
There’s a lot of 8 Gbps Fibre Channel going around these days.
LSI Corp. joined EMC Corp. today in announcing new support for the next leap in Fibre Channel throughput with the Engenio 4900 system it has launched to replace its 3994 midrange disk array. Also new with this system is support for 1 Gbps iSCSI host connectivity and support for Seagate’s full disk encryption (FDE).
The 4900 can hold up to 112 drives and 2 GB or 4 GB of cache. Add-on software features include Encryption, Partitioning, Snapshot, Volume Copy and Remote Volume Mirroring. LSI’s midrange products are OEMed by IBM Corp. for its midrange DS3000 and DS4000 SAN product lines.
Like EMC, LSI isn’t adding native Fibre Channel over Ethernet (FCoE) support to its arrays yet. “We’re not quite ready to talk about our plans there yet,” said Steve Gardner, director of product marketing for LSI. “FCoE and 10 Gigabit Ethernet (GbE) is a battle that will heat up in the second half of next year.”
This outlook is supported by data collected by market research firm TheInfoPro, whose storage market research director Robert Stevenson told SearchStorage that enterprise users are already beginning the transition to 8 Gbps FC, with FCoE still on the back burner.
Reports from storage industry blogs and an EMC Corp. customer event suggest EMC is getting ready to launch a new cloud computing service based on its Atmos onLine storage as a service offering.
Jeff Darcy, a software engineer and author of a blog dubbed Canned Platypus, reported after a meetup event in the Boston area August 14 that an EMC presentation on Atmos had divulged plans to offer a compute service like Amazon’s Elastic Compute Cloud (EC2):
They’ll be rolling out a compute service “by the end of the year” to support in-data-center access to data. It looks like it’ll be roughly comparable to EC2 plus S3/EBS [Elastic Block Storage]; there was no mention of supporting other features like SDB [Amazon Simple DataBase]/SQS [Simple Queue Service], and of course EMC pricing is likely to keep people on Amazon.
Unlike Darcy, blogger Stephen Foskett believes this service will be “a real EC2 challenger,” pointing out some differentiators (bold in the original):
- They (probably) use VMware ESX, which is more common and familiar than Xen. Atmos Compute Service might even be able to handle existing ESX instances migrated in from private servers!
- Atmos onLine storage supports NFS in addition to the Atmos API, unlike Amazon’s own S3 which is API-only.
- They offer VLANs for enhanced network security, which Amazon lacks.
- They seem to offer per-instance internal persistent IP addresses, another area of frustration for EC2 users.
This seems like a likely candidate to be announced at VMworld in less than two weeks because it ties in with VMware’s persistent cloud messaging. Despite VMware’s marketing efforts, some public cloud service providers, such as Rackspace, say they find the open-source Xen hypervisor more customizable. It would make sense for VMware’s parent company, EMC, to throw its weight behind VMware as a public cloud offering at this year’s show.
After 15 years as CEO – practically an eternity in the storage business – NetApp’s Dan Warmenhoven stepped down today and named Tom Georgens his successor.
The move was anticipated since NetApp promoted Georgens to COO and president in February 2008, yet Warmenhoven had given no timeframe for his retirement. He will stay on as executive chairman “to help build and expand relationships with certain strategic partners around the world, including service providers and key technology partners,” according to NetApp’s news release.
“I am honored to follow in Dan’s footsteps,” Georgens said in the release. “In just 15 years, NetApp has grown from a $14 million startup with 45 employees into a recognized market leader in networked storage and data management with $3.4 billion in annual revenues and approximately 8,000 employees around the world. Dan also helped to cultivate a unique corporate culture, which has resulted in NetApp consistently being recognized as a great place to work.”
Warmenhoven’s final months were a bit rocky, as NetApp was outbid by chief rival EMC for data deduplication backup specialist Data Domain. NetApp agreed in May to buy Data Domain for $1.5 billion, but EMC eventually acquired it for $2.1 billion. Sales did rally at the end of Warmenhoven’s tenure, though: NetApp today reported better-than-expected revenue of $838 million for last quarter, despite rough financial times.
Georgens joined NetApp as head of its Enterprise Storage Systems group in Oct. 2005. He was previously CEO of LSI’s Engenio storage systems division for two years and spent 11 years at EMC.
Yesterday, we posted a story about Dell’Oro Group’s prediction that Fibre Channel over Ethernet (FCoE) sales growth would outpace that of FC by 2011. We got lots of great feedback while preparing that news item, not all of which could fit in the article, so here are some raw “deleted scenes”: additional points of view on that piece from analysts, users and financial experts.
Jeff Boles, director of validation services and senior analyst for the Taneja Group —
Right now, we have a scattering of fabrics and technologies, and while the promises of FCoE are interesting (if not compelling) for the day to day practitioner, transitioning to this new fabric is a bit more complex than filling your shopping cart on Amazon.com.
What I fully expect to happen is a multi-year integration of converged Ethernet as a broad fabric that joins together the multiple fabric domains in the enterprise data center. Those separate domains – FC, InfiniBand and even traditional Ethernet – may rapidly become converged in a 10 Gb core, but they will likely keep growing at a steady pace, or at least be maintained with regular equipment replacements. Once a converged core is in place (over years), we’ll likely see new equipment deployments taking place on the converged fabric where justified (high I/O demands, cable simplification in large infrastructures).
But a full-tilt shift to FCoE as the new fabric is likely out beyond the three-year mark for aggressive businesses, and well beyond the five-year mark for less aggressive ones. The problem, plain and simple, is that many, many businesses are well served by their current fabrics and skill sets, and the transition to converged Ethernet and FCoE will only see near-term adoption where it is fully justified. Many times, existing fabrics and skill sets will outweigh the battle over port prices and power utilization. While CEE/FCoE will change the computing landscape, my expectation is that this will happen over the long term.
Andrew Reichman, senior analyst for Forrester Research —
I’m seeing vendors like Brocade, Cisco, QLogic and NetApp move toward greater support for FCoE. The benefits often include reduced cabling complexity and a longer-term simplification of SAN and LAN networking through network convergence. That said, it is likely to take a long time to see the benefits, and it will require a fairly significant investment in new equipment and re-architecting. I do believe that storage traffic will be on Ethernet at some point; the question is how soon. The FCoE standard has been slow to emerge, which has delayed adoption, but early adopters seem to be getting started now. 2011 seems a bit ambitious for broad adoption beyond FC, but I think it might not be too far off. You do have to remember that storage buyers are extremely conservative and like to see very mature products and architectures before making a big change, but once the momentum gets going, it’s likely to grow rapidly.
Mark Kelleher, Managing Director, Equity Research, Brigantine Advisors —
Dell’Oro isn’t really going out on a limb with its prediction that FCoE will supplant Fibre Channel by 2011; that’s a common assumption in the storage industry. One converged fabric for all enterprise communications makes a lot of sense. The Fibre Channel switch and HBA people are moving in that direction, the Ethernet providers are moving in that direction, and there’s really no reason it would not happen. The key difference between FC and Ethernet is that Ethernet can lose packets and take its time to recover, while FC guarantees delivery and does not drop packets. To port the upper layers of the FC stack onto Ethernet, the Ethernet protocol itself has to be augmented to allow “lossless” transmission of data under certain circumstances. That is all incorporated in FCoE, and the technology is just now reaching the market. Deployment is starting now through next year, with widespread adoption by 2011.
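Kelleher’s point about porting FC frames onto Ethernet can be sketched schematically. The toy encapsulation below is heavily simplified (the real FC-BB-5 mapping adds an FCoE header with version bits plus SOF/EOF delimiters and padding), but the EtherType value 0x8906 is the actual IEEE assignment for FCoE:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fc_frame(dst_mac, src_mac, fc_frame):
    """Wrap a raw Fibre Channel frame in an Ethernet frame.
    Simplified: the real FC-BB-5 mapping also inserts an FCoE
    header (version bits, SOF/EOF delimiters) around the FC frame."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Placeholder MACs and payload, purely for illustration.
frame = encapsulate_fc_frame(b"\xff" * 6, b"\x02" * 6, b"FC-FRAME-BYTES")
ethertype = struct.unpack("!H", frame[12:14])[0]
print(hex(ethertype))  # 0x8906
```

The EtherType field is what lets a converged switch steer FCoE traffic into the lossless priority class while ordinary IP traffic keeps its usual best-effort handling.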
Keep an eye on the core FC vendors: Brocade, Emulex and QLogic. Brocade sells switches (though it is moving into the host bus adapter market), while Emulex and QLogic are known for selling the input/output offload engines that connect servers to FC (host bus adapters, or HBAs). To connect to Ethernet, servers use network interface cards, or NICs. With the new FCoE protocol, those two functions are combined into a “converged network adapter,” or CNA. Sell-through of CNAs will tell us how adoption of FCoE is progressing.
Reinoud Reynders, IT manager at University Hospitals Leuven in Belgium–
I believe very strongly in FCoE. Cisco is pushing this very hard and indeed, they have a strong story. Just one plug for all your I/O (network and SAN) on 10 Gb, 1 switch that separates client access (IP network) [from the] storage network: it’s a great plan.
I will replace my FC-SAN switches [around] Q2 2011. Personally, I believe 2011 is a little bit too early for the [broader industry] crossover, but maybe 2012.
Feel free to add your own perspective in our comments section below!
Remember the 2007 stock option backdating trial that ended in the conviction of former Brocade CEO Greg Reyes and cost Brocade hundreds of millions of dollars in legal fees? Well, get ready for the rematch.
The 9th U.S. Circuit Court of Appeals on Tuesday ordered a new trial for Reyes, citing prosecutorial misconduct. Reyes was convicted of fraud and other counts and was sentenced to 21 months in prison and fined $15 million in January 2008. The appeals court ruling said a prosecutor falsely claimed the Brocade finance department was unaware that Reyes was granting backdated stock options to lure employees to the company.
“We reverse Reyes’ conviction because of prosecutorial misconduct in making a false assertion of material fact to the jury in closing argument,” the three-judge panel said in its decision.
The appeals court claimed prosecutor Timothy Crudo knew employees of Brocade’s finance department told the FBI they were aware of the backdating scheme, yet he told the jury the finance department did not know about it.
“We do not conclude the prosecutor’s conduct was so egregious as to require dismissal of the prosecution,” the appeals court wrote. “Reyes’ case must be remanded for a new trial.”
There is no word yet on when a new trial will take place.
The appeals court upheld the conviction of former Brocade VP of human resources Stephanie Jensen but ordered that she be given a new sentence for falsifying corporate records. Jensen was sentenced to four months in prison and a $1.25 million fine. That sentence included an obstruction of justice charge, but the appeals court ruled that was her counsel’s fault and she should not be penalized for obstruction.
Reyes and Jensen have been free pending their appeals.
Reyes left Brocade in 2005 after the first hint of the backdating charges was made public. Brocade paid $160 million to settle shareholder lawsuits and $7 million to settle an SEC suit.
Recommind and Clearwell Systems expanded their e-discovery and regulatory compliance records management product lines this week with support for more areas of the Electronic Discovery Reference Model (EDRM).
Recommind, which already has Axcelerate eDiscovery and Insite Legal Hold products on the market for preservation, collection, processing, culling, review and production of data, added support for information management, collection and classification with the new MindServer Categorization software module.
The new search, indexing and classification module is based on the same underlying search and index engine as the rest of the Recommind product line. Recommind uses an algorithm devised at MIT that can be “taught” to derive meaning and relevance from content, and perform “concept searches” that don’t rely on keyword matches.
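Recommind’s engine is proprietary, so the sketch below is not its algorithm; it is a deliberately simple co-occurrence expansion that illustrates the general idea of matching documents on related terms rather than exact keywords (the corpus and all names are made up):

```python
from collections import defaultdict

# Toy corpus: each document is a list of lowercase terms.
docs = {
    "d1": "merger agreement signed between the two firms".split(),
    "d2": "acquisition deal signed between the companies".split(),
    "d3": "quarterly earnings report released today".split(),
}

# Terms that share a document are treated as "related concepts".
related = defaultdict(set)
for terms in docs.values():
    for t in terms:
        related[t].update(terms)

def concept_search(query_terms):
    """Expand the query with co-occurring terms, then rank documents
    by overlap with the expanded term set."""
    expanded = set(query_terms)
    for t in query_terms:
        expanded |= related.get(t, set())
    scores = {doc_id: len(expanded & set(terms))
              for doc_id, terms in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# "merger" never appears in d2, but co-occurrence links it to
# "signed"/"between", so d2 still outranks the earnings report.
print(concept_search(["merger"]))  # ['d1', 'd2', 'd3']
```

A production concept-search engine learns these term relationships statistically from large training corpora rather than from raw co-occurrence, but the ranking idea is the same.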
Recommind VP of marketing Craig Carpenter says there’s little difference in the underlying technology, but each of Recommind’s modules is used for a different purpose. While Axcelerate eDiscovery is generally used by law firms, and Legal Hold by internal counsel, MindServer is mostly targeted at corporate records management and enterprise end users for “search and index for knowledge management” rather than strictly litigation support. The product began shipping this week. Pricing is offered under a per-seat or per-server licensing model and varies with the size of the deployment, but Carpenter said enterprise deals are typically $50,000 to $100,000.
Clearwell Systems added new modules to its e-discovery framework for pre-processing, review and production. “Clearwell had been in early-stage processing, but now they can perform full review, including redaction and auto-redaction in preparation for formal production,” said Brian Babineau, senior analyst with Milford, Mass.-based Enterprise Strategy Group (ESG).
Babineau says the product launches represent “a natural progression for both companies,” as e-discovery software makers across the board look to broaden their reach across the full EDRM spectrum. Babineau said the number of small companies looking to create “one-stop-shops” for compliance and litigation support is an indicator of how strong the market is right now.
“Not every vendor can survive being all things to all people,” he said. “At least right now, there’s enough money in the market with things like the [Bernie] Madoff investigation and new regulations [following last year’s financial crisis] to keep all of these players alive and fund their R&D efforts.”
Amazon now supports data export from its S3 storage cloud onto customers’ removable hard drives.
Amazon first opened up this “sneakernet” for import/upload to the Amazon cloud earlier this spring, allowing customers with large data sets to send the data to Amazon on removable media rather than trying to migrate the data over an Internet connection. This most recent announcement means users can extract data from the cloud using this method, too.
At the time of the first announcement, Amazon bloggers referenced the quote that immediately jumped to my mind reading about the export feature: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”
Amazon is far from the first or only cloud storage vendor to use seeding devices to get large data sets into the cloud rather than trying to squish terabytes through the average broadband Internet connection. Indeed, this network bottleneck is considered one of the biggest barriers to cloud computing adoption to date, and cloud backup vendors including EMC’s Mozy already send out seeding devices to upload or restore terabytes of data.
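The “station wagon” quip is easy to quantify. Using hypothetical numbers (a 1 TB drive, 24 hours of door-to-door shipping, a sustained 10 Mbps upload link), shipping the drive delivers far more effective bandwidth than the wire:

```python
def effective_bandwidth_mbps(capacity_gb, transit_hours):
    """Effective throughput of physically shipping a drive:
    capacity divided by door-to-door transit time."""
    bits = capacity_gb * 8 * 1e9
    seconds = transit_hours * 3600
    return bits / seconds / 1e6

# Hypothetical numbers: 1 TB (1,000 GB) shipped overnight (24 h)
# versus uploading the same data over a 10 Mbps link.
ship = effective_bandwidth_mbps(1000, 24)
upload_hours = (1000 * 8 * 1e9) / (10 * 1e6) / 3600
print(round(ship))          # ~93 Mbps effective by courier
print(round(upload_hours))  # ~222 hours to push it over the wire
```

The bigger the data set, the more lopsided this comparison gets, since drive capacity has historically grown faster than last-mile bandwidth.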
Companies such as NetEx are also offering software that promises to cut down on bandwidth between service providers and consumers downloading large, say, video files from centralized data centers. Others, including Cleversafe, are proposing to split data into chunks and among multiple sites to cut down on bandwidth and preserve data security.
So far, however, for the largest data sets — as this Amazon announcement demonstrates — nobody’s quite beaten the highway.