EMC Corp. rolled out a 4 TB home NAS box for the Iomega StorCenter line today, and an EMC official told Storage Soup about other software and hardware updates to come for EMC’s SMB, SOHO and consumer products later this year.
The Iomega StorCenter Pro ix4-100 is an upmarket successor to the two-drive ix2 consumer product. It also has a predecessor within Iomega’s own lines: the Iomega 150d NAS, launched before EMC acquired Iomega last year. The ix4 pairs the 150d’s hardware with EMC’s LifeLine storage software, adding enterprise-level features such as security and built-in data backup. EMC’s Mozy online backup service and Retrospect local backup software are also included, along with automatically updated backup folders on the device.
Marc Tanguay, GM of the StorCenter product line, said the four-bay ix4 is aimed at small businesses of around 25 employees rather than the home NAS market. It will come in 2 TB and 4 TB capacities, the latter double the maximum capacity of the ix2. As with the ix2, the LifeLine software offers features like Windows Active Directory support, email and SNMP system status notifications, print server capabilities and Bluetooth compatibility. The ix4 will support four printers (the ix2 supports two), and an Axis security camera can be plugged in to stream directly to the box without a PC in the middle.
Tanguay said customers who purchase the ix4 now will get a free upgrade at the end of the first quarter to the next version of LifeLine. That version will include remote administrative access, free for the first year and around $9.95 per year after that, according to Tanguay. Remote access will let customers upload and download files to the box from a remote location, and an admin will be able to manage it from a Web browser. Also coming in the next version of LifeLine is native Apple Filing Protocol (AFP) support. StorCenter already supports Macs, but “it’s easier to share files and networks with native AFP support,” Tanguay said.
The next version will also offer folder quotas and automated torrent support. Consumers are the most frequent users of torrents to share media files, but Tanguay said small businesses are increasingly using them to exchange data as well. The new version of LifeLine will offer the ability to run the torrent downloader on the shared storage box without requiring a separate computer or process. “Today if a worker is moving from office to home, they lose the continuity of the download, and they have to use their PC’s CPU power for large files,” Tanguay said. Finally, the next version of LifeLine will also make the StorCenter a full media server, including native support for iTunes.
The StorCenter hardware will also be upgraded in the second quarter when EMC will support 2 TB SATA drives from Seagate and Western Digital, and release a new 8 TB StorCenter model.
The 2 TB StorCenter Pro ix4-100 NAS Server is now available for $799.95. The 4 TB model will be available later this month for $1,299.95.
SSD maker Fusion-io announced today that Apple founder Steve Wozniak has joined it as Chief Scientist. According to a Fusion-io press release, “Wozniak will act as a key technical advisor to the Fusion-io research and development group. He will also work closely with the executive team of Fusion-io in formulating a company strategy that will accelerate the expansion of major global accounts.”
Wozniak is a big name for an emerging company like Fusion-io to land. Sometimes known as Apple’s “other Steve,” Wozniak is credited with significant engineering contributions to the personal computer revolution of the 1970s.
Interestingly, Wozniak is not the only former Apple exec to find his way to the storage industry, and specifically to solid-state storage. Michael Cornwell, now heading up NAND business development for Sun Microsystems, was previously manager of storage engineering for Apple’s iPod division.
Fusion-io came out of stealth last March with a PCIe flash card designed to give off-the-shelf servers SAN-like performance. Fusion-io calls its product the ioDrive, a NAND-based storage device that comes in 80 Gbyte, 160 Gbyte and 320 Gbyte configurations. The ioDrive fits in a standard PCI Express slot, appears to the operating system as traditional storage and can be enabled as virtual swap space. IBM announced last fall that it will partner with Fusion-io to add ioDrives to its servers, which may include the servers that run its SAN Volume Controller (SVC) network-based storage virtualization product.
The economy took a bite out of CommVault last quarter, as the backup software vendor recorded lower sales than expected.
CommVault’s $60.1 million in revenue was below its guidance of $63 million to $65 million, and actually dropped 5% from the previous quarter — unusual because the fourth quarter of the year is when the most money is spent on storage. CommVault also reduced its forecast for this quarter to approximately $63 million to $67 million, down from previous guidance of $69 million to $72 million.
“We are certainly not happy with these results,” CommVault CEO Bob Hammer said on his company’s earnings call Wednesday night.
Hammer said CommVault’s win rate against its competitors hasn’t dropped, but larger deals are taking longer to get approved because of budget constraints. The company has increased its sales force to try to get things moving, and is counting on a boost from Simpana 8, released last week. Simpana 8 adds block-level data deduplication for data on disk and tape, and Hammer said CommVault released it ahead of schedule.
Still, CommVault’s forecast shows it might take a while for Simpana sales to take off.
“We’re realistic about the state of the global economy,” Hammer said. “This uncertainty is why we have revised our guidance down. We’re dealing with an environment none of us has seen before.”
Riverbed Technology won’t be deduplicating primary data this year as planned.
The scheduled launch of its Atlas device has been delayed from this year into 2010, Riverbed CEO Jerry Kennelly disclosed today on the WAN optimization vendor’s earnings conference call.
Riverbed did a round of pre-briefings for Atlas last September when it began alpha testing, forecasting its release this year. But Kennelly said today that Atlas will require adjustments before it is ready to bring to market. “We’ve had three months of alpha testing,” he said. “We’ve seen that our original approach can be better addressed to meet customer needs, so we will be delaying shipments.”
Riverbed’s SVP of marketing Eric Wolford said the changes will make Atlas easier to deploy and manage. “We’re disappointed in the delay to market, but we learned a lot from customers and found changes would be necessary,” he said. Wolford said the adjustments may mean Atlas can be used without Riverbed’s flagship Steelhead WAN appliances.
Kennelly said the original plan was to launch what he today called a “bare-bones release” this year followed by a more robust product in late 2010. Now he said there will be one rollout instead of two, although he would not say when in 2010 that would happen.
“Although we hate to announce a delay, we think it’s smarter for us and we’ll get more money earlier,” he said. “We won’t be trying to shoot bullets in the economic market in 2009.”
The rest of the news was good on Riverbed’s call. Its fourth-quarter revenue of $92.2 million increased 21% over last year despite the down economy, and executives forecast a 14% to 18% year-over-year increase in revenue ($83 million to $86 million) this quarter. Riverbed also said today EMC qualified Steelhead appliances to work with EMC’s SRDF/Asynchronous replication software for its Symmetrix enterprise storage systems.
Wolford said the qualification was “a prerequisite” to getting Steelhead boxes installed by shops replicating data between Symmetrix systems in multiple sites. He said there were about 30 to 40 potential customers holding up deals pending qualification.
NetApp founder and current “Chief Philosophy Officer” Dave Hitz has been on a media tour this month following the publication of his book about NetApp’s rise from startup to billion-dollar company, How to Castrate a Bull.
And yes, your first question–about that title and what it has to do with storage technology–was my first question as well. Along the way, we also discuss NetApp’s rebranding last year, its positioning for the low-end market, the time-honored SAN/NAS debate, and storage trends for the future.
NetApp blogger and chief technical architect Val Bercovici leaked the news yesterday that NetApp’s V-Series storage gateways can now front Texas Memory Systems’ RamSan-500 solid-state storage arrays.
This is the follow-on to NetApp’s announcement last month that it planned to offer Flash-as-disk to go along with its Flash-as-Cache and DRAM-based Performance Acceleration Module (PAM).
A common issue with deploying solid-state drives, analysts have said since EMC first announced support for STEC Inc. SSDs in Symmetrix last year, is integrating them with storage management software tools. Until recently, provisioning SSDs could be like provisioning hard disks before storage virtualization: complex, slow and fairly rigid.
According to Bercovici, while the RamSan acts as the high IOPS storage behind the V-Series, the V-Series gives it storage management features through NetApp’s WAFL operating system:
WAFL’s log-structured architecture implements native load-balancing of write operations via write-aggregation to solid state NVRAM. This includes an innovative data layout engine which enables WAFL to “write anywhere” in order to optimize the placement of data across the appropriate media. For flash, that means native built-in wear-leveling optimized to spread writes over as many flash cells as possible in parallel, with minimum wear to each individual flash cell.
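The “write anywhere” idea Bercovici describes can be sketched in miniature: rather than overwriting a block in place, the layout engine redirects each write to whichever free cell has seen the fewest writes, which spreads wear across the media. The sketch below is a toy illustration with hypothetical names; real WAFL and flash translation layers are vastly more sophisticated.

```python
# Toy sketch of a "write anywhere" layout engine with wear-leveling.
# Hypothetical names and a drastic simplification of what WAFL actually does.

class WriteAnywhereStore:
    def __init__(self, num_cells):
        self.wear = [0] * num_cells        # write count per flash cell
        self.block_map = {}                # logical block -> physical cell
        self.free = set(range(num_cells))  # cells available for new writes

    def write(self, logical_block, data):
        # Never overwrite in place: pick the least-worn free cell instead,
        # so writes are spread over as many cells as possible.
        cell = min(self.free, key=lambda c: self.wear[c])
        self.free.remove(cell)
        old = self.block_map.get(logical_block)
        if old is not None:
            self.free.add(old)             # the stale copy becomes reclaimable
        self.block_map[logical_block] = cell
        self.wear[cell] += 1
        return cell
```

Rewriting the same logical block twice lands on two different physical cells, with the first cell returned to the free pool, which is the property that keeps any single cell from wearing out first.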
According to NetApp chief marketing officer Jay Kidd:
[The V-Series and RamSan] effectively [create] the industry’s only Enterprise Flash storage system that supports thin provisioning, fast snapshots, remote mirroring, and data deduplication
So far, though, this approach to Flash-as-disk isn’t really flying with storage admins. “This would be the most expensive way of doing SSD,” Tom Becchetti, storage admin for a manufacturing company and NetApp customer, wrote to SearchStorage.com in an email today. “What I would like to see is just how EMC implemented their SSD. They have SSD that is physically and logically the same form factor as the hard drive. It would give you the most flexibility, and as more SSD vendors show up on the scene, the cost will dramatically fall.”
Denizens of the storage blogosphere were even more outspoken. “Is that it??” was the title of a post on U.K. storage end user Martin Glassborow’s blog, Storagebod. “I expected more, I expected something which was going to force EMC to raise the bar on their SSD implementation.”
We reported on an archive migration software startup, Procedo, late last year while it was still in the early stages of delivering product (usually attached to services). Today, the company came out with its first generally available software offering for migrating archive data between repositories while maintaining chain of custody. This GA offering also comes with some more features folded in, including storage resource discovery and reporting.
According to founder and CEO Joe Kvidera, the Procedo Archive Migration Manager (PAMM) Suite 3.0 bundles what had been separate pieces of software and migration tools into one product whose features are unlocked with license keys. Previously, during services engagements, Procedo staffers (often brought in by a bigger company like Symantec Corp.) brought the appropriate sets of tools to conduct each migration.
Pricing for those license keys depends on the applications and capacity to be migrated, according to Kvidera, from $5,000 per TB for a simple file-system migration to as much as $45,000 per TB for migrations involving complex applications with proprietary APIs, such as EMC Corp.’s Centera. Kvidera said the average price would be somewhere in the range of $25,000 per TB.
PAMM is made up of a cluster of at least three servers: one for ingesting data from the old archive, another for writing to the new archive, and a SQL server that tracks each object according to the ID assigned to it at either end. The SQL database uses snapshots temporarily stored on storage area network (SAN) storage to validate that object IDs match on both ends while the data is converted and migrated. The SQL instance then becomes a chain-of-custody log in case the validity of the migrated data is questioned. The database tracking also means that failed or incomplete migrations can be corrected.
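The tracking scheme described above can be sketched roughly as follows: fingerprint each object before the move, record the source and destination IDs together, and keep a list of failures so an incomplete migration can be re-driven. This is a hedged simplification with hypothetical names (`migrate_with_custody_log`, `write_to_destination`); Procedo’s actual product uses a SQL database and SAN snapshots rather than in-memory lists.

```python
import hashlib

def migrate_with_custody_log(source_objects, write_to_destination):
    """Toy sketch: migrate objects while building a chain-of-custody log.

    source_objects: dict mapping source object ID -> bytes.
    write_to_destination: callable(data) -> destination object ID.
    Returns (custody_log, failures) so failed migrations can be retried.
    """
    custody_log, failures = [], []
    for src_id, data in source_objects.items():
        digest = hashlib.sha256(data).hexdigest()  # fingerprint before the move
        try:
            dst_id = write_to_destination(data)
        except OSError:
            failures.append(src_id)                # incomplete; retry later
            continue
        # Record both IDs plus the content hash so validity can be audited.
        custody_log.append({"source": src_id, "dest": dst_id, "sha256": digest})
    return custody_log, failures
```

The per-object record is what makes the log usable as evidence of custody: anyone questioning the migration can re-hash the destination object and compare it against the recorded fingerprint.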
With PAMM 3.0, users can load balance migrations across multiple migration servers, and destination archives now include the cloud services LiveOffice and MessageOne. “We have closed a couple of deals with those services already this quarter,” Kvidera said.
Finally, PAMM 3.0 adds a new user interface with wizards to aid in migrations and new storage discovery tools that help users assess what they have in an archive before beginning the migration. “Customers often don’t have a clue what they have,” Kvidera said, describing the new PAMM feature as a kind of “mini SRM” useful for planning archive migration projects. Reporting and analysis can also be performed on that data, but today canned reports are limited to storage reporting and trending analysis on the archive. In the second quarter, he said, more reports will become available.
EMC was prominently featured in a story from the Israeli news site Haaretz.com about a recent secret investigation into corruption in the bidding process for Israeli government contracts, but company officials declined comment today.
According to Haaretz,
A secret seven-year investigation at the Defense Ministry has raised concerns that senior ministry officials used inside information to help certain American companies win more than $100 million in security-equipment tenders advertised in the United States.
Cisco, Juniper and Hewlett-Packard were also mentioned in the article, but EMC was named in the only specific example of a corrupt bidding process outlined by the piece:
The first deal that raised concerns related to a tender issued at the start of the decade for digital storage for the Israel Defense Forces. Three U.S. firms made bids: EMC, HP and Hitachi Data Systems. EMC won the tender. Haim Adar was in charge of the defense procurement office in New York at the time of the tender. Since his retirement from the ministry several years ago, he has served as external adviser to EMC and other firms who do business with the Defense Ministry.
“As early as the next day [after EMC had won the tender], I knew that our competitors had known everything about our price bid,” Yehuda Cohen, who at the time was in charge of procurement at HP, told Haaretz.
The article goes on to say:
Shortly after the deal with EMC, during Operation Defensive Shield in 2002, various problems were found with the system the company was providing. The technical problems made it difficult to analyze intelligence during the West Bank operation. It took 24 hours to correct the problems and restore the intelligence systems to working order.
Nonetheless, the IDF continued to work with EMC, and over the next few years the firm won several other contracts for data storage systems worth tens of millions of dollars.
According to the article, which was first brought to my attention today by Storage Monkeys, the probe that unearthed these alleged instances of corruption was shut down by the Israeli Defense Ministry in 2007, “citing insufficient evidence, after the ministry stalled the probe due to fears it would harm Israel-U.S. ties.”
Here are some stories you may have missed this week:
Editor’s note: The Symmetrix numbers in this story have been corrected since this podcast was posted; Symmetrix revenue was down 9% in Q4 year-over-year, but was up 2% for the year overall, not down 2% as was originally reported.
As always, you can find the latest storage news, trends and analysis at http://searchstorage.com/news.
IT pros may find themselves in a Catch-22 this year when it comes to e-Discovery and data management for compliance, according to a new report released this week by Forrester Research analyst Brian Hill. The economic downturn is likely to increase litigation while making it more difficult for IT organizations to keep up with e-Discovery requests and synchronize information management across different repositories.
Hill predicts an increase in litigation and regulation stemming from the economic crisis: “To promote confidence and greater macroeconomic stability, we expect governing agencies to institute new regulations, and we anticipate litigation following job losses, broken contracts, and other economic hardships.”
Meanwhile, one year after new amendments to the Federal Rules of Civil Procedure created a mandate for companies to systematically preserve electronic information, users were telling SearchStorage.com that before they could evaluate specific products or services for archiving and litigation review of data, organizational structures within their companies had to realign to create new data management policies.
Another year has passed and Hill’s report states “Effective alignment between the information management phase and other steps outlined in the Electronic Discovery Reference Model (EDRM) remains out of reach for most enterprises.”
The good news is that companies will at least make steps toward that vision in 2009, Hill told Storage Soup. “There’s broad recognition that it needs to happen,” he said. “We’re starting to see new liaison roles being created, designated intersection points of IT with legal.” However, the two remain sharply different disciplines, with a historically separate reporting structure and few other common objectives. “The reality is it’s a long way out before [a major shift] happens.”
In the meantime, Hill said Forrester advises clients to focus on two of the steps in the EDRM: information preservation and review. Administrators should strive to apply policies to as broad a range of content as possible rather than focusing on particular content types. Organizing data by potential legal relevance rather than by application means users may start with less information to wade through during a litigation request, and it can also cut down on the capacity growth threatening to bust storage budgets this year.
The second phase of the EDRM Hill advised users to focus on for the time being is review. “This is where the most spending is,” Hill said, recommending that particularly cash-strapped organizations look to the cloud for their content repositories. “Hosted review platforms can make some difference–something internal can require a lot of capital.”