NetApp Inc. was at the center of some buzz over the weekend among publications that cover corporate financials, after a column in Barron’s touched off speculation on Wall St. that NetApp might be about to report encouraging earnings. Amid that speculation was buzz that the company would also be laying off workers to maintain its profit margins as the economy continues to spiral downward.
NetApp PR director Jodi Bauman today confirmed the company plans to reduce its global workforce by 6% with the following statement: “Today NetApp took a number of steps to better align our resources with the business outlook. This restructuring includes a reduction of about 6% of the global workforce, as well as the reallocation of other resources to initiatives designed to increase operating efficiency and build a foundation for additional market share gains.”
The company outlined a strategy at its Analyst Day last March that included adding to its sales force to better penetrate the enterprise market. It remains unclear how the planned cuts will affect that strategy. NetApp will hold its quarterly earnings call on Wednesday and may offer additional color then.
Startup Cleversafe Inc. is preparing to launch a new online storage service based on its dsNet product, with data centers on each of the three major power grids in the U.S.
Since it came out of stealth two years ago, Cleversafe’s goal has been to deliver dsNet as a service. So far, the company has sold a few systems to service providers in its home area of Chicago, but it has yet to realize its original vision of a “Storage Internet” in which data is distributed geographically.
The new service is currently in beta testing, and four data center locations have been built out: three in Chicago and one in Omaha, Nebraska. By the end of March, Cleversafe officials say they expect to double that number of data centers and extend the dsNet service across the Western, Eastern and Texas interconnections of the U.S. power grid. Each grid will have two data centers, and each data center will use multiple internet service providers for redundancy.
Cleversafe’s SliceStor storage nodes can break a single file into as many as 11 pieces for redundancy. The Cleversafe hash appended to each slice for reconstruction also provides built-in encryption. The company sells dsNet systems into end-user and service provider accounts with customizable fault tolerance, but the dsNet service will have 8-6 or 8-5 redundancy, meaning eight slices spread across the data centers with six or five of them, respectively, required to reconstruct files.
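The slice-and-reconstruct scheme is a threshold, or k-of-n, code: any k of the n slices suffice to rebuild the original data, and losing up to n − k slices costs nothing. The toy Python sketch below illustrates the idea with Shamir-style polynomial interpolation over a prime field; it is not Cleversafe’s actual dispersal algorithm (which also spreads the data itself for storage efficiency), just a minimal demonstration of 5-of-8 reconstruction:

```python
import random

P = 257  # smallest prime that can hold any byte value

def split_byte(secret, n, k):
    """Split one byte into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Each share is the polynomial evaluated at x = 1..n
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def join_byte(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the byte."""
    total = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

data = b"dsNet"
sliced = [split_byte(b, n=8, k=5) for b in data]   # eight slices per byte
subset = [random.sample(s, 5) for s in sliced]     # any five survive
assert bytes(join_byte(s) for s in subset) == data
```

In this sketch, losing any three of the eight slices is harmless, while losing four makes the data unrecoverable; that gap is exactly the fault-tolerance trade-off an 8-5 configuration buys.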
dsNet might eventually be able to act as a content delivery network (CDN) as well as a storage service, according to director of customer solutions Alan Holmes, so that files can be delivered without requiring a separate Accesser node or client, as is required today. “We have a technology differentiator already built in to dsNet,” Holmes said. “Because of the way we reconstruct files, we already query the network for the nearest server many times per second.”
The Museum of Broadcast Communications (MBC) is an early adopter of the service, thanks to a chance online connection between the museum and Cleversafe founder Chris Gladwin. The museum, also a brick-and-mortar institution in Chicago since the late 1980s, was struggling to host digital files for download on its website, and sent out a letter to members two years ago announcing the discontinuation of that service. Gladwin received that notification, according to MBC founder and president Bruce DuMont, and contacted the museum to offer the dsNet service.
Neither DuMont nor Cleversafe would disclose the specific financial details of Cleversafe’s relationship with MBC, but DuMont said MBC had agreed to partner with Cleversafe for its online content distribution for 10 years. “Right now, we’re in year two,” he said.
Here are some stories you may have missed this week:
As always, you can find the latest storage news, trends and analysis at http://searchstorage.com/news
EMC Corp. rolled out a 4 TB home NAS box for the Iomega StorCenter line today, and an EMC official told Storage Soup about other software and hardware updates to come for EMC’s SMB, SOHO and consumer products later this year.
The Iomega StorCenter Pro ix4-100 is an upmarket successor to the two-drive ix2 consumer product. It also has a predecessor within Iomega’s product lines, the Iomega 150d NAS, launched prior to EMC’s acquisition of Iomega last year. The ix4 adds more enterprise-level software features such as security and built-in data backup with the addition of EMC’s LifeLine consumer storage software to the 150d’s hardware. EMC’s Mozy online backup service and Retrospect local backup software are also included, along with automatically updated backup folders within the device.
Marc Tanguay, GM of the StorCenter product line, said the four-bay ix4 is aimed at small businesses of around 25 employees rather than the home NAS market. It will come in 2 TB and 4 TB capacities, the latter double the maximum capacity of the ix2. As with the ix2, the LifeLine software will offer features like Windows Active Directory support, email and SNMP system status notifications, print server capabilities and Bluetooth compatibility. The ix4 will support four printers (the ix2 supports two), and will add the ability to plug in an Axis security camera and stream directly to the box without a PC as an operator.
Tanguay said customers who purchase the ix4 now will get a free upgrade at the end of the first quarter to the next version of LifeLine. That version will include remote administrative access, which will be free for the first year and then cost around $9.95 per year, according to Tanguay. Remote access will let customers upload and download files to the box from a remote location, and an admin can manage it from a Web browser. Also coming in the next version of LifeLine is native Apple Filing Protocol (AFP) support. StorCenter currently supports Mac computers, but “it’s easier to share files and networks with native AFP support,” Tanguay said.
The next version will also offer folder quotas and automated torrent support. Consumers are the most frequent users of torrents to share media files, but Tanguay said small businesses are increasingly using them to exchange data as well. The new version of LifeLine will offer the ability to run the torrent downloader on the shared storage box without requiring a separate computer or process. “Today if a worker is moving from office to home, they lose the continuity of the download, and they have to use their PC’s CPU power for large files,” Tanguay said. Finally, the next version of LifeLine will also make the StorCenter a full media server, including native support for iTunes.
The StorCenter hardware will also be upgraded in the second quarter when EMC will support 2 TB SATA drives from Seagate and Western Digital, and release a new 8 TB StorCenter model.
The 2 TB StorCenter Pro ix4-100 NAS Server is now available for $799.95. The 4 TB StorCenter Pro ix4-100 NAS Server will be available later this month for $1,299.95.
SSD maker Fusion-io announced today that Apple founder Steve Wozniak has joined it as Chief Scientist. According to a Fusion-io press release, “Wozniak will act as a key technical advisor to the Fusion-io research and development group. He will also work closely with the executive team of Fusion-io in formulating a company strategy that will accelerate the expansion of major global accounts.”
Wozniak is a big name for an emerging company like Fusion-io to land. Sometimes known as Apple’s “other Steve,” Wozniak is credited with significant engineering contributions to the personal computer revolution of the 1970s.
Interestingly, Wozniak is not the only former Apple exec who has found his way to the storage–and specifically, solid-state storage–industry. Michael Cornwell, now heading up NAND business development for Sun Microsystems, was previously the manager of storage engineering for the iPod division of Apple.
Fusion-io came out of stealth last March with a PCIe flash card designed to give off-the-shelf servers SAN-like performance. Fusion-io calls its product the ioDrive, and it’s NAND-based storage that comes in 80 Gbyte, 160 Gbyte and 320 Gbyte configurations. The ioDrive fits in a standard PCI Express slot, shows up to an operating system as traditional storage and can be enabled as virtual swap space. IBM announced last fall that it will be partnering with Fusion-io to add ioDrives to its servers, which may include the servers that run its SAN Volume Controller (SVC) network-based storage virtualization product.
The economy took a bite out of CommVault last quarter, as the backup software vendor recorded lower sales than expected.
CommVault’s $60.1 million in revenue was below its guidance of $63 million to $65 million, and actually dropped 5% from the previous quarter — unusual because the fourth quarter of the year is when the most money is spent on storage. CommVault also reduced its forecast for this quarter to approximately $63 million to $67 million, down from previous guidance of $69 million to $72 million.
“We are certainly not happy with these results,” CommVault CEO Bob Hammer said on his company’s earnings call Wednesday night.
Hammer said CommVault’s win rate against its competitors hasn’t dropped, but larger deals are taking longer to get approved because of budget constraints. The company has increased its sales force to try to get things moving, and is counting on a boost from Simpana 8, released last week. Simpana 8 adds block-level data deduplication for data on disk and tape, and Hammer said CommVault released it ahead of schedule.
Still, CommVault’s forecast shows it might take a while for Simpana sales to take off.
“We’re realistic about the state of the global economy,” Hammer said. “This uncertainty is why we have revised our guidance down. We’re dealing with an environment none of us has seen before.”
Riverbed Technology won’t be deduplicating primary data this year as planned.
The scheduled launch of its Atlas device has been delayed from this year into 2010, Riverbed CEO Jerry Kennelly disclosed today on the WAN optimization vendor’s earnings conference call.
Riverbed did a round of pre-briefings for Atlas last September when it began alpha testing, forecasting its release this year. But Kennelly said today that Atlas will require adjustments before it is ready to bring to market. “We’ve had three months of alpha testing,” he said. “We’ve seen that our original approach can be better addressed to meet customer needs, so we will be delaying shipments.”
Riverbed’s SVP of marketing Eric Wolford said the changes will make Atlas easier to deploy and manage. “We’re disappointed in the delay to market, but we learned a lot from customers and found changes would be necessary,” he said. Wolford says the adjustments may mean Atlas can be used without Riverbed’s flagship Steelhead WAN appliances.
Kennelly said the original plan was to launch what he today called a “bare-bones release” this year followed by a more robust product in late 2010. Now, he said, there will be one rollout instead of two, although he would not say when in 2010 that would happen.
“Although we hate to announce a delay, we think it’s smarter for us and we’ll get more money earlier,” he said. “We won’t be trying to shoot bullets in the economic market in 2009.”
The rest of the news was good on Riverbed’s call. Its fourth-quarter revenue of $92.2 million increased 21% over last year despite the down economy, and executives forecast a 14% to 18% year-over-year increase in revenue ($83 million to $86 million) this quarter. Riverbed also said today EMC qualified Steelhead appliances to work with EMC’s SRDF/Asynchronous replication software for its Symmetrix enterprise storage systems.
Wolford said the qualification was “a prerequisite” to getting Steelhead boxes installed by shops replicating data between Symmetrix systems in multiple sites. He said there were about 30 to 40 potential customers holding up deals pending qualification.
NetApp founder and current “Chief Philosophy Officer” Dave Hitz has been on a media tour this month following the publication of his book about NetApp’s rise from startup to billion-dollar company, How to Castrate a Bull.
And yes, your first question–about that title and what it has to do with storage technology–was my first question as well. Along the way, we also discuss NetApp’s rebranding last year, its positioning for the low-end market, the time-honored SAN/NAS debate, and storage trends for the future.
NetApp blogger and chief technical architect Val Bercovici leaked the news yesterday that NetApp’s V-Series storage gateways can now front Texas Memory Systems’ RamSan-500 solid-state storage arrays.
This is the follow-on to NetApp’s announcement last month that it planned to offer Flash-as-disk to go along with its Flash-as-Cache and DRAM-based Performance Acceleration Module (PAM).
A common issue with deploying solid state drives, analysts have said since EMC first announced support for STEC Inc. SSDs in Symmetrix last year, is integrating them with storage management software tools. Until recently, provisioning SSDs could be like provisioning hard disks used to be before storage virtualization–complex, slow and fairly rigid.
According to Bercovici, while the RamSan acts as the high-IOPS storage behind the V-Series, the V-Series gives it storage management features through NetApp’s WAFL (Write Anywhere File Layout) file system:
WAFL’s log-structured architecture implements native load-balancing of write operations via write-aggregation to solid state NVRAM. This includes an innovative data layout engine which enables WAFL to “write anywhere” in order to optimize the placement of data across the appropriate media. For flash, that means native built-in wear-leveling optimized to spread writes over as many flash cells as possible in parallel, with minimum wear to each individual flash cell.
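The “write anywhere” behavior Bercovici describes boils down to a simple rule: never overwrite a block in place; redirect each write to the least-worn free block. The toy Python allocator below is a hypothetical illustration of that rule only, not WAFL’s actual data layout engine:

```python
from collections import Counter

class LogStructuredAllocator:
    """Toy 'write anywhere' allocator: every overwrite lands on a fresh
    block, spreading wear evenly instead of rewriting one spot."""
    def __init__(self, n_blocks):
        self.free = list(range(n_blocks))
        self.wear = Counter()   # write count per physical block
        self.map = {}           # logical address -> physical block

    def write(self, logical_addr):
        old = self.map.get(logical_addr)
        if old is not None:
            self.free.append(old)   # old copy becomes reclaimable garbage
        # Naive wear leveling: pick the least-worn free block
        self.free.sort(key=lambda b: self.wear[b])
        block = self.free.pop(0)
        self.wear[block] += 1
        self.map[logical_addr] = block

alloc = LogStructuredAllocator(8)
for _ in range(100):
    alloc.write(0)   # 100 overwrites of the same logical address
# The writes cycle across all 8 blocks rather than hammering one cell.
assert max(alloc.wear.values()) - min(alloc.wear.values()) <= 1
```

An in-place allocator would have written the same physical block 100 times; here no block is written more than 13.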
According to NetApp chief marketing officer Jay Kidd:
[The V-Series and RamSan] effectively [create] the industry’s only Enterprise Flash storage system that supports thin provisioning, fast snapshots, remote mirroring, and data deduplication
So far, though, this approach to Flash-as-disk isn’t really flying with storage admins. “This would be the most expensive way of doing SSD,” Tom Becchetti, storage admin for a manufacturing company and NetApp customer, wrote SearchStorage.com today in an email. “What I would like to see is just how EMC implemented their SSD. They have SSD that is physical and logically the same form factor of the hard drive. It would give you the most flexibility and as more SSD vendors show up on the scene, the cost will dramatically fall.”
Denizens of the storage blogosphere were even more outspoken. “Is that it??” was the title of a post on U.K. storage end user Martin Glassborow’s blog, Storagebod. “I expected more, I expected something which was going to force EMC to raise the bar on their SSD implementation.”
We reported on an archive migration software startup, Procedo, late last year while it was still in the early stages of delivering product (usually attached to services). Today, the company came out with its first generally available software offering for migrating archive data between repositories while maintaining chain of custody. This GA offering also comes with some more features folded in, including storage resource discovery and reporting.
According to founder and CEO Joe Kvidera, the Procedo Archive Migration Manager (PAMM) Suite 3.0 bundles what had been separate pieces of software and migration tools into one product whose features can be unlocked with license keys. In services engagements, by contrast, Procedo staffers (often brought in by a bigger company like Symantec Corp.) bring the appropriate sets of tools to conduct the migration themselves.
Pricing for those license keys depends on the applications and capacity to be migrated, according to Kvidera, from $5,000 per TB for a simple file-system migration to as much as $45,000 for a migration involving complex applications with proprietary APIs such as EMC Corp.’s Centera. Kvidera said the average price would be somewhere in the range of $25,000 per TB.
PAMM is made up of a cluster of at least three servers, one for ingesting data from the old archive, another for writing to the new archive and a SQL server that tracks each message according to the object ID assigned to it at either end. The SQL database uses snapshots temporarily stored on storage area network (SAN) storage to validate that object IDs match on both ends, while the data is converted and migrated. The SQL instance then becomes a chain-of-custody log in case the validity of the data being migrated is questioned. The database tracking also means that failed or incomplete migrations can be corrected.
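As a rough sketch of the validation step described above, the hypothetical Python below compares object-ID-to-checksum snapshots taken on each side of a migration and flags anything that must be re-migrated. The object IDs and schema here are invented for illustration; this is not Procedo’s actual implementation:

```python
import hashlib

def checksum(payload: bytes) -> str:
    """Content fingerprint used to compare an object across archives."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical snapshots of the two archives: object ID -> content checksum
source = {"msg-001": checksum(b"quarterly report"),
          "msg-002": checksum(b"audit trail"),
          "msg-003": checksum(b"contract scan")}
dest   = {"msg-001": checksum(b"quarterly report"),
          "msg-002": checksum(b"audit trai")}   # corrupted/truncated copy

def validate(src, dst):
    """Return object IDs that failed or never completed, for re-migration."""
    failed  = [oid for oid in src if oid in dst and src[oid] != dst[oid]]
    missing = [oid for oid in src if oid not in dst]
    return failed, missing

failed, missing = validate(source, dest)
# failed == ["msg-002"], missing == ["msg-003"]
```

Because the comparison log survives the migration, the same records that drive retries can later serve as the chain-of-custody evidence the article describes.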
With PAMM 3.0, users can load balance migrations across multiple migration servers, and supported destination archives now include the cloud services LiveOffice and MessageOne. “We have closed a couple of deals with those services already this quarter,” Kvidera said.
Finally, PAMM 3.0 adds a new user interface with wizards to aid in migrations and new storage discovery tools that help users assess what they have in an archive before beginning the migration. “Customers often don’t have a clue what they have,” Kvidera said, describing the new PAMM feature as a kind of “mini SRM” useful for planning archive migration projects. Reporting and analysis can also be performed on that data, but today canned reports are limited to storage reporting and trending analysis on the archive. In the second quarter, he said, more reports will become available.