The storage blog- and Tweet-o-sphere was abuzz with details provided by HP execs at a private forum in Colorado Springs Monday about the direction of their midrange storage roadmap. Among the tidbits flying about online:
- HP has added solid-state drive (SSD) support with its most recent EVA refresh, but is working on automated sub-LUN tiered storage migration, according to attendees. IBM is also reportedly working on something similar, EMC is planning LUN-level automated tiered storage migration later this year, Atrato has announced a comparable capability, and Compellent has always had sub-LUN automated tiered storage migration, which now also supports SSDs. Sub-LUN tiering seems to be becoming table stakes in the flash-as-disk SSD market.
- HP is reportedly moving to an x86 / x64 Intel processor architecture for all of its storage arrays below the USP-V. Methinks Jasper Forest may have something to do with that.
- Finally, look for LeftHand Networks’ virtual storage appliance (VSA) to be ported to other hypervisors, including Xen and Hyper-V.
- Update: Tweets out of Day 2 of the TechDay meeting indicate HP officials are talking about offering Ibrix as a clustered NAS gateway in front of block storage, and eventually converging with LeftHand, though what exactly that convergence would look like isn’t clear.
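For readers keeping score on the sub-LUN tiering the attendees described: the difference from LUN-level tiering is that only the hottest fixed-size extents of a volume get promoted to SSD, not the whole LUN. A minimal illustrative sketch of the placement decision (extent names, I/O counts, and tier capacity here are all hypothetical, not from any vendor's implementation):

```python
# Illustrative sketch of sub-LUN automated tiering: rather than promoting a
# whole LUN, only the hottest fixed-size extents move to the SSD tier.
# Extent granularity and capacity numbers are made up for illustration.

def plan_migrations(extent_io_counts, ssd_capacity_extents):
    """Pick the hottest extents, up to the SSD tier's capacity.

    extent_io_counts: dict mapping extent id -> I/O count over the sample window
    ssd_capacity_extents: how many extents fit on the SSD tier
    Returns the extent ids to promote, hottest first.
    """
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    return ranked[:ssd_capacity_extents]

counts = {"ext-0": 12, "ext-1": 950, "ext-2": 4, "ext-3": 310}
print(plan_migrations(counts, 2))  # the two hottest extents win the SSD slots
```

A real array would run this kind of ranking continuously and demote cold extents as well, but the core idea is the same: access statistics per extent, not per LUN.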
HP is also still executing on roadmap predictions it made last year, adding small-form-factor SAS drives across its storage arrays, beginning with the MSA line. The EVA is slated for a refresh with 2.5-inch SAS drives by the end of this year.
HP also expanded its Windows-based NAS products for SMBs today, with the introduction of the small office/home office (SOHO) X500 Data Vault series and new high-availability (HA) options for its X3000 Windows Storage Server 2008 product line.
Symantec Corp. says the results of a recent worldwide survey of 1,653 small and midsize businesses (SMBs) and those who do business with them show a gap between how these companies perceive their disaster recovery plans and how prepared for disaster they actually are.
The survey began by asking SMBs (which represented 70 percent of the respondents, with “small” companies defined as 10 to 99 employees and midsize as 100 to 499 employees) how confident they were in their ability to respond to a disaster. According to the survey results, around 82% are somewhat or very satisfied with their DR plan, 84% believe they are “very” or “somewhat” protected, and one in three responded that they believe customers would evaluate other vendors should they experience an outage.
But when the survey questions drilled down into the details of SMB DR plans, Symantec representatives noticed discrepancies in the responses. While the vast majority began by expressing confidence in their ability to survive a disaster, 47% also said they have no formal DR plan. An estimated 60% of company data is backed up in this market, only one in five respondents back up daily, and more than half expect they would lose more than 40% of their data in a disaster. More puzzling, while only a third of respondents said they expected their own customers to evaluate competitors in the event of a disaster or outage, 42% said they personally had switched vendors due to “unreliable computing systems” and 63% said such problems damaged their perception of an SMB vendor.
Pat Hanavan, Symantec’s VP for Backup Exec product management, admitted the answers to the questions about confidence may have been different if asked at the end of the survey rather than at the beginning. “My guess is the survey itself may have been an educational process for some,” he said.
It’s also important to remember how long it has taken enterprises to focus on formalized disaster recovery planning and technology, in most cases with the benefit of internal expertise dedicated to data protection. Many SMBs rely on a partner or on non-technical employees to keep IT operations running, and also operate without the budgets of the big guys.
The good news for SMBs starting to consider disaster recovery is that more and more vendors are focused on storage and data protection in their market these days, including a plethora of cloud services designed to host data and/or standby infrastructure for companies that can’t afford a full secondary data center.
If you’re an SMB working your way through disaster recovery planning, please feel free to share your experiences in the comments.
Data Domain’s first product upgrade since it became part of EMC is a step in the same direction the deduplication specialist was heading before the acquisition.
Data Domain today upgraded its operating system with an emphasis on improving its replication capabilities. Data Domain Replicator now supports cascading (multithreaded) replication, which lets customers automatically replicate across more than two sites bi-directionally. It also expanded its maximum fan-in for remote sites to 180-1.
Replication has been an area of concentration for Data Domain and other deduplication vendors this year. In May, Data Domain bumped its fan-in to a maximum of 90-1 and added full-system replication mirroring. IBM Diligent, Sepaton, and Quantum have all beefed up or added replication to deduplication products this year.
The number of sites supported by Replicator depends on the Data Domain system being run. Data Domain’s largest system, the DD880, now supports 180-1 fan-in, with the DD690 getting up to 90-1 and midrange DD565 boxes supporting 45-1.
“We actually have customers asking for fan-in to more sites,” said Brian Biles, Data Domain VP of product marketing.
Biles says the number of sites that can replicate to the data center is determined by the amount of resources allocated for each operation. The idea is to keep them balanced for operations such as reads, writes and replication. “As systems get bigger and faster, we can apply more of those resources to replication,” he says. “The DD880 can support more streams coming in [than other Data Domain devices].”
Cascaded replication copies multiple threads at the same time, which in most cases doubles the amount of throughput getting replicated, Biles said.
Biles, a Data Domain founder, says EMC execs have given Data Domain the go-ahead to continue to execute the roadmap it drew up before EMC’s $2.1 billion acquisition.
“All discussions so far have been encouraging us to stay on the same course we were on, and to do more of it,” Biles said. “I think you’ll see a lot of the same things next year as you saw this year – an emphasis on scaling, and tightening our link with backup and archiving software.”
He said that includes work with Symantec’s NetBackup OpenStorage (OST) interface, even though Symantec and EMC are rivals in the backup game. “Absolutely,” Biles said when asked if there would be tighter integration with OST. “Expect to see more and more over time.”
I realize I’m dating myself here, but I get a kick out of seeing H. Ross Perot Jr. surfacing in the news this morning with the announcement that Dell Inc. has bought his IT services company Perot Systems. Perot is the son of onetime US Presidential candidate H. Ross Perot Sr., who was a prominent and colorful character in the first American national elections I was old enough to be aware of at the time, in 1992 and 1996.
This acquisition isn’t directly related to storage — industry observers like Steve Duplessie describe it as analogous to Hewlett-Packard (HP) Co. buying EDS. However, storage is included in the infrastructure services Perot Systems offers. Also relevant to the storage world is the work the two companies have already been doing in the electronic medical records space, an area where storage managers in healthcare IT are struggling right now.
One Wall Street analyst who follows the storage market predicted the deal could have a short-term impact on shares of CommVault Systems Inc., writing in a note to clients this morning,
We believe shares of CommVault, which have been up approximately 109% since the early March lows and approximately 30% over the past three months…have been partially driven by investor sentiment on the thesis that the company would be on a short-list of potential acquisition candidates for Dell.
“While I agree that Dell may be less likely to acquire in general due to this major outlay, I see Perot and Commvault as filling very different needs within the Dell portfolio,” wrote Forrester Research analyst Andrew Reichman in an email to Storage Soup. “If they needed it before Perot, they still need it after, so I disagree that this takes Commvault off the table.”
Added Gartner analyst David Russell, “I think that a counter argument could be made that the Dell/Perot deal could lead to expanded CommVault sales if a backup and archiving practice is established.”
The Taneja Group’s Jeff Boles said this deal raises questions for him about the impact on EqualLogic services. “What does Dell/Perot do for EqualLogic? What do they do with EqualLogic within an increasingly virtual infrastructure? EqualLogic has great scale, and great economics – there might be a tremendous solution set here that gets really energized in the larger scale business through this professional services coupling.”
Dell has announced SaaS services for storage, and Perot’s SaaS expertise was also emphasized in this morning’s announcement. But, as is to be expected this soon after an acquisition agreement, Dell’s not yet revealing its plans.
“We view this acquisition as completely complementary to Dell’s current services business,” said a spokesperson reached today for comment by Storage Soup. “We have begun integration planning and will have more information on it upon closing.”
Backup expert and TechTarget executive editor W. Curtis Preston wrote Friday on his Backup Central blog about a discovery he made regarding the MozyHome online backup service – something he wasn’t pleased with. When he switched laptops and didn’t re-install the Mozy client, Preston wrote, Mozy kept charging him for 11 months without backing up data or connecting to his workstation.
Preston didn’t lose any data and acknowledges it was his fault that he didn’t re-install the Mozy client when he got a new laptop. “I’m not saying that the fact that I didn’t use their service for 309 days was even their fault,” he writes,
What I’m saying is that for almost a year they took my money to perform a service, they knew I wasn’t using that service, and they never said squat. This is a typical business model for an ISP…but this isn’t an ISP. It’s a backup service. They don’t know I’m Mr. Backup. I could just as well be my Mom (who is on Mozy) and have no idea that I’ve done something dumb like accidentally uninstall the application or set it never to backup. When you’re selling a backup service directly to the consumer, the least you owe them is an email if they’re not backing up, don’t you think? I still like Mozy. But I think they should change this practice…
We reached out to Mozy to determine their notification policy, whether you are Mr. Backup or not, and they emailed the following response:
We decided to put our notifications in the client instead of in e-mail because people get so many e-mails that they may miss the notification. We want our customers to know if a backup isn’t happening. For this reason, the client pop-up box doesn’t go away until you click to remove it. That said, we are expanding our notification options so that people can have more ways of receiving notifications than through the client only.
Keep in mind that [Preston] was still using the Mozy service even though he wasn’t sending new files to us. We were still storing his information, which includes the power, cooling and management costs incurred to keep his data protected.
Mozy rivals including Carbonite notify users outside the client, according to Preston. “Competitors catch those who uninstall it unintentionally or forget to reinstall after a system change,” he pointed out.
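The kind of out-of-band alert Preston is asking for is simple to implement on the service side: periodically compare each account's last successful backup timestamp against a staleness threshold and queue an email for any account that has gone quiet. A hypothetical sketch (the account records, field names, and 30-day threshold are all assumptions for illustration, not how any vendor actually stores this):

```python
# Sketch of the server-side check a backup service could run to catch
# accounts that have silently stopped backing up. The account structure
# and the 30-day threshold are hypothetical.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)

def stale_accounts(accounts, now):
    """Return the emails of accounts whose last successful backup
    is older than the staleness threshold."""
    return [a["email"] for a in accounts
            if now - a["last_backup"] > STALE_AFTER]

now = datetime(2009, 9, 25)
accounts = [
    {"email": "ok@example.com",   "last_backup": datetime(2009, 9, 20)},
    {"email": "idle@example.com", "last_backup": datetime(2008, 11, 1)},
]
print(stale_accounts(accounts, now))  # only the idle account gets flagged
```

An email to the flagged addresses would have caught Preston's 309-day gap inside the first month.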
This isn’t the first complaint to surface about Mozy, which EMC Corp. bought in 2007. Users also complained of slow restore performance last year, a problem Mozy officials blamed on an “isolated bug,” saying customers affected by slow restores would receive discounts or have subscription fees waived in response.
Hewlett-Packard Co. (HP) continues to play both the server and array sides of the SSD fence, today announcing Samsung SSDs will be supported in its ProLiant servers.
According to a Samsung press release issued yesterday, its 60 GB and 120 GB SSDs have been qualified with all HP ProLiant G6 and G5 servers. The two companies claim the drives in ProLiant servers draw 1.9 watts of power when writing and 1.5 watts when reading; power usage in idle mode is 0.1 watt. The drives are rated at 25,000 IOPS for random reads and 6,000 IOPS for random writes, with a sequential read speed of 230 MBps and a sequential write speed of 180 MBps.
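It's worth remembering that the random and sequential figures measure different things: random IOPS at small block sizes translate into far less raw throughput than the sequential numbers. A quick back-of-the-envelope conversion, assuming a 4 KB I/O size (the I/O size is our assumption, not stated in the spec):

```python
# Back-of-the-envelope: convert quoted random IOPS into throughput,
# assuming a 4 KB I/O size (an assumption, not part of the published spec).
def iops_to_mbps(iops, io_size_kb=4):
    return iops * io_size_kb / 1000  # MB/s, using 1 MB = 1000 KB

print(iops_to_mbps(25_000))  # random reads: 100.0 MB/s vs 230 MBps sequential
print(iops_to_mbps(6_000))   # random writes: 24.0 MB/s vs 180 MBps sequential
```

The gap is why SSDs get pitched on IOPS rather than bandwidth: the win over spinning disk is in small random I/O, where a 15K HDD manages a few hundred IOPS at best.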
Initially, HP officials seemed keener on putting SSDs into the server side of the house, arguing that the closer the SSD sits to the server bus, the greater the performance benefit, though HP also added array-based SSD options in its most recent EVA refresh. On the server side, HP has also been linked with Fusion-io.
As the SSD market continues to mature, anything that increases competition among product offerings is seen as a good thing – even STEC’s CTO said recently that he’s hoping for more competitors. Stifel Nicolaus analyst Aaron Rakers pointed out in a note to investors today that STEC’s ZeusIOPS drive claims random IOPS of 80,000/40,000 and 350/300 MBps performance. “While we continue to view STEC’s competitive positioning as solid, we believe HP’s announcement with Samsung reflects what looks to be an increased amount of news flow regarding SSD competition in the enterprise server/storage market going forward,” Rakers wrote.
A Pillar executive responding to a Computerworld report that the vendor is “kicking Intel’s SSD to the curb” says Pillar is still considering Intel as an SSD supplier for future releases, while confirming it switched to STEC drives for its Axiom SSD bricks.
Bob Maness, Pillar’s VP of worldwide marketing and channel sales, says Pillar pitted Intel and STEC drives against one another in a qualification process, and “STEC finished first.” In March, Pillar said it would ship Intel SSDs with its Axiom systems, but Maness says Intel’s X25-E SSD caused timeout errors with the Axiom controller during multiple concurrent write operations. He said the two companies are still working on a fix.
Earlier this year, Intel issued a firmware update to its X25-M consumer SSDs for performance issues due to data fragmentation. Intel executives said at the time that the glitch did not apply to the X25-E.
Pillar also recently swapped out its storage controller processors from Intel to AMD, which Maness said was the result of a similar “first come, first serve” process of qualification. “For most vendors, this is the way they operate with component suppliers,” he said.
Maness said Pillar has used Intel processors in earlier iterations of Axiom and will continue to keep up with its products. “We have had an ongoing relationship with them,” he said. “We’re not putting Intel in the ditch.”
A bigger question for Pillar, as it refreshes Axiom with 2 TB drives as well as the SSDs, is whether it will be Pillar’s or Sun’s midmarket storage product line that Oracle leaves in the proverbial ditch; Oracle CEO Larry Ellison is Pillar’s primary investor.
Maness, not surprisingly, says Oracle will pick Pillar. “If you look at the Sun storage product line, you can assume, in my opinion, they probably won’t continue the OEM relationships based on margin,” he said.
He was referring to Sun’s 9000 product line, a rebranding of Hitachi Data Systems (HDS)’s USP high-end disk arrays, as well as the 6000 and 5000 series it rebrands from LSI.
That leaves the Sun 7000 series, or Amber Road, in Pillar’s competitive sights. “Sun servers are already being put in the midst of the Oracle stack,” Maness said, referring to this week’s announcement of Exadata 2. “But they haven’t talked much about storage. Maybe that’s because Pillar is a superior storage product.”
The appointment of Ed Walsh as Storwize CEO this week has people in the storage industry wondering how long it will be until the primary data reduction vendor gets acquired.
EMC bought data deduplication specialist Avamar 17 months after Walsh became its CEO, and it took Walsh 19 months to sell virtualization startup Virtual Iron to Oracle this year. But Walsh says there’s plenty of room for Storwize to grow on its own.
“I think this company has a lot of legs,” he said. “The opportunity is quite large.”
Walsh certainly knows the data reduction space. Outside of Data Domain – also part of EMC now – no company did as much to market data deduplication in its early days as Avamar.
“Data deduplication was not a term until Avamar used it,” Walsh said. “Data Domain called it capacity optimized storage. At Avamar, we had to teach the market that data deduplication was something that you wanted. Now the market is rife. The technology has proven itself.”
While the industry is now filled with backup vendors doing dedupe a la Avamar, only Storwize and startup Ocarina Networks are dedicated solely to reducing primary data. NetApp also has dedupe for primary data on its storage systems, EMC this year added single instance storage to its NAS filers and Riverbed is working on a primary dedupe device – although it’s taking longer than originally thought.
“The difference for primary data is, there’s no tolerance for any performance degradation,” Walsh said. “Storwize really cracked the code on that. We get 6x or 9x improvement and no performance degradation. That gives us a long lead time on the competition.”
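For a sense of what the 6x or 9x figure means, a data-reduction ratio is simply raw bytes divided by stored bytes. A minimal sketch of measuring one with plain zlib compression (Storwize's actual algorithm isn't disclosed here; zlib merely stands in for illustration):

```python
# Illustration of measuring a data-reduction ratio with plain zlib
# compression; this stands in for whatever Storwize actually uses.
import zlib

def reduction_ratio(data):
    """Raw size divided by compressed size, e.g. 6.0 means '6x reduction'."""
    compressed = zlib.compress(data)
    return len(data) / len(compressed)

# Highly repetitive data compresses extremely well; messier real-world
# primary data typically lands in the single-digit-x range Walsh cites.
sample = b"customer_record;" * 1000
print(f"{reduction_ratio(sample):.1f}x")
```

The hard part Walsh alludes to isn't the ratio itself but doing this inline, on the primary data path, without adding latency.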
Storwize and Ocarina originally referred to their technologies as compression instead of dedupe. The technologies work differently than dedupe does, and dedupe was considered a secondary storage technology. Ocarina has relented and refers to its product as dedupe now, because that’s the term potential customers want to use.
Storwize has emphasized its STN appliances do compression – not dedupe – but its release announcing the new CEO had the headline, “Deduplication pioneer Ed Walsh takes the reins at Storwize.” Walsh says it really doesn’t matter if it’s called compression or dedupe, as long as it works.
“Everyone does it slightly different,” he says. “In the end, it’s still data reduction.”