Storage Soup

A SearchStorage.com blog.


October 8, 2009  9:04 PM

Google DRAM study turns conventional wisdom on its head…again



Posted by: Beth Pariseau
storage technology research

Remember the research paper Google made a splash with two years ago on disk drive failure rates? The one that showed that most failed drives didn’t raise significant SMART flags, found no correlation between temperature or utilization and failure rates, and instead established that failure rates correlate more closely with drive manufacturer, model and age?

Well, there’s now a DRAM equivalent — and it doesn’t paint a much prettier picture than the one on hard drive failures.

According to a new paper, “DRAM Errors in the Wild: A Large-Scale Field Study,” researchers from Google and the University of Toronto found that once again, failure rates and patterns did not match the received wisdom in the industry about how Dual Inline Memory Modules (DIMMs) behave. According to the paper:

 

We find that DRAM error behavior in the field differs in many key aspects from commonly held assumptions. For example, we observe DRAM error rates that are orders of magnitude higher than previously reported, with 25,000 to 70,000 errors per billion device hours per Mbit and more than 8% of DIMMs affected by errors per year. We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode. We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a surprisingly small effect on error behavior in the field, when taking all other factors into account. Finally, unlike commonly feared, we don’t observe any indication that newer generations of DIMMs have worse error behavior.

As in the disk drive study, temperature doesn’t play a huge role in DRAM failures. Vendor and model, however, made less of a difference here than they did with disk drives.
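To put the quoted numbers in perspective, here’s a rough back-of-the-envelope conversion of “errors per billion device hours per Mbit” into per-module terms. The 4 GB DIMM size and 24×7 duty cycle below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope: convert the study's error rate into errors per
# DIMM-year. The 4 GB module size and 24x7 uptime are assumptions for
# illustration only -- they are not figures from the Google/Toronto paper.

HOURS_PER_YEAR = 24 * 365          # 8,760 device hours per year
DIMM_MBIT = 4 * 1024 * 8           # a 4 GB DIMM expressed in Mbit (32,768)

for rate in (25_000, 70_000):      # errors per billion device hours per Mbit
    errors_per_year = rate / 1e9 * HOURS_PER_YEAR * DIMM_MBIT
    print(f"{rate:,} per 10^9 dev-hrs per Mbit -> "
          f"~{errors_per_year:,.0f} errors per DIMM-year")
```

Even at the low end of the quoted range, that works out to thousands of correctable errors per module per year, which is why the authors call the rates “orders of magnitude higher than previously reported.”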

However, the study showed errors were more highly dependent on motherboard design than previously thought. And contrary to conventional wisdom about DRAM, most errors were hard (permanent hardware faults) rather than soft (transient bit flips). According to an article analyzing the paper by Data Mobility Group’s Robin Harris:

This means that some popular [motherboards] have poor EMI hygiene. Route a memory trace too close to noisy component or shirk on grounding layers and instant error problems…For all platforms they found that 20% of the machines with errors make up more than 90% of all observed errors on that platform. There be lemons out there!

These two reports raise one common question, according to Harris — why didn’t we know about these things before? As he put it, “Big system vendors have scads of data on disk drives, DRAM, network adapters, OS and filesystem based on mortality and tech support calls, but do they share this with the consuming public? Nothing to see here folks, just move along.”

October 7, 2009  3:31 PM

Notes from IBM Information Infrastructure Analyst Summit



Posted by: Beth Pariseau
storage vendors

An IBM executive panel discusses the changing IT market at an analyst summit Tuesday.

IBM held a meeting Tuesday at Boston’s Four Seasons Hotel for press and analysts to discuss its new strategy of offering users integrated “stacks” of servers, software, storage and services. The two main products introduced were the new IBM Smart Business Storage Cloud bundle and the Information Archive appliance.

But there were some other tidbits to be gleaned from the announcements and meeting as well:

  • VP Barry Rudolph said IBM is “about ready to announce and deliver” solid-state drives (SSD) in its SAN Volume Controller, which Rudolph said will double the performance of the storage virtualization device. IBM previewed the product as Project Quicksilver with Fusion-io last year. Execs wouldn’t give a more specific time frame than “imminent.”
  • Scale-out File Services (SOFS) in its first iteration required an ongoing services engagement (as it did when reference customer Kantana Animation first installed it last year). The new Storage Cloud based on SOFS can be taken with deployment services only or with an ongoing managed service. IBM also added consulting services to go along with the new product package: Strategy and Change Services for Cloud Adoption for end users, Strategy and Change Services for Cloud Providers, and Testing Services for Cloud (helping build a business case for cloud-based test environments).
  • Smart Business Storage Cloud is being offered for private cloud deployments right now, but IBM also plans to offer a public cloud based on the package and the CloudBurst product it announced in June, which also features automated provisioning and file-set-level chargeback through Tivoli Services Automation Manager (TSAM).

Analyst reviews of the event were mixed. Wikibon analyst Dave Vellante said he thinks IBM has some work to do on the Information Archive. “I loved the line about ‘The keep everything forever model has failed’ – it’s true,” he wrote to Storage Soup in an email. “Unfortunately, what IBM announced yesterday (IBM Information Archive) is more of the same old same old. New hardware, some decent integration but NO INDEXING AND NO SEARCH. In my mind that is not very useful to customers. Supposedly search and indexing ‘is coming soon’ but I think IBM was rushing to replace the DR550 line.”

He added, “good news for IBM is all the archiving vendors are missing the mark. Systems still don’t scale, nobody does classification right and there’s no good way to defensibly delete un-needed data.”

Evaluator Group analyst John Webster said that after following IBM storage for years, he sees them rationalizing different product lines more effectively these days. “Last year at this time things were more disjointed,” he said. “Now they’re able to rationalize XIV with the DS8000, for example.”

When it comes to the single vertically-integrated stack concept, analysts say they’ve seen this movie before. “I wonder, to what degree is server virtualization and VMware driving the desire to integrate everything into a box?” Webster said. “It reminds me of a concept people used to talk about years ago called a ‘God box,’ basically a big switch that did everything. But nobody wanted to go there–it was enough to talk about an intelligent switch. I’m not sure it’s progressed much farther, but I don’t know that it matters–Cisco has thrown down the gauntlet and other large players have to cover their bets.”

Everything’s cyclical, pointed out Analytico analyst Tom Trainer. “Consolidation and innovation patterns in the market are like a sine wave,” Trainer said. “We were probably at the height of new companies and innovation in the dotcom era of 1999 to 2000, and as politics and economics come into play, the pendulum looks like it’s swinging back toward consolidation.”

However, consolidation can open up space in the market for new companies to emerge. “I’m talking to startups receiving good funding recently,” Trainer said. New storage startups such as Avere have begun coming out of stealth in the past week.

When asked about industry consolidation, IBM’s Rudolph saw a similar picture. “I think you’re starting to see major shifts in our competitive framework, but I don’t think there’ll be a lack of new innovation and three or four huge corporations and that’s it,” he said.


October 5, 2009  2:34 PM

Will Hewlett-Packard and Brocade tie the knot?



Posted by: Dave Raffo
storage networking; storage vendors

There have been rumors for years that Hewlett-Packard might buy Brocade, and they intensified today after a Wall Street Journal report that Brocade has put itself up for sale.

The WSJ cited unidentified sources and obviously none of the companies named would comment, but the article mentioned HP and Oracle as potential bidders. Wall Street and storage industry analysts who follow Brocade say HP is the likely buyer if Brocade gets acquired. HP has a long-term relationship with Brocade, and Oracle is currently trying to complete its Sun deal and integrate that company.

“It is possible that HP is looking to buy Brocade,” Wedbush Securities analyst Kaushik Roy said today. Roy said he “would guess” Brocade would go for about $11 per share or between $4 billion and $5 billion.

However, there is likely a good reason why HP hasn’t already acquired Brocade. If it did, Brocade would probably lose a good piece of its business because its large OEM customers EMC and IBM wouldn’t be so enthusiastic about selling switches owned by their competitor HP.

“If HP buys Brocade, they would in reality pay a much higher premium because the future revenue forecasts would be revised downwards,” Roy said. “Brocade is an OEM business. EMC is likely to move from Brocade more to Cisco [for Fibre Channel switches] and IBM is likely to move towards Juniper on Ethernet.”

In a note to clients today, Stifel Nicolaus Equity Research analyst Aaron Rakers wrote that HP makes the most sense as a Brocade suitor, but threw a few other names into the mix.

“We find it a bit interesting that the [WSJ] article is not including names such as IBM and Juniper,” Rakers wrote.

Enterprise Strategy Group analyst Bob Laliberte said when he heard about the WSJ story, “My first thought was that HP would be a potential suitor. When you look at a company the size of Brocade and what they offer, you’re down to IBM, HP, Oracle, maybe Dell. I don’t think you’ll see EMC or Cisco buy them.”

A Cisco-Brocade deal probably wouldn’t clear anti-trust regulation, and EMC is too close to Cisco to buy Cisco’s chief switch competitor.

A Brocade acquisition by anybody is still a big if at this point. The WSJ story said no deal is imminent, and it sounds like Brocade could just be shopping to see how much interest is there.

One thing is for sure: Brocade’s stock price is soaring. It opened at $8.60 today, more than 12% above its Friday closing price of $7.65.


October 2, 2009  2:56 PM

10-02-2009 Storage Headlines



Posted by: Dave Raffo
storage vendors

Stories referenced:

(0:29) Xiotech names Alan Atkinson CEO

(1:12) HP expands Microsoft-based SMB network-attached storage offerings with Data Vault series

(2:57) HP drops roadmap nuggets at StorageWorks TechDay

(4:42) LSI adds solid-state drive, iSCSI support to denser Engenio 7900 disk array

(6:34) Dataram enters solid-state storage market with XcelaSAN

(8:51) Compellent says smaller businesses can dodge forklift upgrades with QuickStart Fibre Channel SAN


October 1, 2009  3:51 PM

Zmanda spruces up Windows cloud backup



Posted by: Beth Pariseau
Cloud storage, data backup

Open-source data backup software company Zmanda Inc. is releasing version 2.0 of its Zmanda Cloud Backup (ZCB) for Windows today.

New features include:

  • Geography control - customers can tag data so that it’s backed up to a cloud data center in a certain region. For example, users in Europe can specify data that has to stay in Europe per European Union regulations. Customers can also choose to send data to the data centers closest to their location for better performance of data migrations and retrieval over the network. (A hypothetical sketch of how such tagging might work follows this list.)
  • Selective restore - the ability to restore one file from a data set; not new for Zmanda’s main backup product, but new for ZCB.
  • Windows Security Certificate Encryption - Previously data sent to the cloud through ZCB was encrypted using standard AES encryption; support for the Windows certificate “is the highest level of encryption for Windows systems,” said Zmanda CEO Chander Kant. “It means they can use the same certificate they’re used to if they encrypt files on their Windows server and can make bare-metal restores for DR easier.”
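Zmanda hasn’t published the interface behind geography control, but conceptually it amounts to tagging each backup set with a target region and resolving that tag to a regional cloud endpoint before upload. A purely hypothetical sketch of that idea follows; the function and data-structure names are invented for illustration, and the Amazon S3 regional endpoints are used only as an example:

```python
# Hypothetical illustration of "geography control" -- NOT Zmanda's actual API.
# Each backup set carries a region tag; the uploader resolves the tag to a
# regional cloud endpoint so tagged data never leaves its designated region.

REGION_ENDPOINTS = {                     # S3 regional endpoints, as an example
    "eu": "s3-eu-west-1.amazonaws.com",
    "us": "s3.amazonaws.com",
}

def endpoint_for(backup_set: dict) -> str:
    """Return the storage endpoint matching the set's region tag."""
    region = backup_set.get("region", "us")   # default region is an assumption
    return REGION_ENDPOINTS[region]

payroll = {"name": "payroll-db", "region": "eu"}  # EU data stays in the EU
print(endpoint_for(payroll))                      # -> s3-eu-west-1.amazonaws.com
```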

Zmanda Cloud Backup 1.0 was first released last December. Kant said there are currently about 100 customers using it to back up systems to the cloud.


September 29, 2009  6:29 PM

Cisco: We’re not abandoning Fibre Channel for FCoE



Posted by: Dave Raffo
FC switches; fcoe; multiprotocol storage

Despite its aggressive push of Fibre Channel over Ethernet (FCoE), Cisco executives say Fibre Channel will remain its main storage protocol for another five to 10 years and the vendor remains committed to extending its MDS FC switching platform.

During a webcast today on storage networking innovation, Cisco reps called it a myth that the vendor is abandoning FC for FCoE.

“Cisco is not going out and saying ‘Get rid of the Fibre Channel infrastructure,’” said Ed Chapman, VP of product management for Cisco’s server access and virtualization group.

Added VP of Cisco’s data center switching technology group Rajiv Ramaswami: “Fibre Channel is here, it’s healthy, it’s going to be here for a long time.” When asked how long before FCoE becomes the primary storage protocol, he said at least five to 10 years.

Ramaswami said Cisco’s plans call for an 8 Gbps Fibre Channel module for the Nexus 5000 switch this year and a 16 Gbps FC card for its MDS 9000 director switches by early 2011. He said Cisco will also add new intelligent storage services for the MDS platform, as well as an FCoE module.

He said FC will play a major part alongside FCoE in Cisco’s unified platform. “Unified computing is not just another name for FCoE,” he said. “FCoE is a building block in a unified fabric. FCoE is about consolidation of I/O on the server. A unified platform is about building an end to end network along with unified storage.”
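For readers newer to the protocol, the encapsulation behind that “building block” is conceptually simple: a native Fibre Channel frame rides inside an Ethernet frame tagged with the FCoE EtherType (0x8906). Here is a deliberately simplified sketch; real FCoE adds a version field, SOF/EOF delimiters and padding per the FC-BB-5 spec:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a native FC frame in an Ethernet frame (simplified: the real
    FCoE header also carries a version field and SOF/EOF delimiters)."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Example MACs and a stand-in 36-byte FC frame; values are illustrative only.
# FC has no retransmission layer, so the Ethernet fabric underneath must be
# lossless (hence the DCB/priority flow control work in unified fabrics).
wire_frame = encapsulate(b"\x0e\xfc\x00\x00\x00\x01",
                         b"\x00\x1b\x21\x00\x00\x02",
                         b"\x00" * 36)
print(len(wire_frame), "bytes on the wire")
```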

Cisco added to its FCoE platform today with the Nexus 4000, the first blade switch for its Nexus unified fabric platform. Cisco expects blade server vendors to strike OEM deals and ship the Nexus 4000 inside their blade chassis.

Cisco, which has deeper roots in Ethernet than FC, has pushed FCoE harder than its chief switching rival Brocade, which began as an FC vendor and added Ethernet when it acquired Foundry Networks last year. Brocade beat Cisco to the punch with 4 Gbps and 8 Gbps FC and gained FC market share during those refresh cycles. So it will be interesting to see if Cisco makes good on its FC roadmap pledges, especially for 16-gig.


September 29, 2009  3:57 PM

HP drops roadmap nuggets at StorageWorks TechDay **Updated**



Posted by: Beth Pariseau
storage vendors

The storage blog- and Tweet-o-sphere was abuzz with details provided by HP execs at a private forum in Colorado Springs Monday about the direction of the company’s midrange storage roadmap. Among the tidbits flying around online:

  • HP has added solid-state drive (SSD) support with its most recent EVA refresh, but is working on automated sub-LUN tiered storage migration, according to attendees. IBM is also reportedly working on something similar, EMC is planning LUN-level automated tiered storage migration later this year, Atrato has announced something similar, and Compellent has always had sub-LUN automated tiered storage migration, which currently also supports SSDs. This seems to be becoming table stakes in the Flash-as-disk market for SSDs. (A generic sketch of the sub-LUN idea follows this list.)
  • HP is reportedly moving to an x86 / x64 Intel processor architecture for all of its storage arrays below the USP-V. Methinks Jasper Forest may have something to do with that.
  • Finally, look for LeftHand Networks’ virtual storage appliance (VSA) to be ported to other hypervisors, including Xen and Hyper-V.
  • Update: Tweets out of Day 2 of the TechDay meeting indicate HP officials are talking about offering Ibrix as a clustered NAS gateway in front of block storage, and eventually converging with LeftHand, though what exactly that convergence would look like isn’t clear.
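None of the vendors above has published its migration algorithm, but the core idea of sub-LUN tiering is straightforward: track I/O “heat” per extent rather than per LUN, then promote only the hottest extents to SSD. Here is a generic sketch under those assumptions; the extent size, promotion count and names are all hypothetical, and this is not HP’s, EMC’s or Compellent’s implementation:

```python
from collections import Counter

EXTENT_SIZE = 1 << 20          # 1 MiB extents -- granularity is an assumption
PROMOTE_TOP_N = 8              # how many hot extents fit in the SSD tier

heat = Counter()               # I/O count per extent within one LUN

def record_io(lun_offset: int) -> None:
    """Tally an I/O against the extent containing this byte offset."""
    heat[lun_offset // EXTENT_SIZE] += 1

def extents_to_promote() -> list[int]:
    """Pick the hottest extents for migration to the SSD tier."""
    return [extent for extent, _ in heat.most_common(PROMOTE_TOP_N)]

# Simulate a skewed workload: most I/O lands on a couple of extents.
for offset in [0, 100, 2**20 + 5, 0, 50, 2**20]:
    record_io(offset)
print(extents_to_promote())    # -> [0, 1], hottest extent first
```

The payoff over LUN-level tiering is that only the megabytes doing the work consume expensive flash, rather than the whole volume.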

HP is also still executing on roadmap predictions it made last year, adding small-form factor SAS drives across its storage arrays, beginning with the MSA line. The EVA is slated for a refresh with 2.5-inch SAS drives by the end of this year.

HP also expanded its Windows-based NAS products for SMBs today, with the introduction of the small office/home office (SOHO) X500 Data Vault series and new high-availability (HA) options for its X3000 Windows Storage Server 2008 product line.


September 28, 2009  6:41 PM

Symantec sounds alarm on SMB disaster recovery



Posted by: Beth Pariseau
small business storage

Symantec Corp. says the results of a recent worldwide survey of 1,653 small and midsize businesses (SMBs) and those who do business with them show a gap between how these companies perceive their disaster recovery plans and how prepared for disaster they actually are.

The survey began by asking SMBs (which represented 70 percent of the respondents, with “small” companies defined as 10 to 99 employees and midsize as 100 to 499 employees) how confident they were in their ability to respond to a disaster. According to the survey results, around 82% are somewhat or very satisfied with their DR plan, 84% believe they are “very” or “somewhat” protected, and one in three responded that they believe customers would evaluate other vendors should they experience an outage.

But when the survey questions drilled down into the details of SMB DR plans, Symantec representatives noticed discrepancies in the responses. While the vast majority began by expressing confidence in their ability to survive a disaster, 47% also said they have no formal DR plan. An estimated 60% of company data is backed up in this market, with only one in five respondents backing up daily, and more than half expect they would lose more than 40% of their data in a disaster. More puzzling, while only a third of respondents said they expected their own customers to evaluate competitors in the event of a disaster or outage, 42% said they personally had switched vendors due to “unreliable computing systems” and 63% said such problems damaged their perception of an SMB vendor.

Pat Hanavan, Symantec’s VP for Backup Exec product management, admitted the answers to the questions about confidence may have been different if asked at the end of the survey rather than at the beginning. “My guess is the survey itself may have been an educational process for some,” he said.

It’s also important to remember how long it has taken enterprises, most of which have the benefit of internal expertise dedicated to data protection, to focus on formalized disaster recovery planning and technology. Many SMBs rely on a partner or non-technical employees to keep IT operations running, and also operate without the budget of the big guys.

The good news for SMBs starting to consider disaster recovery is that more and more vendors are focused on storage and data protection in their market these days, including a plethora of cloud services designed to host data and/or standby infrastructure for companies that can’t afford a full secondary data center.

If you’re an SMB working your way through disaster recovery planning, please feel free to share your experiences in the comments.


September 25, 2009  7:35 AM

09-24-2009 Storage Headlines



Posted by: Beth Pariseau
Podcasts

Stories referenced:

(0:23) Brocade expands battlefield with Cisco to encompass Data Center Ethernet and FCoE

(2:10) Data Domain adds cascaded replication

(3:43) MaxiScale out of stealth with Flex clustered file system for Web companies

(5:37) Dell drops $3.9 billion on new services business

(6:35) Storage Decisions: Storage managers must explain retention, email archiving and compliance
Storage Decisions: Pros and cons of cloud storage technology


September 22, 2009  3:24 PM

Data Domain adds cascaded replication



Posted by: Dave Raffo
remote data protection

Data Domain’s first product upgrade since it became part of EMC is a step in the same direction the deduplication specialist was heading before the acquisition.

Data Domain today upgraded its operating system with an emphasis on improving its replication capabilities. Data Domain Replicator now supports cascading (multithreaded) replication that lets customers automatically replicate across more than two sites with bi-directional replication. It also expanded its fan-in to a maximum of 180-1 for remote sites.

Replication has been an area of concentration for Data Domain and other deduplication vendors this year. In May, Data Domain bumped its fan-in to a maximum of 90-1 and added full-system replication mirroring. IBM Diligent, Sepaton, and Quantum have all beefed up or added replication to deduplication products this year.

The number of sites supported by Replicator depends on the Data Domain system being run. Data Domain’s largest system, the DD880, now supports 180-1 fan-in, with the DD690 getting up to 90-1 and the midrange DD565 boxes supporting 45-1.

“We actually have customers asking for fan-in to more sites,” said Brian Biles, Data Domain VP of product marketing.

Biles says the number of sites that can replicate to the data center is determined by the amount of resources allocated for each operation. The idea is to keep them balanced for operations such as reads, writes and replication. “As systems get bigger and faster, we can apply more of those resources to replication,” he says. “The DD880 can support more streams coming in [than other Data Domain devices].”

Cascaded replication copies multiple threads at the same time. This doubles the amount of throughput getting replicated in most cases, Biles said.
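Data Domain hasn’t detailed the mechanics, but “multiple threads at the same time” conceptually just means several replication streams in flight at once instead of one serial copy. A toy sketch of the idea follows; the names and structure are hypothetical, not Data Domain’s code:

```python
from concurrent.futures import ThreadPoolExecutor

def replicate_segment(segment: str) -> str:
    """Stand-in for shipping one deduplicated segment to the next hop;
    a real system would send only segments the target doesn't already hold."""
    return f"replicated {segment}"

segments = [f"seg-{i}" for i in range(8)]

# Serial replication ships one segment at a time. Keeping several streams
# in flight at once is where the claimed throughput gain comes from.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(replicate_segment, segments):
        print(result)
```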

Biles, a Data Domain founder, says EMC execs have given Data Domain the go-ahead to continue to execute the roadmap it drew up before EMC’s $2.1 billion acquisition.

“All discussions so far have been encouraging us to stay on the same course we were on, and to do more of it,” Biles said. “I think you’ll see a lot of the same things next year as you saw this year – an emphasis on scaling, and tightening our link with backup and archiving software.”

He said that includes work with Symantec’s NetBackup OpenStorage (OST) interface, even though Symantec and EMC are rivals in the backup game. “Absolutely,” Biles said when asked if there would be tighter integration with OST. “Expect to see more and more over time.”

