Yottabytes: Storage and Disaster Recovery

Sep 27, 2012, 11:11 AM GMT

3 Things the New York Times Data Center Story Left Out

Sharon Fisher

Certainly this won’t be the only blog post taking New York Times writer James Glanz to task for his features on data center power use. But there are three specific areas he missed.

Virtualization. In talking about how under-utilized data center servers are, and in appearing to limit himself to less-than-state-of-the-art facilities, Glanz failed to notice how prevalent virtualization is becoming. Virtualization enables an organization to run numerous “virtual servers” inside a single physical server, which results in much higher utilization. “[V]irtualized systems can be easily run at greater than 50% utilization rates, and cloud systems at greater than 70%,” writes Clive Longbottom in SearchDataCenter.

“[I]n many cases the physical “server” doesn’t even exist since everyone doing web at scale makes extensive use of virtualization, either by virtualizing at the OS level and running multiple virtual machines (in which case, yes, perhaps that one machine is bigger than a desktop, but it runs several actual server processes in it) or distributing the processing and storage at a more fine-grained level,” writes Diego Doval in his critique of the New York Times piece. “There’s no longer a 1-1 correlation between “server” and “machine,” and, increasingly, “servers” are being replaced by services.”

“Although the article mentions virtualization and the cloud as possible solutions to improve power utilization, VMware is not mentioned,” agrees Dan Woods in Forbes’ critique of the piece. “If the reporter talked to VMware or visited their web site, he would have found massive amounts of material that documents how thousands of data centers are using virtualization to increase server utilization.”
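
To see why consolidation matters for power, here is a back-of-the-envelope sketch. The fleet size, wattage, and utilization figures are hypothetical illustrations, not numbers from Glanz’s reporting; only the target utilization draws on Longbottom’s 50%-70% range above.

import math

# Back-of-the-envelope server consolidation math. All inputs are
# hypothetical except the target utilization, which sits between
# Longbottom's 50% and 70% figures.
physical_servers = 100     # standalone boxes, one application each
avg_utilization = 0.08     # ~8%, the kind of under-use Glanz describes
watts_per_server = 300     # assumed average power draw

# Useful work, measured in "fully busy server" equivalents
useful_work = physical_servers * avg_utilization            # 8.0

# Pack that work onto virtualized hosts run at 60% utilization
target_utilization = 0.60
hosts_needed = math.ceil(useful_work / target_utilization)  # 14

old_power = physical_servers * watts_per_server             # 30,000 W
new_power = hosts_needed * watts_per_server                 #  4,200 W
print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts")
print(f"Power: {old_power} W -> {new_power} W "
      f"({100 * (1 - new_power / old_power):.0f}% less)")

Even leaving generous headroom for failover, the consolidation factor, and with it the power saving, is large.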

Storage. Similarly, Glanz appeared unaware of advances in storage technology, even though some of them are taking place in the very data centers he lambasted in his articles. In Prineville, Ore., for example, not all that far from the Quincy, Wash., data centers he criticized, Facebook is designing its own storage hardware to eliminate unnecessary parts, as well as setting up low-cost, slow-access storage that is spun down most of the time.

Facebook, which does this research precisely because of the economies of scale in its massive data centers, is making similar advances in servers. Moreover, the company’s Open Compute Project is releasing all these advances to the computer industry in general to help it take advantage of them, too.

In addition, Glanz focused on the “spinning disks” of the storage systems, apparently not realizing that organizations like eBay are increasingly moving to solid-state “flash” storage, which uses much less power.
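
The power arithmetic behind both ideas, mostly-spun-down disks and flash, is simple. The wattages below are assumed, typical 2012-era figures used for illustration; they do not come from the Times piece or from Facebook or eBay:

# Rough storage-power comparison; wattages are assumed, typical
# 2012-era figures, not numbers from the article.
hdd_active_watts = 8.0    # 3.5" enterprise drive, platters spinning
hdd_idle_watts = 1.0      # same drive, spun down
ssd_watts = 2.0           # typical SSD under load

drives = 10_000           # hypothetical archival tier
spun_down = 0.9           # fraction of time spun down, per the
                          # slow-access approach described above

always_spinning = drives * hdd_active_watts
cold_tier = drives * (spun_down * hdd_idle_watts
                      + (1 - spun_down) * hdd_active_watts)
all_flash = drives * ssd_watts

print(f"Always-spinning disks: {always_spinning / 1000:.0f} kW")
print(f"Mostly spun-down tier: {cold_tier / 1000:.0f} kW")
print(f"All-flash tier:        {all_flash / 1000:.0f} kW")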

Also, storage just isn’t as big a deal as it used to be, or as the story makes it out to be. “A Mr Burton from EMC lets slip that the NYSE ‘produces up to 2,000 gigabytes of data per day that must be stored for years’,” reports Ian Bitterlin of Data Center Dynamics in his critique of the New York Times piece. “A big deal? No, not really, since a 2TB (2,000 gigabytes) hard-drive costs $200 – less than a Wall Street trader spends on lunch!”
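
Spelling out Bitterlin’s arithmetic, using only the two figures he cites (raw 2012 drive prices, deliberately ignoring redundancy, power, and administration):

# Bitterlin's point, made explicit: raw drive cost of the NYSE's
# data at 2012 prices. Deliberately ignores RAID/replication and
# operating costs; it's an order-of-magnitude argument.
daily_data_gb = 2_000        # "up to 2,000 gigabytes of data per day"
drive_capacity_gb = 2_000    # one 2 TB drive
drive_cost_usd = 200         # 2012 street price

drives_per_year = 365 * daily_data_gb / drive_capacity_gb   # 365
raw_cost = drives_per_year * drive_cost_usd                 # $73,000

print(f"About {drives_per_year:.0f} drives per year, "
      f"roughly ${raw_cost:,.0f} in raw disk")

Around $73,000 a year in raw disk is a rounding error for a Wall Street trading operation.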

Disaster recovery. Glanz also criticized data centers for redundancy — particularly their having diesel generators on-site to deal with power failures — apparently not realizing that such redundancy is necessary to make sure the data centers stay up.

And yet, even with all this redundancy, there have been a number of well-publicized data center failures in recent months caused by events as mundane as a thunderstorm. Such outages can cost up to $200,000 per hour for a single company — and a data center such as Amazon’s can service multiple companies. If anything, one might argue that the costs of downtime require more redundancy, not less.
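
A crude expected-cost comparison shows why. The only number below taken from the text is the $200,000-per-hour downtime figure; the tenant count, outage hours, and generator cost are hypothetical placeholders:

# Crude expected-value case for on-site generators. Only the
# $200,000/hour downtime figure comes from the text above;
# everything else is a hypothetical placeholder.
downtime_cost_per_hour = 200_000
companies_served = 10       # multi-tenant facility, assumed
outage_hours_covered = 4    # hours/year the generators would cover

downtime_cost_avoided = (downtime_cost_per_hour * companies_served
                         * outage_hours_covered)    # $8,000,000
generator_cost_per_year = 500_000   # amortized capex, fuel, and
                                    # testing (assumed)

print(f"Downtime cost avoided: ${downtime_cost_avoided:,}/year")
print(f"Generator cost:        ${generator_cost_per_year:,}/year")

Even under far more pessimistic assumptions, the generators look like cheap insurance.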

Of course it’s important to ensure that data centers are making efficient use of power, but it’s also important to understand the context.

2 Comments on this Post

 
  • Sharon Fisher
    [...] a systems-and-software perspective, Sharon Fisher specifies “3 Things the New York Times Data Center Story Left Out” for IT Knowledge [...]
  • thomasmckeown55
    The DevOps bottom line: if your civilian friends try to talk to you about the dirty business you’re in, assure them it was all a big misunderstanding. Keep working on the real problems of security, availability, recovery, and operational efficiency already before you. The solutions–solutions in terms of engineering systems which fulfill business requirements–you create within a traditional datacenter framework will also go a long way toward making the most of the electricity, pollution, and other costs contingent on computing. We’re getting better at this every year, and none of the evidence the Times presents deserves more attention. (http://bit.ly/QnGzM1)
