Yottabytes: Storage and Disaster Recovery


March 7, 2011  5:42 PM

Why Is Hitachi GST Selling Itself to Western Digital?

Sharon Fisher

There doesn’t seem to be much question about why Western Digital would want to acquire Hitachi GST, an intention it announced today. Why wouldn’t the #2 hard drive manufacturer (or #1, depending on how you count) want to acquire the #3 hard drive manufacturer (or #5, depending on how you count)? Especially when it comes with a pedigree like Hitachi GST’s, formed from IBM’s hard drive division in 2002 — and when the deal could let it leapfrog #1 Seagate? Especially when the acquisition could put it into the enterprise market?

What’s more of a question is why Hitachi GST, which as recently as last November was formally planning to go public, decided to chuck it all and get acquired. In the quarter ending December 31, results of which were announced on February 7, Hitachi GST showed 25% growth over the previous quarter and 11% growth over the same quarter a year before. In the quarter ending September 30, results of which were announced on November 3, it showed 16% growth over the same quarter a year before. “Being in good health and profitable, HGST is in good financial position to succeed for an eventual IPO,” Storage Newsletter said last November.

There appear to be several reasons.

First, solid-state storage was making it more challenging for a hard drive manufacturer to succeed in an IPO, said Reuters. (Ironically, Hitachi GST has some developments in SSD that could help here.) But Hitachi GST faced some expensive retooling, Chris Mellor of The Register said in November. “Hitachi GST is profitable but, like all the disk drive companies, it faces a costly transition from today’s PMR recording technology to its successor,” he said then. “It also needs to enlarge its manufacturing operations to pump out more disk drives if it is to gain market share and make more money.” “Maybe the certainty of WD’s dollars was stronger than the variable outcome of an IPO for Hitachi,” he said today.

Hitachi GST concluded that being acquired was the “most effective path to take and an excellent fit,” said Hitachi GST CEO Steve Milligan, in an investor conference call.

In fact, at the same time Hitachi GST was considering going public, Seagate was considering going private — which it ended up deciding not to do, but which led Western Digital at the time to consider buying Seagate, Mellor said. While Seagate decided not to go that route — possibly because of concern over antitrust issues, according to Bloomberg — that may have given Western Digital the idea to acquire market share, he said. Antitrust is less of an issue with a Hitachi GST acquisition because Western Digital and Hitachi GST are considered to be in different markets, Bloomberg cited Matt Bryson, an analyst at Avian Securities LLC, as saying.

In addition, the Hitachi parent company was looking to unload Hitachi GST, hence the IPO talk in the first place.  Hitachi is looking to raise money to invest in less volatile, more innovative areas such as power plants, smart grids, batteries, and railway systems, Reuters said. If going public wasn’t going to work, then an acquisition was about the only alternative.

Hitachi GST also hadn’t had a great 2009, and, in fact, was falling behind technologically, Mellor wrote in August 2009. “HGST has virtually no chance of catching up with the two industry leaders unless it maintains areal density equality with them or, even better, gets an edge,” he said.

Another factor behind the Western Digital acquisition could be Milligan himself, who joined Hitachi GST as chief financial officer in 2007 — after being senior vice president and CFO for Western Digital, according to his company bio. “He was named Hitachi GST president in early 2009 and president and CEO in December of the same year.” In other words, a numbers guy was most likely put in charge when parent company Hitachi was trying to decide what to do with the company, as opposed to a technology guy who might be emotionally invested in the products.

Western Digital will pay $3.5 billion in cash and $750 million in stock for HGST, securing a term loan of $2 billion and a revolving credit facility of $500 million to help fund the deal, Reuters said. Parent company Hitachi will end up owning about 10% of Western Digital. The companies expect the deal to conclude in 4-12 months, most likely in the September quarter, to be immediately accretive to earnings, and to result in “opex synergies of 9 to 10%” — likely meaning layoffs of duplicate personnel — all according to the investor call.

February 28, 2011  2:37 PM

Is Modern Technology Included in Your Disaster Recovery Plan?

Sharon Fisher

A variety of articles and surveys have come out recently that indicate that organizations aren’t including modern technology such as social media and mobile devices in their disaster recovery plans — which could hamper their recovery from a disaster. Incidents such as the Internet shutdown in Egypt have demonstrated that companies need more alternatives, and that a “disaster” may look different from what companies typically plan for.

For example, Janco Associates conducted a review of 215 disaster recovery and business continuity plans and found that only 53 (25%) of them included social networks as a tool during the recovery process. In a whole variety of situations, including Egypt (as well as the recent earthquakes in New Zealand), people have used Twitter and Facebook to let others know what was going on, request aid, and coordinate recovery resources. Granted, this isn’t what people always think of when they think of “disaster recovery,” but the first item on any disaster recovery plan should be ensuring the safety of one’s people.

Similarly, the widespread use of mobile devices is making it easier for organizations to continue work outside a disaster’s area of impact. Last year found the Norwegian Prime Minister running his country from an iPad when he was stuck at JFK due to ash plumes from the Icelandic volcano.

Keep in mind, though, that such methods still require a functioning Internet. But how about plain cell phones and text messaging? There are reportedly 5 billion cell phones in use worldwide, which dwarfs the 1.9 billion Internet users, according to an article in CMSwire. Text messages can be a more efficient way of reaching people than messages that rely on a functioning Internet, the publication noted in another article: “Though there’s no denying that Twitter and Facebook have also become great channels for communication, when the Internet gets shut down, the best way to send alerts is with text messaging.”
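For organizations that want to fold text messaging into their plans, here is a minimal sketch of what an automated SMS alert might look like, assuming a Twilio account and its Python client; the credentials, phone numbers, and recipient list are placeholders, and any SMS gateway would work just as well.

    # Minimal sketch of an SMS-based disaster alert (illustrative only).
    # Assumes a Twilio account; credentials and numbers below are placeholders.
    from twilio.rest import Client

    ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder account SID
    AUTH_TOKEN = "your_auth_token"                       # placeholder auth token
    ALERT_FROM = "+15005550006"                          # placeholder sending number

    # Placeholder employee contact list; in practice this would come from HR records.
    recipients = ["+12025550123", "+12025550124"]

    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    for number in recipients:
        client.messages.create(
            to=number,
            from_=ALERT_FROM,
            body="DR alert: main office is closed today. Reply by text to check in.",
        )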

However, mobile devices don’t yet seem to be included in disaster recovery plans, except as a possible source of disaster themselves through their loss or the loss of data on them, according to an article by Linda Tucci in SearchCIO.

Recent events have shown us two things: First, that disasters may look different from what we expect, and second, that we need to look at all the tools at our disposal in recovering from them.


February 25, 2011  11:56 PM

Moon Project Switches Storage

Sharon Fisher

Ever been in the middle of a project and found out the storage system wasn’t big enough?

Now, imagine the device on the other end, generating the data, is a giant camera taking pictures of the moon, and you see the problem.

The Lunar Orbiter Laser Altimeter (LOLA) uses laser altimetry, a form of remote sensing, to draw a precise topographical map of the moon. When it’s done, scientists will basically have a GPS system for the moon, all ready to go the next time we send people there. The purpose is to identify ideal landing sites and areas of permanent shadow and illumination. (Depending on what sorts of cameras the spacecraft carries, imagery could also help scientists determine the amount of water, the type of dirt, and even whether there’s any plant life on the moon, as well as the best places to put a cell tower.)

Basically, according to NASA, the technology works by splitting a single laser pulse into five beams, which then strike and are backscattered from the lunar surface. From each return pulse, the LOLA electronics determine the time of flight, which, given the speed of light, provides a precise measurement of the range from the spacecraft to the lunar surface.
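In other words, the range calculation itself is just the round-trip travel time multiplied by the speed of light and divided by two. A minimal sketch of that arithmetic (illustrative only, not NASA’s actual processing code) looks like this:

    # Minimal sketch of laser-altimeter ranging (illustrative, not LOLA flight code).
    # One-way range = speed of light * round-trip time of flight / 2.
    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def range_from_time_of_flight(round_trip_seconds: float) -> float:
        """Convert a round-trip laser pulse time into a one-way distance in meters."""
        return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

    # Example: a pulse that returns after about 333 microseconds implies the
    # spacecraft is roughly 50 kilometers above the lunar surface.
    print(range_from_time_of_flight(333e-6))  # ~49,915 meters

(Getting to one meter of vertical precision means resolving the return pulse to within a few nanoseconds, which is where the hard engineering lives.)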

It’s the same sort of technology spy satellites use on the Earth now. The difference between this and the sort of imagery done of the moon before is one of detail. Where previous images had errors of one to ten kilometers (about 0.62 to 6.2 miles), the LOLA system is down to the level of 30 meters (almost 100 feet) or less spatially and one meter (almost 3.3 feet) vertically. (In comparison, commercial satellites can take pictures of things on the Earth as small as half a meter — and government satellites can take pictures at an even higher resolution.)

What this means, though, is that after a year up in space, LOLA had taken nearly three billion range measurements, compared with about eight million to nine million each from three recent international lunar missions, NASA said.

That’s a lotta data.

So that’s where the new storage system came in. The Arizona State University School of Earth and Space Exploration (SESE) deployed an EMC Isilon network attached storage (NAS) system to hold the tens of thousands of moon images. SESE can also replicate its lunar imagery to a second Isilon NL cluster, using Isilon’s SyncIQ asynchronous replication application.

The project is expected to last until 2014, and the school’s previous NAS — which wasn’t identified, but which according to SearchStorage.com was from Network Appliance — couldn’t handle the projected load, which adds up to more than a petabyte of capacity in the Isilon system.

Ironically, though EMC — which purchased Isilon for $2.25 billion last fall — sent out the press release announcing the project, it was pretty much all set up before the EMC purchase and had little to do with EMC itself. In fact, the project manager wasn’t all that thrilled with the news of the EMC purchase, and he is somewhat concerned about EMC’s ability to continue to support the project, SearchStorage said.

Ernest Bowman-Cisneros, manager of the LROC Science Operations Center at SESE, was reportedly testing Isilon’s system at the same time EMC was negotiating the Isilon acquisition, but didn’t know it. “It wasn’t until after we signed on the dotted line that we found out about EMC,” SearchStorage quoted Bowman-Cisneros as saying. “By the end of December, we had completed our testing and decided to go with their system. At this point, [the acquisition has] been inconsequential to us. My only concern is that EMC will continue to develop and support the model I have.”

Here’s hoping. It’s another three years before the project is completed.


February 14, 2011  11:38 PM

Millions of Medical Records Stolen from Unlocked Van

Sharon Fisher

All the data security in the world doesn’t help if you don’t lock the damn door.

Medical and financial records of about 1.7 million people — mostly patients — from Jacobi Medical Center, North Central Bronx Hospital, Gunhill Health Center, and Tremont Health Center in the Bronx, New York, were stolen in December, iHealthbeat reported. The news is coming out now because the 1.7 million people are all receiving letters explaining the problem to them and offering them an information hotline, customer care centers, and free credit monitoring and fraud resolution services for one year if they register within the next 120 days, according to an article in the New York Times.

Was it a Russian hacker? Malware?

No, the problem is that the affected information was stored on magnetic data tapes left in an unlocked van belonging to GRM Information Management Services, the city’s health record vendor. The tapes were reportedly being moved to “a secure storage location.”

It sounds like the punch line to a joke — the saying “Never underestimate the bandwidth of a station wagon full of mag tapes speeding down the highway” has been around since at least the 1980s. But apparently it’s all too real. The New York City Health and Hospitals Corporation (HHC) has since fired GRM and has filed suit against the company to hold it responsible for covering all damages related to the loss of the data.

NBC New York quoted an HHC spokeswoman as saying that there had been no reports of any access to the data, and that “highly specialized and technical expertise and certain tools” would be required for the thief to gain access to the data. Nonetheless, the organization is legally required to notify all the victims and take steps to mitigate any damages. (To add insult to injury, this was the third time the organization had been hit by theft, though the previous instances were much smaller.)

Lessons to be learned? That the first step in storage and backup security is controlling physical access, and that data loss is less often caused by hackers and viruses than is commonly believed.


February 11, 2011  2:06 PM

Storage Costs Forcing Governments to Move GIS to the Cloud

Sharon Fisher

Numerous government entities, ranging from local to national, use geographic information systems (GIS) software as a way of collecting and displaying information on a geographic basis. GIS performs a variety of jobs, including developing maps; tracking land development; and placing infrastructure such as roads, cell towers, and fire stations.

However, GIS files can be humongous. The Oregon Geospatial Enterprise Office, for example, currently manages 4 TB of geospatial data on behalf of the state’s enterprise GIS community, an amount expected to grow to nearly 15 TB in the next few years. The increasing size and cost of the storage required — and of the people to manage it — are forcing a number of governments to look at moving GIS storage to the cloud, according to an article by Rutrell Yasin in Government Computer News.

Results are due today for a Request for Information submitted by the Western States Contracting Alliance, a consortium consisting of Alaska, Arizona, California, Colorado, Hawaii, Idaho, Minnesota, Montana, Nevada, New Mexico, Oregon, South Dakota, Utah, Washington, and Wyoming. This particular RFI was submitted by Montana, with active participation from the states of Colorado, Oregon, and Utah, but it may result in a desire to place some, or all, GIS services for the participating states in the cloud, the RFI said. In fact, this is potentially true for all 51 members (50 states plus the District of Columbia) of the National Association of State Procurement Officials (NASPO) Cooperative.

“Basically, it is our GIS folks who are saying storage is expensive” and want to find cheaper methods of storing GIS data, Utah CIO Stephen Fletcher was quoted as saying in the GCN article.


February 5, 2011  2:25 PM

Egypt’s Internet Blockage: Could It Happen Here?

Sharon Fisher

It didn’t take long for the question of “How did Egypt shut down the Internet?” to turn into “Could it happen here?” and “What do we do if it does?”

To many people in the U.S., the first inkling of trouble in Egypt was not from the political pages, but from Facebook, Twitter, and the press talking about the technical issues of the Egyptian government shutting down the Internet.

U.S. Senator Joe Lieberman (ID-Conn.) attempted to leverage the Egyptian situation as a means of encouraging interest in his own cybersecurity bill, which has been languishing in the U.S. Congress since June 2010 (and a previous version before that). While many media reports acted as though Lieberman’s action was new, it was the same old bill, and no new action has been taken on implementing the so-called Internet kill switch in the U.S. — in fact, the bill limits the authority to shut down the Internet that the President already has.

But for those panicked that President Barack Obama was planning to shut down the Internet, the Egyptian situation might have been a blessing in disguise. In the same way that “The Net interprets censorship as damage and routes around it,” as Internet pioneer John Gilmore put it in Time magazine in 1993, the Internet is likely to treat attempts at government control as damage, too. Even during the week or less that the Egyptian Internet was down, people both inside and outside Egypt were looking for — and found — workarounds.

“All these alternative routes to the Internet popped up in less than five days,” said writer Mike Elgan. “The longer the shutdown dragged on, the more new ways to connect went online. It’s now clear that any sustained Internet shutdown could be circumvented no matter what.”

Moreover, freedom-of-information advocates — and savvy companies — worldwide will learn from the Egyptian shutdown and construct services intended to circumvent future attempts, pundits said.

“Back before the internet, many of us early computer hobbyists networked on something called Fidonet. It was a simple peer-to-peer network where users’ computers would just call each other at night through their old-fashioned modems, exchange information and then move on. It was slow — e-mail could take a day or two to reach someone under this scheme — but it suggested a way of doing things independent of a centralized authority,” reminisced media theorist Douglas Rushkoff in CNN.

While this might require that we go back to dial-up Internet, it seems clear that the Egyptian incident will act as a wake-up call for anyone concerned about government Internet intervention.


