Yottabytes: Storage and Disaster Recovery


April 6, 2011  6:35 PM

So What is CDMI, Anyway?

Sharon Fisher

CDMI stands for Cloud Data Management Interface and is an industry standard defined and controlled by the Storage Networking Industry Association (SNIA).

“The SNIA CDMI architecture standard defines the functional interface that applications will use to create, retrieve, update and delete data elements from the cloud,” according to Mezeo Software (quoting the SNIA), which announced this week that it planned to support the standard in its cloud storage products. “Based on a REST HTTP protocol, the CDMI standard requires adopters to implement strong access controls and to provide for encryption of the data on the underlying storage media for secure multi-tenant cloud environments.”

The SNIA goes on to say that CDMI lets clients discover the capabilities of the cloud storage offering and manage containers and the data placed in them, and lets administrative and management applications manage containers, accounts, security access, and monitoring/billing information. In addition, metadata can be set on containers and their contained data elements through this interface, SNIA says.

In other words, CDMI means that users have a standard interface for performing such functions as backups, and defines a set of standard terminology regarding users and types of data, regardless of the underlying storage technology in the cloud.
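To make that concrete, here is a rough sketch of what talking to a CDMI endpoint looks like, using Python's requests library. The endpoint URL, credentials, container name, and metadata values are placeholders rather than a real service; the headers and content types follow the conventions of the CDMI 1.0 specification.

    # Rough sketch of CDMI's REST interface (CDMI 1.0 conventions).
    # The endpoint, credentials, and metadata below are placeholders.
    import json
    import requests

    ENDPOINT = "https://cloud.example.com/cdmi"   # hypothetical CDMI endpoint
    AUTH = ("backup-admin", "not-a-real-password")
    CDMI_HEADERS = {"X-CDMI-Specification-Version": "1.0"}

    # Create a container to hold backup objects.
    requests.put(
        f"{ENDPOINT}/backups/",
        auth=AUTH,
        headers={**CDMI_HEADERS,
                 "Content-Type": "application/cdmi-container",
                 "Accept": "application/cdmi-container"},
        data=json.dumps({"metadata": {"purpose": "nightly-backups"}}),
    )

    # Store a data object in that container, with its own metadata.
    # A binary payload would normally be base64-encoded into "value".
    requests.put(
        f"{ENDPOINT}/backups/server1-2011-04-06.tar.gz",
        auth=AUTH,
        headers={**CDMI_HEADERS,
                 "Content-Type": "application/cdmi-object",
                 "Accept": "application/cdmi-object"},
        data=json.dumps({"mimetype": "application/x-gzip",
                         "metadata": {"retention": "90d"},
                         "value": "<base64-encoded backup data>"}),
    )

The point of the standard is that the same two PUTs, with the same headers and metadata conventions, should work against any conforming cloud storage provider.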

Vendors such as Bycast, Cisco, Hitachi Data Systems, Iron Mountain, NetApp, Olocity, Oracle, and QLogic have taken part in developing the specification, which came out in February, 2010 after the group was formed in 2009. There is also a mailing list devoted to the specification.

As with other industry standards before it, such as TCP/IP, vendors will be holding “plugfests” to ensure that their different implementations of the CDMI specification can work together. One will be held later this month in Colorado.

CDMI is of increasing interest to users: according to a recent survey of users from Storage Strategies NOW, 53% said that SNIA’s CDMI will be part of their cloud storage RFPs/proposals, and 30% of respondents said SNIA’s CDMI was very important for a public/hybrid cloud standard.

March 31, 2011  11:32 PM

Companies, Governments Lose Personal Data

Sharon Fisher

You know, it’s not even that March was all that unusual. But here, on World Backup Day, it’s worth looking at some of the incidents that happened this month:

  • The personal information — including the names, Social Security numbers, addresses, phone numbers, and dates of birth — of 13,000 individuals who had filed compensation claims with BP after last year’s disastrous oil spill may have been compromised after a laptop containing the data was lost by a BP employee.
  • The world’s largest stem cell bank, Cord Blood Registry, mailed data-breach warning letters to some 300,000 people after storage tapes and a laptop were stolen from an employee’s car.
  • Insurer Health Net waited until March 14 to disclose a data breach discovered on Jan. 21 involving the loss of nine server drives and the data of 2 million customers, employees, and health care providers.
  • A USB memory stick containing the details of around 4,000 people has been lost by Leicester City Council.
  • Taxpayers’ Social Security numbers, confidential child abuse reports and personnel reviews of New Jersey workers nearly went to the highest bidder after the state sent surplus computers out for auction.
What the heck is going on?
Sadly, it’s not even all that unusual. And to make matters worse, such breaches are getting more expensive. According to the Ponemon Institute, which did a survey for Symantec Corp., data breaches continue to cost organizations more every year. The average organizational cost of a data breach this year increased to $7.2 million, up 7 percent from $6.8 million in 2009. Total breach costs have grown every year since 2006. Data breaches in 2010 cost their companies an average of $214 per compromised record, up $10 (5 percent) from last year, the Institute said.
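To put the per-record figure in perspective, here is a back-of-envelope estimate applying the Ponemon average to the incidents listed above. It is illustrative only; actual costs vary widely from breach to breach.

    # Back-of-envelope exposure estimate using the Ponemon/Symantec average
    # quoted above ($214 per compromised record in 2010). Illustrative only.
    PER_RECORD_COST = 214

    incidents = {
        "BP laptop": 13_000,
        "Cord Blood Registry tapes": 300_000,
        "Health Net drives": 2_000_000,
    }

    for name, records in incidents.items():
        print(f"{name}: roughly ${records * PER_RECORD_COST:,}")
    # Health Net alone works out to about $428 million at the average rate.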
Such incidents are so prevalent that the Online Trust Alliance recommends that organizations have a plan in place for dealing with them, indicating it’s an issue of not if, but when. The only winners in these situations appear to be the credit-monitoring bureaus.
Part of the problem is that the lost data wasn’t always encrypted (though in the Leicester case, it appears the data was encrypted and the stick was stolen deliberately). On the other hand, how often do people lose the password or the key, or through some other mishap lose legitimate access to their own data?
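For what it's worth, encrypting data before it ever touches portable media is not hard. The sketch below uses Python's third-party cryptography package purely as an illustration (any disk- or tape-level encryption product makes the same point), and it also shows why key management is the flip side of the problem.

    # Minimal sketch: encrypt before the data reaches the USB stick or tape.
    # Uses the third-party "cryptography" package as an illustration only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # must be escrowed somewhere safe and separate
    cipher = Fernet(key)

    record = b"SSN: 123-45-6789"     # the kind of data lost in the breaches above
    token = cipher.encrypt(record)   # this ciphertext is what goes onto the media

    # A thief with the media but without the key sees only ciphertext...
    assert cipher.decrypt(token) == record
    # ...but so do you if the key is lost -- which is exactly the trade-off
    # raised above: key management matters as much as the encryption itself.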
One thing does seem clear: People aren’t learning. The Leicester, New Jersey, and Health Net incidents were followups to similar incidents in 2009.


March 26, 2011  8:13 AM

Governments Also Subject to E-Discovery Regulations

Sharon Fisher

When new rules for electronic discovery of documents in civil cases went into effect in December, 2006, there was some discussion at the time about whether governments and other public entities would also be subject to the same rules.

It’s taken more than four years, but it’s starting to look like they are.

According to a recent article in Law Technology News, “Recent decisions indicate that, despite the narrower scope of pretrial criminal discovery, the government may well be held to the same high standards of preservation and production of electronically stored information (ESI).” The article goes on to cite several such decisions.

In one case, the Federal Bureau of Investigation was criticized for not retaining copies of BlackBerry messages sent to a defendant. Consequently, the jury was given what’s called “adverse inference instructions,” which the article said “permits (but does not require) a jury to presume that the lost evidence is both relevant and favorable to the innocent party.” The jury subsequently found the defendant not guilty of all charges.

The FBI was lucky. Companies, in similar cases, have been fined up to $1.5 billion for failing to maintain records that the court considered discoverable.

In another case the article cited, the judge explicitly said that “[l]ike any ordinary litigant, the Government must abide by the Federal Rules of Civil Procedure. It is not entitled to special consideration concerning the scope of discovery, especially when it voluntarily initiates an action.”

In fact, in some courts, there is a movement afoot to use the e-discovery rules — originally defined for civil procedures — for criminal procedures as well, because there is not a corresponding set of rules for such procedures, the article went on to say.

Is the government ready? IE Discovery Inc. said it has surveyed legal, records management, and information technology (IT) personnel within the federal government about trends in e-discovery since 2007, and it recently released its 2010 survey, the 2010 Benchmarking Study of Electronic Discovery Practices for Government Agencies. The survey included 46 government attorneys, paralegals, and IT personnel from 24 government agencies.

Results included the following:

  • More than two-thirds of participants report that e-discovery processing is handled in-house.
  • 61% of those surveyed claimed to be “more confident” in their ability to manage e-discovery.
  • Government agencies have no standard approach to impose and manage litigation holds.
  • Many agencies do not engage in early data assessment to reduce the amount of data that must be processed and reviewed.
  • More than 40 percent of the agencies say that their e-discovery burden grew in the past year.
  • The number of agencies reporting budgeting as a top concern jumped by almost 30 percent from 2009 to 2010.
  • Almost one-half of agencies are now collecting “structured data” in repositories, databases, and similar systems.
  • The form of production varies greatly: almost 40 percent of respondents report producing documents in image and text formats, 37 percent in native file formats, and 41 percent on paper.
Now, not all of these results are good news. 41% still respond to requests on paper? Are you kidding? More than half aren’t collecting structured data in systems? How in the world are they doing it, in Longaberger baskets? They have no standard approach and do no early data assessment? Oy! Still, I’ll take IE’s word for it that this is an improvement. The courts seem to be indicating, though, that they’d better improve faster.
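For readers unfamiliar with the term, “early data assessment” generally just means culling obviously irrelevant material before it hits expensive processing and review: filtering by date range and file type, and dropping exact duplicates. A hypothetical sketch follows; the collection path, date window, and file types are made up for illustration.

    # Hypothetical early-data-assessment pass: keep only files in the relevant
    # date window, skip clearly non-responsive types, and drop exact duplicates.
    import hashlib
    from datetime import datetime, timezone
    from pathlib import Path

    COLLECTION = Path("/evidence/custodian_smith")   # made-up collection path
    WINDOW_START = datetime(2009, 1, 1, tzinfo=timezone.utc)
    WINDOW_END = datetime(2010, 12, 31, tzinfo=timezone.utc)
    SKIP_SUFFIXES = {".exe", ".dll", ".tmp"}         # unlikely to be responsive

    seen, keep = set(), []
    for path in COLLECTION.rglob("*"):
        if not path.is_file() or path.suffix.lower() in SKIP_SUFFIXES:
            continue
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if not (WINDOW_START <= modified <= WINDOW_END):
            continue
        digest = hashlib.sha1(path.read_bytes()).hexdigest()
        if digest in seen:           # exact duplicate already queued for review
            continue
        seen.add(digest)
        keep.append(path)

    print(f"{len(keep)} files forwarded to processing and review")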


March 18, 2011  3:35 PM

Japan Earthquake Affecting Flash Storage Production, Especially for iPad 2s

Sharon Fisher

Several analyst firms have come out with reports in the past week saying that flash storage production could be affected by the Japanese earthquake. In particular, this could delay manufacturing of Apple’s popular iPad 2.

iSuppli, in particular, has issued three separate press releases in the past week regarding the issue, one about the iPad 2 specifically, one about delays in components in general, and one about the industry’s dependence on Japanese-made components. For example, “Japanese companies, mainly Toshiba Corp., account for 35 percent of global NAND flash production in terms of revenue,” the company said.

Reuters quoted DRAMeXchange as saying that spot prices of NAND flash chips increased on Tuesday by nearly 3 percent after a 20 percent jump on Monday.

Micron, based in my home state of Idaho, stands to gain, according to several analysts quoted in an article by Matt Phillips in the Wall Street Journal. While Micron, too, has manufacturing facilities in Japan (despite what Raymond James chip analyst Hans Mosesmann was quoted as saying in Barron’s), they are located in south central Japan and were undamaged, according to an article by Anne Wallace Allen in the Idaho Business Review.

However, even undamaged facilities might take time to start up again, iSuppli warned. “While some of these suppliers reported that their facilities were undamaged, delivery of components from all of these companies is likely to be impacted at least to some degree by logistical issues now plaguing most Japanese industries in the quake zone. Suppliers are expected to encounter difficulties in getting raw materials supplied and distributed as well as in shipping out products. They also are facing difficulties with employee absences because of problems with the transportation system. The various challenges are being compounded by interruptions in the electricity supply, which can have a major impact on delicate processes, such as semiconductor lithography.” Aftershocks are also a factor, the organization warned.

iSuppli also noted that actual shortages aren’t likely to hit until later in the month or April, because there is typically a two-week inventory in the supply chain. However, prices are already going up due to the “psychological effect” of the earthquake, the company said.

While Japan is no stranger to earthquakes, the power of this one dwarfed previous quakes, said Jim Handy of Objective Analysis in a report on March 11. “The Taiwan earthquake in 1999 that caused significant damage in Taipei and stopped fabs in Hsin Chu was a magnitude 7.6, less than one tenth the power of Japan’s earthquake. The 1989 Loma Prieta earthquake that stopped production in Silicon Valley measured 6.9, or one hundredth the strength of today’s earthquake. Prior Japan earthquakes that have caused concerns to the semiconductor industry have been far smaller than today’s, including a 5.9 magnitude earthquake in September 2008, two measuring 6.0 and 6.8 in July 2007, and one measuring 6.9 in March of 2007.”

Handy also updated the company’s mondo chip map to reflect information it had learned from the various manufacturers since the earthquake.

[Image: Objective Analysis chip map of Japanese semiconductor facilities, updated after the earthquake]

Earthquakes can have multiple effects on fabrication plants, Handy said in an earlier report on a 2007 Japanese quake. “Typically an earthquake will disrupt the processing of any wafers that are on a photolithographic tool at the time that the earthquake struck,” he said. “Although a very large earthquake in close proximity to a fab can cause physical damage to the structure that is greater than the damage the building is designed to sustain, most fabs are designed to accommodate the kind of earthquake that is typical to the area. Fabs are built on special floating floors that isolate the internal equipment from external vibration ranging from tiny earth tremors or vibrations from a passing truck to minor earthquakes. Greater earthquakes may not cause damage but their vibrations can result in incidental damage to the products being processed.

“If there is a power loss, no matter how brief, wafers in a high-temperature process may have to be scrapped,” Handy continued. “If the power loss lasts 20-30 minutes or longer there may also be a period of unexpected downtime as furnaces are brought back to a stable temperature. Another possible difficulty would be possible breaches in the clean environment. Earthquake damage may even require recalibration and further losses of work in progress (WIP) than are spelled out here. Losses could run into multiple days, stopping product flow for a week or more.”

Ironically, flash memory chips are typically cheaper at this time of year, according to PCB Design 007. However, due to the earthquake, as well as to increased demand for iPads, that may be different this year, the website said.


March 14, 2011  11:46 PM

What Japan is Teaching Us About Disaster Recovery

Sharon Fisher

It’s difficult to write this when the full extent of the earthquake damage to northern Japan isn’t yet clear and the nuclear crisis is still escalating. But now — when the images are fresh not only to you but also to the managers who approve your disaster recovery projects — is the best time to think about how your company would handle being in a similar situation, as overwhelming and impossible to believe as that might seem right now.

Think about it. How many places are really safe from natural disasters? We’ve already seen how the Icelandic volcano shut down flights all over Europe. The Bay Area, Seattle, and Portland are all geologically active; many parts of the Southeast are vulnerable to hurricanes; the central U.S. is prone to tornadoes. In addition to earthquakes and volcanoes in the Asia Pacific area, the region is also subject to typhoons.

It’s easy to think that having a backup or replication system in place is enough, but after watching the widespread devastation in Japan, it’s clear that we need to be thinking about how to scale up our ideas of what kind of disaster we’re planning for.

1. Where are your backups, replicated servers, etc. located? Same building? Same city? Same state? If you didn’t realize it before, it’s clear now that a disaster can cover a massive distance and that backups need to be geographically dispersed, perhaps through the cloud. Also, even if you’re using the cloud, where is the data center actually located? If it’s someplace subject to natural disasters, such as earthquake- and wildfire-prone areas in California, it may not help you much. I know some companies that choose to have their backup sites located near Spokane, Wash., because it’s geologically boring.

2. And while you’re at it, how well is your company set up for remote employees? If employees are evacuated, is there a way they can work from where they are? Can employees in other parts of the world pick up the slack?

3. How well is your site and your backup site set up for emergency power? A big part of the problem with the Japanese nuclear reactors was that they didn’t plan for an extended power outage. While there were batteries to operate the cooling system, they lasted only a few hours. Some colocation facilities keep diesel fuel on hand to run generators; does yours? How long will it last? (There’s a simple way to check; see the sketch after this list.)

4. The good news — and there is some — is that the Internet reportedly held up remarkably well. Renesys, which has performed some interesting analyses of Internet shutdowns in Libya, has observed that much of Japan’s Internet traffic was unchanged. “It’s clear that Internet connectivity has survived this event better than anyone would have expected,” the company wrote in its blog. “The engineers who built Japan’s Internet created a dense web of domestic and international connectivity that is among the richest and most diverse on earth, as befits a critical gateway for global connectivity in and out of East Asia. At this point, it looks like their work may have allowed the Internet to do what it does best: route around catastrophic damage and keep the packets flowing, despite terrible chaos and uncertainty.”
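As promised in item 3, the runtime arithmetic is simple: fuel on hand divided by burn rate at the expected load. The figures below are illustrative assumptions, not vendor specifications.

    # Rough generator-runtime check for item 3 above. All figures are
    # illustrative assumptions; substitute your facility's real numbers.
    def runtime_hours(fuel_gallons: float, burn_gal_per_hour: float) -> float:
        """Hours of generator runtime from on-hand fuel at a steady load."""
        return fuel_gallons / burn_gal_per_hour

    on_hand_gallons = 1_000    # diesel kept on site (assumption)
    burn_rate = 18             # gallons per hour at roughly 250 kW (assumption)

    hours = runtime_hours(on_hand_gallons, burn_rate)
    print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) before refueling")
    # About 2.3 days at these numbers -- not much margin if roads are impassable
    # for a week, which is the gap the reactors' few hours of battery exposed.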

Consequently, communication with people outside the disaster zone has been better than after some natural disasters, with many people able to check in with loved ones fairly quickly, using social media such as Facebook and Twitter.

So think about your plan. Meanwhile, there are many ways to follow the developing situation in Japan, and to help victims in the ravaged country. Google, in particular, has collected a list of resources to keep informed about what’s happening. It could just as easily have been any of us, so think about how you can help.


March 7, 2011  5:42 PM

Why is Hitachi GST Selling Itself to Western Digital?

Sharon Fisher

There doesn’t seem to be much question about why Western Digital would want to acquire Hitachi GST, an intention it announced today. Why wouldn’t the #2 hard drive manufacturer (or #1 depending on how you count) want to acquire the #3 hard drive manufacturer (or #5 depending on how you count)? Especially when it comes with a pedigree like Hitachi GST’s, formed from IBM’s hard drive division in 2002 — so it could leapfrog #1 Seagate? Especially when the acquisition could put it into the enterprise market?

What’s more of a question is why Hitachi GST, which as recently as last November was formally planning to go public, decided to chuck it all and get acquired. In the quarter ending December 31, results of which were announced on February 7, Hitachi GST showed 25% growth over the previous quarter and 11% growth over the same quarter a year before. In the quarter ending on September 30, results of which were announced on November 3, Hitachi GST showed 16% growth over the same quarter a year before. “Being in good health and profitable, HGST is in good financial position to succeed for an eventual IPO,” Storage Newsletter said last November.

There appear to be several reasons.

First, solid-state storage was making it more challenging for a hard drive manufacturer to succeed in an IPO, said Reuters. (Ironically, Hitachi GST has some developments in SSD that could help here.) But Hitachi GST faced some expensive retooling, Chris Mellor of The Register said in November. “Hitachi GST is profitable but, like all the disk drive companies, it faces a costly transition from today’s PMR recording technology to its successor,” he said then. “It also needs to enlarge its manufacturing operations to pump out more disk drives if it is to gain market share and make more money.” “Maybe the certainty of WD’s dollars was stronger than the variable outcome of an IPO for Hitachi,” he said today.

Hitachi GST concluded that being acquired was the “most effective path to take and an excellent fit,” said Hitachi GST CEO Steve Milligan, in an investor conference call.

In fact, at the same time Hitachi GST was considering going public, Seagate was considering going private — which it ended up deciding not to do, but which led Western Digital at the time to consider buying Seagate, Mellor said. While Seagate decided not to go that route — possibly because of concern over antitrust issues, according to Bloomberg — that may have given Western Digital the idea to acquire market share, he said. Antitrust is less of an issue with a Hitachi GST acquisition because they are considered to be in different markets, Bloomberg cited Matt Bryson, an analyst at Avian Securities LLC, as saying.

In addition, the Hitachi parent company was looking to unload Hitachi GST, hence the IPO talk in the first place.  Hitachi is looking to raise money to invest in less volatile, more innovative areas such as power plants, smart grids, batteries, and railway systems, Reuters said. If going public wasn’t going to work, then an acquisition was about the only alternative.

Hitachi GST also hadn’t had a great 2009, and, in fact, was falling behind technologically, Mellor wrote in August 2009. “HGST has virtually no chance of catching up with the two industry leaders unless it maintains areal density equality with them or, even better, gets an edge,” he said.

Another factor behind the Western Digital acquisition could be Milligan himself, who joined Hitachi GST as chief financial officer in 2007 — after being senior vice president and CFO for Western Digital, according to his company bio. “He was named Hitachi GST president in early 2009 and president and CEO in December of the same year” — a numbers guy most likely put in charge when parent company Hitachi was trying to decide what to do with the company, as opposed to a technology guy who might be emotionally invested in the products.

Western Digital will pay $3.5 billion in cash and $750 million in stock for HGST, securing a term loan of $2 billion and a revolving credit facility of $500 million to help fund the deal, Reuters said. Parent company Hitachi will end up owning about 10% of Western Digital. The companies expect the deal to conclude in 4-12 months but most likely in the September quarter, to be immediately accretive to earnings, and to result in “opex synergies of 9 to 10%” — likely meaning layoffs of duplicate personnel — all according to the investor call.


February 28, 2011  2:37 PM

Is Modern Technology Included in Your Disaster Recovery Plan?

Sharon Fisher

A variety of articles and surveys have come out recently that indicate that organizations aren’t including modern technology such as social media and mobile devices in their disaster recovery plans — which could hamper their recovery from a disaster. Incidents such as the Internet shutdown in Egypt have demonstrated that companies need more alternatives, and that a “disaster” may look different from what companies typically plan for.

For example, Janco Associates conducted a review of 215 disaster recovery and business continuity plans and found that only 53 (25%) of them included social networks as a tool during the recovery process. In a whole variety of situations, including Egypt (as well as the recent earthquakes in New Zealand), people have used Twitter and Facebook to let people know what was going on, request aid, and coordinate recovery resources. Granted, this isn’t what people always think of when they think of “disaster recovery,” but the first item on any disaster recovery plan should be ensuring the safety of one’s people.

Similarly, the widespread use of mobile devices is making it easier for organizations to continue work outside a disaster’s area of impact. Last year found the Norwegian Prime Minister running his country from an iPad when he was stuck at JFK due to ash plumes from the Icelandic volcano.

Keep in mind, though, that such methods still require the use of a functioning Internet. But how about smartphones? There are reportedly 5 billion cell phones in use worldwide, which dwarfs the 1.9 billion Internet users, according to an article in CMSwire. Text messages can be a more efficient way of reaching people than can messages that rely on a functioning Internet, noted the publication in another article. “Though there’s no denying that Twitter and Facebook have also become great channels for communication, when the Internet gets shut down, the best way to send alerts is with text messaging.”

However, mobile devices don’t yet seem to be included in disaster recovery plans, except as a possible source of disaster themselves through their loss or the loss of data on them, according to an article by Linda Tucci in SearchCIO.

Recent events have shown us two things: First, that disasters may look different from what we expect, and second, that we need to look at all the tools at our disposal in recovering from them.


February 25, 2011  11:56 PM

Moon Project Switches Storage

Sharon Fisher

Ever been in the middle of a project and found out the storage system wasn’t big enough?

Now, imagine the device on the other end, generating the data, is a giant camera taking pictures of the moon, and you see the problem.

The Lunar Orbiter Laser Altimeter (LOLA) uses a technique known as remote sensing to draw a precise topographical map of the moon. When it’s done, scientists will basically have a GPS system for the moon, all ready to go the next time we send people there. The purpose is to identify ideal landing sites and areas of permanent shadow and illumination. (Depending on what sorts of cameras LOLA has, imagery could also help scientists determine the amount of water, the type of dirt, and even whether there’s any plant life on the moon, as well as the best places to put a cell tower.)

Basically, according to NASA, the technology works by splitting a single laser pulse into five beams. These beams then strike and are backscattered from the lunar surface. From the return pulse, the LOLA electronics determine the time of flight, which, accounting for the speed of light, provides a precise measurement of the range from the spacecraft to the lunar surface.
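In other words, the range is simply the round-trip travel time multiplied by the speed of light and divided by two. A quick sanity check in code; the 50-kilometer altitude is an assumed round number for illustration, not a LOLA specification.

    # Laser-altimetry range from time of flight: range = c * t / 2.
    # The 50 km altitude is an assumed round number, not an LRO/LOLA spec.
    C = 299_792_458.0  # speed of light, m/s

    def range_from_time_of_flight(round_trip_seconds: float) -> float:
        """Distance from spacecraft to surface for a given round-trip time."""
        return C * round_trip_seconds / 2.0

    t = 2 * 50_000.0 / C                          # round trip for 50 km of range
    print(f"{range_from_time_of_flight(t) / 1000:.1f} km")   # prints 50.0 km

A pulse that comes back about a third of a millisecond after it left corresponds to roughly 50 kilometers of range; timing that return precisely is what makes meter-level accuracy possible.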

It’s the same sort of technology spy satellites use on the Earth now. The difference between this and the sort of imagery done of the moon before is one of detail. Where previous images had errors of one to ten kilometers (about 0.62 to 6.2 miles), the LOLA system is down to the level of 30 meters (almost 100 feet) or less spatially and one meter (almost 3.3 feet) vertically. (In comparison, commercial satellites can take pictures of things on the earth as small as half a meter — and government satellites can take pictures at an even higher resolution.)

What this means, though, is that after a year up in space, LOLA had taken nearly three billion range measurements, compared with about eight million to nine million each from three recent international lunar missions, NASA said.

That’s a lotta data.

So that’s where the new storage system came in. The Arizona State University School of Earth and Space Exploration (SESE) deployed an EMC Isilon network attached storage (NAS) system to hold the tens of thousands of moon images. SESE can also replicate its lunar imagery to a second Isilon NL cluster, using Isilon’s SyncIQ asynchronous replication application.

Because the project is expected to last until 2014, the school’s previous NAS — which wasn’t identified, but which according to SearchStorage.com was Network Appliance — couldn’t handle the projected load, which adds up to more than a petabyte of capacity in the Isilon system.

Ironically, though EMC — which purchased Isilon for $2.25 billion last fall — sent out the press release announcing the project, it was pretty much all set up before the EMC purchase and had little to do with EMC itself. In fact, the project manager wasn’t all that thrilled with the news of the EMC purchase, and he is somewhat concerned about EMC’s ability to continue to support the project, SearchStorage said.

Ernest Bowman-Cisneros, manager of the LROC Science Operations Center at SESE, was reportedly testing Isilon’s system at the same time EMC was negotiating the Isilon acquisition, but didn’t know it. “It wasn’t until after we signed on the dotted line that we found out about EMC,” SearchStorage quoted Bowman-Cisneros as saying. “By the end of December, we had completed our testing and decided to go with their system. At this point, [the acquisition has] been inconsequential to us. My only concern is that EMC will continue to develop and support the model I have.”

Here’s hoping. It’s another three years before the project is completed.


February 14, 2011  11:38 PM

Millions of Medical Records Stolen from Unlocked Van

Sharon Fisher

All the data security in the world doesn’t help if you don’t lock the damn door.

Medical and financial records of about 1.7 million people — mostly patients — from Jacobi Medical Center, North Central Bronx Hospital, Gunhill Health Center, and Tremont Health Center in the Bronx, New York, were stolen in December, iHealthbeat reported. The news is coming out now because the 1.7 million people are all receiving letters explaining the problem to them and offering them an information hot line, customer care centers, and free credit monitoring and fraud resolution services for one year if they register within the next 120 days, according to an article in the New York Times.

Was it a Russian hacker? Malware?

No, the problem is that the affected information was stored on magnetic data tapes left in an unlocked van belonging to GRM Information Management Services, the city’s health record vendor. The tapes were reportedly being moved to a “secure storage location.”

It sounds like the punch line to a joke — the saying “Never underestimate the bandwidth of a station wagon full of mag tapes speeding down the highway” has been around since the 1990s. But apparently it’s all too real. The New York Health and Hospitals Corp. has since fired GRM and has filed suit against the company to hold it responsible for covering all damages related to the loss of the data.
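The saying holds up to arithmetic, by the way: the “bandwidth” of a vehicle full of tapes is just total capacity divided by transit time. The cartridge capacity, tape count, and drive time below are assumptions for illustration.

    # The "station wagon full of mag tapes" arithmetic: bandwidth = capacity / time.
    # Cartridge capacity, tape count, and drive time are illustrative assumptions.
    TAPE_CAPACITY_GB = 1_500    # roughly an LTO-5 cartridge, native
    TAPES_IN_VAN = 200
    DRIVE_TIME_HOURS = 2        # across town to the "secure storage location"

    total_bits = TAPES_IN_VAN * TAPE_CAPACITY_GB * 8e9
    seconds = DRIVE_TIME_HOURS * 3600
    print(f"~{total_bits / seconds / 1e9:.0f} Gbit/s")   # ~333 Gbit/s here
    # Enormous bandwidth, terrible access control -- which is the point.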

NBC New York quoted an HHC spokeswoman as saying that there had been no reports of any access to the data, and that “highly specialized and technical expertise and certain tools” would be required for the thief to gain access to the data. Nonetheless, the organization is legally required to notify all the victims and take steps to mitigate any damages. (To add insult to injury, this was the third time the organization had been hit by theft, though the previous instances were much smaller.)

Lessons to be learned? That the first step in storage and backup security is physical access, and that data loss is less often caused by hackers and viruses than is commonly believed.


February 11, 2011  2:06 PM

Storage Costs Forcing Governments to Move GIS to the Cloud

Sharon Fisher

Numerous government entities, ranging from local to national, use geographic information systems (GIS) software as a way of collecting and displaying information on a geographic basis. GIS performs a variety of jobs, including developing maps; tracking land development; and placing infrastructure such as roads, cell towers, and fire stations.

However, GIS files can be humongous. In Oregon, for example, the Oregon Geospatial Enterprise Office currently manages 4 TB of geospatial data on behalf of the enterprise GIS community in Oregon, which is expected to grow to nearly 15 TB of stored data in the next few years. The increasing size and cost of the storage required — as well as the people to manage it — are forcing a number of governments to look at moving GIS storage to the cloud, according to an article by Rutrell Yasin in Government Computer News.
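A small projection shows how quickly that kind of growth compounds and why the storage bill drives the conversation. The growth rate and per-terabyte prices below are assumptions for illustration, not figures from Oregon or the RFI.

    # Rough projection of GIS storage growth and cost. The growth rate and
    # per-TB prices are assumptions for illustration, not RFI figures.
    start_tb = 4.0          # Oregon's current geospatial data, per the article
    target_tb = 15.0        # projected "in the next few years," per the article
    annual_growth = 0.55    # assumed compound growth rate

    tb, years = start_tb, 0
    while tb < target_tb:
        tb *= 1 + annual_growth
        years += 1
    print(f"~{years} years to pass {target_tb:.0f} TB at {annual_growth:.0%}/year")

    # Illustrative monthly cost at that point, under assumed unit prices:
    for label, per_tb_month in [("managed in-house", 400.0), ("cloud storage", 150.0)]:
        print(f"{label}: about ${tb * per_tb_month:,.0f} per month")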

Results are due today for a Request for Information issued by the Western States Contracting Alliance, a consortium consisting of Alaska, Arizona, California, Colorado, Hawaii, Idaho, Minnesota, Montana, Nevada, New Mexico, Oregon, South Dakota, Utah, Washington, and Wyoming. This particular RFI was issued by Montana, with active participation from the states of Colorado, Oregon, and Utah, but it may result in a desire to place some, or all, GIS services for the participating states in the cloud, the RFI said. In fact, this is potentially true for all 51 members (50 states plus the District of Columbia) of the National Association of State Procurement Officials (NASPO) Cooperative.

“Basically, it is our GIS folks who are saying storage is expensive” and want to find cheaper methods of storing GIS data, Utah CIO Stephen Fletcher was quoted as saying in the GCN article.

