Storage Soup


June 29, 2009  8:27 PM

HDS makes incremental updates to midrange disk arrays

Beth Pariseau

Hitachi Data Systems’ (HDS) AMS 2000 series got a touch-up today with the announcement of some incremental updates to the midrange disk arrays.

HDS is making two updates available now – a new High-Density Storage Expansion tray and a NEBS-certified DC power option for the 2500 model.

The High-Density Storage Expansion Tray holds up to 48 one-terabyte SATA disk drives in 4U; existing AMS trays hold 15 SAS or SATA drives in 3U. The maximum number of drives supported in the 2500 (480) hasn’t changed, but the maximum configuration now takes up one less rack than with the 15-drive trays, good news for users focused on storage density and energy efficiency. A fully loaded high-density tray lists at $83,260.
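
The density gain is easy to verify with back-of-the-envelope arithmetic. Here’s a quick sketch that counts drive trays only, ignoring rack space for controllers and networking:

```python
# Drive density of the two AMS tray options, drive shelves only.
trays = {
    "standard 15-drive tray": {"drives": 15, "rack_units": 3},
    "high-density tray":      {"drives": 48, "rack_units": 4},
}

max_drives = 480  # AMS 2500 maximum

for name, t in trays.items():
    density = t["drives"] / t["rack_units"]
    tray_count = -(-max_drives // t["drives"])  # ceiling division
    total_u = tray_count * t["rack_units"]
    print(f"{name}: {density:.1f} drives/U, "
          f"{tray_count} trays = {total_u}U for {max_drives} drives")

# standard 15-drive tray: 5.0 drives/U, 32 trays = 96U for 480 drives
# high-density tray: 12.0 drives/U, 10 trays = 40U for 480 drives
```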

The AMS 2000 series has had the option of running on battery power (DC) since the arrays were first announced last fall, but the new 2500DC model has been certified as compliant with the Network Equipment Building System (NEBS) standard for use in telecom and other “lights out” environments.

According to HDS senior product marketing manager Mark Adams, there’s little technical difference between the certified and non-certified versions, but the certified version “has been proven operational through intense earthquake activity” and certified by an independent lab. The other difference between the NEBS-certified and non-certified models is price: the compliant model lists at $102,870, while the non-compliant model lists at $92,500.

Later this year, HDS will make 8 Gbps Fibre Channel host ports available for the AMS 2300 and AMS 2500 models (internal disks will remain SAS or SATA). Security features to become available in the second half of 2009 include support for external authentication, meaning the AMS array and authenticating server don’t have to reside on the same network. Finally, as announced last week, HDS is extending its Dynamic Provisioning (HDP) software to run on the AMS in addition to the high-end USP-V.

User Matt Stroh, SAP business administrator for Wisconsin-based Industrial Electric Wire and Cable (IEWC), said he’s looking forward to deploying thin provisioning for the AMS 2300 he bought to replace an EMC Clariion CX-300 and HDS AMS 500 at the beginning of the year. “I’d like to get my hands on that as soon as possible,” he said. “We have a lot of file systems just storing SAP and Oracle binaries, and I don’t need much storage for them, but I’ve been giving them a big chunk anyway.”

While dynamic provisioning is going to be available for AMS, the Zero-Page Reclaim feature recently announced for the USP-V version of HDP will not be available for the foreseeable future, according to HDS officials, who have not disclosed a technical reason why that’s the case.
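
For readers new to the mechanics: thin provisioning presents a large virtual volume to the host but only consumes physical pool pages as blocks are actually written, and zero-page reclaim returns allocated pages that contain nothing but zeros to the pool. A toy model of that bookkeeping (my sketch, not HDS code; HDP’s pool page size on the USP-V is reportedly 42 MB):

```python
PAGE_SIZE = 42 * 1024 * 1024  # bytes; HDP reportedly uses 42 MB pool pages

class ThinVolume:
    """Toy model: virtual volume that takes pool pages only on first write."""
    def __init__(self, virtual_bytes):
        self.virtual_bytes = virtual_bytes  # capacity the host is shown
        self.pages = {}                     # page index -> page contents

    def write(self, offset, data):
        page = offset // PAGE_SIZE
        buf = self.pages.setdefault(page, bytearray(PAGE_SIZE))
        start = offset % PAGE_SIZE
        buf[start:start + len(data)] = data

    def physical_bytes(self):
        return len(self.pages) * PAGE_SIZE

    def zero_page_reclaim(self):
        """What the USP-V-only feature does: free pages that are all zeros."""
        zero = bytes(PAGE_SIZE)
        for idx in [i for i, p in self.pages.items() if bytes(p) == zero]:
            del self.pages[idx]

# Present 100 GB to the host; a tiny write consumes just one 42 MB pool page.
vol = ThinVolume(virtual_bytes=100 * 2**30)
vol.write(0, b"oracle binaries")
print(vol.physical_bytes() // 2**20, "MB physically allocated")  # 42
```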

June 26, 2009  7:52 AM

06-25-2009 Storage Headlines

Beth Pariseau


(0:23) DataDirect Networks Web Object Scaler (WOS) challenges EMC’s Atmos in the cloud

(2:53) Pivot3 and Seanodes increase performance, scalability of iSCSI storage products

(5:17) Mimosa NearPoint, LiveOffice Mail Archive offer hybrid SaaS email archiving approach

(6:52) Emulex plans cloud HBA

(8:15) New DR SaaS startup buddies up with Data Domain, offers SLA


June 24, 2009  6:29 PM

New DR SaaS startup buddies up with Data Domain, offers SLA

Beth Pariseau

A new private-cloud SaaS player launched this week, with plans to combine VMware and Data Domain products into an off-site disaster recovery service with a money-back recovery time service level guarantee.

Simply Continuous, based in San Francisco, is offering two services: Data Recovery Vault and AppAlive. Both involve the use of Data Domain’s DD series appliances at the customer site, which replicate to Data Domain appliances at the Simply Continuous data center. AppAlive adds bare-metal restore of servers from virtual hot standbys stored by Simply Continuous, which can also perform the conversion of physical servers to virtual ones using VMware’s vCenter Converter tool.

Founder and CEO Tom Frangione said Simply Continuous will charge for capacity according to actual physical data stored, rather than by ‘virtual’ data – so if a user’s 20 TB are compressed to 1 TB by Data Domain’s dedupe algorithm, Simply Continuous will charge the user for 1 TB.
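
That billing model is simple to state precisely: the customer pays for post-deduplication capacity. A trivial sketch (the rate here is invented for illustration):

```python
def monthly_bill(logical_tb, dedupe_ratio, rate_per_tb):
    """Charge on physical (post-dedupe) capacity, not logical capacity."""
    physical_tb = logical_tb / dedupe_ratio
    return physical_tb * rate_per_tb

# 20 TB of backups reduced 20:1 by the Data Domain appliance,
# at a hypothetical $100 per TB per month:
print(monthly_bill(logical_tb=20, dedupe_ratio=20, rate_per_tb=100))  # 100.0
```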

Both services also come with a recovery time service-level agreement (SLA), based on the type and amount of data stored. The SLAs first guarantee that data will be recoverable on demand, and then set a maximum recovery window for data. According to a copy of the SLA provided to Storage Soup, the consequences for Simply Continuous are as follows:

  • If Data Recovery Vault is not available at our expected 99.9% rate in any calendar month, we will give [the customer] a credit toward the next month’s service.
  • If that happens three times in any 12 month period, [the customer] can terminate the contract.
  • If [the customer] cannot recover data in the agreed-upon time frame, we’ll give [the customer] 3 months’ service credit toward future services.
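
For a sense of scale, a 99.9% monthly availability target leaves a fairly small downtime budget. The quick math:

```python
# Downtime allowed per calendar month at a given availability target.
def downtime_budget_minutes(availability, days_in_month=30):
    minutes_in_month = days_in_month * 24 * 60
    return minutes_in_month * (1 - availability)

print(downtime_budget_minutes(0.999))   # 43.2 minutes per month
print(downtime_budget_minutes(0.9999))  # ~4.3 minutes per month
```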

Customers can also monitor their own storage capacity at Simply Continuous with tools the service provider makes available through its web portal, including SNMP trap reports and a Salesforce.com-based help-ticketing system. The company is targeting users with between 1 and 100 TB of data. Pricing depends on capacity. Frangione said the company, which received $10 million in a recent series A funding round, has signed up about 20 customers since last November.

The launch of this company comes after some discussion this spring about the use of service providers for the backup and offsite DR storage of business data, after a well-publicized lawsuit between backup service provider Carbonite and its former storage provider. Enterprise Strategy Group founder Steve Duplessie urged enterprise users to seek out service provider offerings that included service-level agreements. Backup SaaS provider SpiderOak said SLAs will soon be available, though both SpiderOak reps and Carbonite CEO David Friend have pointed out that offering SLAs, especially SLAs that include geographic redundancy, raises the cost of the service for customers. Either way, both say SLAs, if and when they are added, will come not on public-cloud consumer-oriented services but on separate business or enterprise offerings.


June 24, 2009  4:55 PM

Adaptec’s assault on batteries

Dave Raffo

Are lithium-ion batteries running out of juice as a method to protect cache in storage arrays?

There’s probably still a lot of life left in batteries in arrays, but Adaptec today unveiled an alternate approach. The Adaptec Series 5Z RAID controllers protect cache with flash memory powered by supercapacitors instead of batteries.

Capacitors store energy until it’s needed and provide enough power to destage cached data to flash. That differs from batteries, which are in constant use, require monitoring, and lose capacity over time. Adaptec director of marketing Scott Cleland says the supercapacitors last longer, require less maintenance and cost less to operate than batteries. Adaptec expects to sell the 5Z controllers through integrators and resellers, mostly in entry-level and remote-office systems.
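
The mechanism is straightforward: the capacitor only has to hold the controller up for the seconds it takes to copy volatile cache to flash, and the flash then retains the data indefinitely without power. Here’s a toy sketch of that flow (my illustration of the general technique, not Adaptec’s firmware):

```python
# Toy model of capacitor-backed cache protection.

class CapBackedCache:
    def __init__(self):
        self.dram = {}   # volatile write-back cache: lba -> data
        self.nand = {}   # non-volatile flash, holds data without power

    def host_write(self, lba, data):
        self.dram[lba] = data          # acknowledged from cache

    def on_power_loss(self):
        # The supercapacitor only has to power this one step: destaging
        # dirty cache to flash. The flash then needs no power at all,
        # unlike a battery that must keep DRAM refreshed for hours.
        self.nand.update(self.dram)
        self.dram.clear()

    def on_power_restore(self):
        # Recover the preserved writes so they can be flushed to the RAID set.
        recovered = dict(self.nand)
        self.nand.clear()
        return recovered

cache = CapBackedCache()
cache.host_write(100, b"journal block")
cache.on_power_loss()                      # capacitor-powered destage
print(cache.on_power_restore())            # {100: b'journal block'}
```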

“Having a battery has been a necessary evil,” Cleland said. “It goes against everything RAID stands for. RAID is about availability without touch.”

Cleland says the 5Z controller is “like having a USB stick on steroids integrated in a system.”

Adaptec isn’t the first storage vendor to use a capacitor in place of batteries. Dot Hill Systems introduced a storage controller with supercapacitors two years ago, and was recently granted a patent for a “RAID controller using capacitor energy source to flush volatile cache data to non-volatile memory during main power outage,” according to a vendor press release issued today. Fujitsu also uses a capacitor to back up cache in its Eternus DX midrange storage systems.

“Today it’s available in SANs,” Cleland said. “We’re making it available for everyone else – in appliances, the departmental space, SMBs, not just the high-end Fibre Channel space.”

But Data Mobility Group analyst Joe Martins wonders if this is a solution in search of a problem, because battery life isn’t a big complaint among storage administrators. Still, Martins thinks capacitors can catch on if they work as advertised.

“I never knew it was a problem,” Martins said. “I suspect that this is one of those undercurrents where people don’t know they have the problem until you point it out. It’s like when using Windows you become accustomed to the screen freezing, and after awhile it’s just something you get used to. It’s not thought to be a problem until you encounter something else. A lot of folks may not like the situation as it is, and they may have lost data and travelled miles and miles to get to a data center and thought ‘this is the way it is, there’s no alternative.’ Maybe it will become a requirement as more vendors do it.”

Of course, larger vendors must embrace capacitors before they become a requirement.


June 23, 2009  2:54 PM

Caringo offers 4 TB free, plus ‘cluster in a box’

Beth Pariseau

Object-based storage maker Caringo Inc. has released version 3.0 of its CAStor software with new support for virtual machine clusters and a couple of freebies to sweeten the deal.

CAStor was created by the people who sold FilePool to EMC, which turned it into Centera. CAStor software can be installed on practically any machine with a processor – the Caringo guys have demonstrated it on a Mac desktop and an external hard drive at trade shows.

Version 3.0 can take advantage of multicore processors to offer a “Cluster in a Box,” in which each of the cluster nodes is a virtual machine attached to one processor core inside a single physical chassis. Caringo is also looking to take advantage of the highly dense storage servers on the market. Each drive inside the server chassis can be allocated to a different CAStor process.
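
The “Cluster in a Box” arrangement is essentially a process-per-core, drive-per-process placement pattern. A generic sketch of the idea, with ordinary OS processes standing in for the per-core virtual machines (purely illustrative, not CAStor’s actual interface; the drive paths are hypothetical):

```python
import os
from multiprocessing import Process

def storage_node(node_id, drive_path):
    """Stand-in for one cluster-node process bound to one drive."""
    # A real deployment might also pin each process to a core,
    # e.g. with os.sched_setaffinity() on Linux.
    print(f"node {node_id} serving {drive_path} (pid {os.getpid()})")

if __name__ == "__main__":
    drives = ["/mnt/disk0", "/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]
    cores = os.cpu_count() or 1

    # One node per core, one drive per node, all in a single chassis.
    nodes = [Process(target=storage_node, args=(i, d))
             for i, d in enumerate(drives[:cores])]
    for p in nodes:
        p.start()
    for p in nodes:
        p.join()
```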

With this release, Caringo is also making available free demoware called CloudFolder, a Windows application that lets customers drag and drop files into a folder on the Windows desktop. The files will automatically be added to a CAStor cluster, either internal to the organization or at Caringo’s own test cluster at its data center. If the data is sent offsite, it is sent without encryption, though Caringo says encryption is on the docket for future releases of the software.

Caringo is also offering a free 4 TB CAStor download from its web site, which requires registration and a multicore server to get started. Customers must buy a license to expand capacity.

Free or not, object-based storage systems such as Caringo’s as well as DataDirect Networks’ new Web Object Scaler (WOS) and EMC Atmos are battling to gain traction in the cloud. For now, many Web 2.0 data centers feature internally built storage.


June 19, 2009  5:11 PM

Emulex plans cloud HBA

Beth Pariseau

Emulex has confirmed it is working on a new product called Emulex Enterprise Elastic Storage (E3S), which it describes as “a transparent method for connecting block storage to cloud storage providers like EMC Atmos.” 

EMC’er David Graham spilled the beans about the product in a blog post yesterday, “Moving from Block to Cloud: Emulex E3S,” based on conversations he had with the connectivity vendor at EMC World. (Hmm, could this be the “unannounced OEM deal” Emulex has accused Broadcom of trying to cash in on?)

According to Graham’s post, which an Emulex spokesperson confirmed this morning is accurate:

Your hosts continue to process data to their respective storage targets as usual and the Emulex E3S device acts like a traditional block storage target (SAS or FC disks). As blocks are written to the E3S virtual disks, the E3S software virtualizes the changed blocks and compresses, encrypts, and re-packages the data into your chosen cloud storage protocol (e.g. EMC Atmos). In this way, you’re able to maintain consistent copies of data both in your local datacenter as well as in your private cloud. This is all well and good but what about recovering your data? Using the same process of encapsulation, the Emulex E3S can retrieve your data from your private cloud, unpack the meta-data and extents and present the original SCSI block data back to your hosts, all using traditional SCSI semantics.
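
In other words, E3S is a block-to-object gateway: it presents an ordinary SCSI target locally, then compresses, encrypts and re-packages changed blocks as objects for the cloud back end. A heavily simplified sketch of that write path (my illustration; Emulex hasn’t published E3S internals, and the names here are invented):

```python
import hashlib
import zlib

def xor_cipher(data, key):
    """Placeholder for real encryption (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class BlockToCloudGateway:
    """Toy gateway: local block writes are mirrored to object storage."""
    def __init__(self, cloud_put, key):
        self.cloud_put = cloud_put   # callable(name, payload)
        self.key = key
        self.local = {}              # lba -> block, the 'traditional' target

    def write(self, lba, block):
        self.local[lba] = block      # host sees an ordinary SCSI write
        payload = xor_cipher(zlib.compress(block), self.key)
        name = hashlib.sha1(f"lba-{lba}".encode()).hexdigest()
        self.cloud_put(name, payload)  # re-packaged for the cloud protocol

cloud = {}
gw = BlockToCloudGateway(cloud_put=cloud.__setitem__, key=b"secret")
gw.write(7, b"\x00" * 512)
print(len(cloud), "object(s) replicated to the cloud")  # 1
```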

Graham declined comment about whether an OEM deal is in the works, but the product is listed on EMC’s Atmos partner page. Amazon also uses the term Elastic Block Store with its EC2 cloud, but that doesn’t appear directly related to E3S.

It also doesn’t look like the product is generally available yet. Rich Pappas, Emulex VP of marketing and business development for Embedded Storage Products, sent the following statement to Storage Soup in an email this morning:

Emulex has developed E3S as a proof-of-concept design illustrating how block storage can be easily bridged to cloud storage environments. Market research has shown that the most likely application for this technology is within existing storage solutions and Emulex is discussing with its partners the viability of the product concepts and timing for market entry.


June 19, 2009  7:00 AM

06-18-2009 Storage Headlines

Beth Pariseau

Stories referenced:

(0:25) TheInfoPro Storage Study finds firms save money through tiered storage, better utilization
(1:36) Cisco sees ratified T11 standard driving adoption of Fibre Channel over Ethernet (FCoE)
(3:38) HP resizes its ExDS9100 scale-out NAS system; finds market broader than original Web 2.0 target
(5:09) Dell launches EqualLogic PS4000 iSCSI SAN for SMBs
(6:27) Hitachi Data Systems (HDS) expands thin provisioning with Storage Reclamation Service and Hitachi Dynamic Provisioning


June 18, 2009  6:24 PM

Symantec and CommVault tussle over TheInfoPro results

Beth Pariseau

Nothing like a good vendor fight to keep the week interesting. This time, it’s Symantec and CommVault who have been going at it in press releases and statements after TheInfoPro released its Wave 12 Storage Study on Monday.

CommVault put out a press release shortly after the study was released trumpeting the findings that were flattering to its Simpana product (as virtually all storage vendors do when reports like this come out). The statement that drew Symantec’s ire was this one: “CommVault garnered a top spot in attracting new customers from competing solutions, according to TheInfoPro™ Wave 12 Storage Study. Twenty percent of respondents reported they had switched to CommVault from another vendor in the past year.”

Symantec responded by firing off this statement to press through its PR agency:

The actual figure is 0.2%, since TheInfoPro’s sample size was 848 and only 2 had switched. Also, only 10 respondents mentioned Commvault. For comparison, 66 mentioned Symantec, 86 mentioned NetApp, and 194 mentioned EMC. The full report with a chart and list of vendors and customer sample size is available from TheInfoPro.

Roughly 5 out of the 66 Symantec customers reported switching to Symantec solutions.  Clearly, this is not an accurate comparison, or a valid statistic and CommVault seems to be clutching at straws in an attempt to seem relevant to the market.

Rowr! Saucer of milk, table two!

Responded CommVault VP of marketing and business development Dave West:

This study is indicative of what we are seeing in the market and reflects historic trends within our customer base. In addition to sustaining strong customer loyalty, CommVault is experiencing notable year-on-year growth. We continue to see strong Simpana software adoption by former customers of competitive offerings. In May we announced we surpassed 10,000 customers; more than half of these previously were Symantec customers.

I don’t know how many CommVault customers came from Symantec, but it’s worth noting CommVault’s revenues actually dropped a bit year-over-year last quarter, although it did grow for its entire fiscal year.

As for the spat over TIP numbers, TIP spokesperson Bernadette Abel clarified in an email to Storage Soup:

The percentages noted on this data point are per vendor and not an overall comparison among all vendor mentions. 20% of current CommVault customers interviewed said that they switched to CommVault from a competing vendor.

The press release put out by CommVault said that it garnered a top spot, not the top spot, based on the 20% conversion rate.
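
In other words, the two camps are dividing the same two switchers by different denominators. The arithmetic, spelled out:

```python
# Same two switchers, two denominators (figures from the statements above).
switched_to_commvault = 2
commvault_mentions = 10      # respondents who named CommVault
total_sample = 848           # all respondents in the Wave 12 study

per_vendor = switched_to_commvault / commvault_mentions   # CommVault's reading
overall = switched_to_commvault / total_sample            # Symantec's reading
print(f"{per_vendor:.0%} of CommVault's own customers switched in")   # 20%
print(f"{overall:.2%} of the whole sample switched to CommVault")     # 0.24%

# Symantec's own per-vendor rate, for comparison:
print(f"{5 / 66:.0%}")  # ~8% (roughly 5 of 66 Symantec customers switched in)
```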

Bottom line? Regardless of the statistics, these guys have clearly gotten under each other’s skin. CommVault has been aggressive about taking share from competitors, and it would appear it has at least succeeded in getting some attention from them. The real winners in all this should be end users, who stand to benefit from better pricing when competition is intense.


June 18, 2009  2:49 PM

HDS disk array failure suspected in Barclays outage; where’s the HAM?

Beth Pariseau

According to reports out of the U.K. yesterday, Barclays ATMs stopped working Tuesday because of a fault with one of the bank’s disk arrays.

The exact nature of the problem has not been specified, but the company is publicly known as a customer of Hitachi Data Systems’ (HDS) USP-V. HDS supplied a SAN subsystem based on its high-end USP-V hardware in February to bring capacity to 1 PB at a new 28,000-square-foot Gloucester data center. That is the data center where the outage occurred.

Reached for comment, an HDS spokesperson wrote to Storage Soup in an email:

Not much to respond to as Barclays’ operations are now fully back online as of end of business day yesterday local time. Barclays and Hitachi Data Systems are investigating the cause of the problem. As a trusted storage partner to customers around the globe, it is our commitment to deliver on high standards of customer service and support excellence to Barclays and all of our customers worldwide.

U.K. storage consultant Chris M. Evans, who has worked with HDS products and customers, came to the vendor’s defense, pointing the finger at the lack of redundancy in Barclays’ architecture. (The “HAM” of the headline, and of Evans’ closing line, is Hitachi High Availability Manager, the array-failover software HDS introduced for the USP-V this spring.)

What surprises me with this story is the time Barclays appeared to take to recover from the original incident.  If a storage array is supporting a number of critical applications including online banking and ATMs, then surely a high degree of resilience has been built in that caters for more than just simple hardware failures?  Surely the data and servers supporting ATMs and the web are replicated (in real time) with automated clustered failover or similar technology?

We shouldn’t be focusing here on the technology that failed.  We should be focusing on the process, design and support of the environment that wasn’t able to manage the hardware failure and “re-route” around the problem.

One other thought.  I wonder if this problem would have been avoided with a bit of Hitachi HAM?


June 16, 2009  4:28 PM

Bocada resurfaces, plans backup reporting updates

Beth Pariseau

I’m still digesting all the vendor meetings I had last week at the BD Event. One of the company executives I met with was Nancy Hurley, who has been CEO of Bocada Inc. for a little over a year now.

Hurley told me she spent most of her time since becoming CEO last May trying to get Bocada’s house in order. “We went through our recession already,” she said, adding that the vendor rebounded to reach profitability by the end of last year. Hurley said that was mostly the result of improving internal business processes.

Having completed its internal makeover, Hurley said Bocada will update its Bocada Enterprise software June 30 and again later this year. She hopes the two-phase approach to breaking up the monolithic software into a modular front end will help attract more channel sales and improve workflow within the product.

Bocada Enterprise 5.4 will add “policy mining,” which will allow the software to understand each policy for every backup server client, when that policy changed, and how that has impacted backup job failures or error reports. This version will also begin the modularization process by more clearly delineating the workflow between each of the services it provides, from healthcheck to problem management to change management. “Today we leave the customer to navigate the workflow themselves,” Hurley said. “They have to know where they have to go next. Our next update will move them through to the next step.”

The second update planned for later this year will separate the front end into sections that can be sold and deployed separately, though the back end will remain the same. The customers Bocada has in mind for this are service providers who may need to offer a combination of services to customers and issue service level agreements (SLAs) for each service. Advanced modules are also planned for generating SLAs and thresholding, i.e., “If this keeps happening, 30 days from now you might not meet your SLA,” explained Hurley.
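
Thresholding of that sort boils down to trend projection: extrapolate the recent failure or success rate forward and compare it to the SLA floor. A crude sketch of the idea (invented numbers and a simple linear fit, not Bocada’s actual algorithm):

```python
# Project backup success rate 30 days out from a simple linear trend.
def projected_success_rate(daily_rates, days_ahead=30):
    n = len(daily_rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_rates) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, daily_rates))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * (n - 1 + days_ahead - mean_x)

recent = [0.99, 0.985, 0.98, 0.978, 0.97]  # slipping a little each day
sla_floor = 0.95
forecast = projected_success_rate(recent)
if forecast < sla_floor:
    print(f"warning: projected {forecast:.1%} vs SLA floor {sla_floor:.0%}")
# warning: projected 83.0% vs SLA floor 95%
```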

Other products that began as backup reporting tools, such as Aptare’s StorageConsole, have broadened their capabilities to include storage resource management (SRM). But Hurley said Bocada plans to stick to its knitting in the data protection space. “To me, even addressing everything in data protection is hard — we don’t want to dilute that value by also having to go and look at how much capacity you have on Clariion,” she said.

Bocada may have picked a good time to re-enter the reporting software market; TheInfoPro’s Wave 12 Storage Study showed that capacity planning and reporting shot to #1 on the list of priorities for storage professionals during the economic downturn.

