Storage Soup


December 23, 2013  1:57 PM

Forget flash — Seagate invests in more hard drive technology

Dave Raffo

Seagate did its last-minute Christmas shopping in the U.K., picking up hard drive testing and OEM storage enclosure company Xyratex today for $374 million.

Instead of making the big solid-state move industry watchers keep waiting for, Seagate went deeper into hard drive technology with the U.K.-based Xyratex.

Seagate did not hold a conference call to discuss the deal, but its press release said the acquisition will strengthen its disk drive supply and manufacturing chain and guarantee access to capital equipment. Seagate intends to run Xyratex as a standalone business.

“As the average capacity per drive increases to multi-terabytes, the time to test these drives increases dramatically,” Seagate VP of operations and technology Dave Mosley said in the release. “Therefore, access to world-class test equipment becomes an increasingly strategic capability.”

Xyratex enclosures are used in Dell EqualLogic, Hewlett-Packard 3PAR StoreServ, and IBM Storwize and XIV storage arrays. NetApp recently ended its OEM relationship with Xyratex.

Seagate and its main enterprise rival, Western Digital, have both been Xyratex hard drive testing customers. It is unlikely that the Western Digital relationship will continue.

Xyratex in the past two years also began selling ClusterStor high-performance computing systems based on the Lustre file system. ClusterStor systems are sold by Cray, Dell, HP and others.

Xyratex is a public company that reported $638 million in revenue through the first nine months of 2013, down from $992 million over the same period in 2012, with the end of the NetApp deal accounting for much of the decline. Xyratex has been profitable this year, but unhappy investors prompted the company to replace CEO Steve Barber with Ernie Sampias in April and pushed it to look for a buyer.

Seagate expects the deal to close in the middle of 2014.

December 18, 2013  9:09 PM

IBM to introduce ‘cloud of clouds’ in 2014

Sonia Lelii

IBM has developed a software toolkit that lets users store and move data across multiple public or private clouds through a drag-and-drop interface, protecting data against service outages and data loss.

The company has code-named the product “Intercloud Storage” (ICStore), and it uses an object storage interface for data migration, backup and file sharing in third-party clouds. The toolkit will be available in beta to Storwize customers in January 2014, but general availability has not been announced.

IBM’s SoftLayer will be the default cloud, but the technology will support more than 20 cloud services, including Microsoft Azure, Amazon S3, Rackspace and OpenStack object storage.

“This is an internal research project and we are using clouds to solve a few issues with cloud storage,” said Thomas Weigold, IBM’s manager for storage systems research at its Zurich research laboratory. “It’s a problem with security, reliability and vendor lock-in. These are the points where customers have problems. The idea is to use more than one cloud through replication or erasure coding.”

IBM calls this technology “cloud of clouds,” and the company has run proof-of-concept projects with a select number of customers during the last two years.

Weigold said the toolkit can be customized so that replication or erasure coding is used, depending on what the data needs. The toolkit addresses space efficiency, data synchronization and metadata coordination when storing data redundantly on object storage. If one cloud service fails, another cloud becomes available transparently to the user.
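Neither the ICStore code nor its exact coding scheme is public, but the replication-versus-erasure-coding tradeoff Weigold describes can be sketched in a few lines of Python. Everything below is an illustrative assumption – in-memory dicts stand in for cloud providers, and the erasure code is a simple single-parity scheme that survives one provider outage (production systems typically use Reed-Solomon codes to survive more):

```python
from functools import reduce

# Illustrative stand-ins: each "cloud" is a dict keyed by object name.
# A real toolkit would wrap each provider's object API (S3, Azure, Swift...).

def replicate(key: str, data: bytes, clouds: list) -> None:
    """Full replication: every cloud holds a complete copy (N x overhead)."""
    for cloud in clouds:
        cloud[key] = data

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def erasure_encode(key: str, data: bytes, clouds: list) -> int:
    """Single-parity erasure coding across n >= 2 clouds: k = n - 1 data
    fragments plus one XOR parity fragment. Tolerates one cloud outage at
    n/k storage overhead instead of replication's n x. Returns data length."""
    k = len(clouds) - 1
    frag_len = -(-len(data) // k)                      # ceiling division
    frags = [data[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0")
             for i in range(k)]
    parity = reduce(_xor, frags)
    for cloud, frag in zip(clouds, frags + [parity]):
        cloud[key] = frag
    return len(data)

def erasure_decode(key: str, clouds: list, length: int) -> bytes:
    """Reassemble the object; if one cloud is unreachable, rebuild its
    fragment by XOR-ing the surviving fragments with the parity."""
    frags = [cloud.get(key) for cloud in clouds]
    if frags.count(None) > 1:
        raise IOError("single-parity coding survives only one failure")
    if None in frags:
        missing = frags.index(None)
        frags[missing] = reduce(_xor, (f for f in frags if f is not None))
    return b"".join(frags[:-1])[:length]               # drop parity, strip padding

clouds = [{}, {}, {}]                                  # e.g., SoftLayer, Azure, S3
size = erasure_encode("vol1/snap42", b"snapshot bytes...", clouds)
clouds[1].clear()                                      # simulate a provider outage
assert erasure_decode("vol1/snap42", clouds, size) == b"snapshot bytes..."
```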

Workloads can be positioned for high availability and disaster recovery. For instance, primary data can be stored in a private cloud while snapshots of the data can be moved to an external public cloud and encrypted.

“You can be very specific,” said Weigold, “such as all files in the directory of this size should be migrated to these providers but not before an application’s copies are replicated.”

The software will have integrated data protection features and AES 128-, 192- or 256-bit encryption for security.
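IBM has not said which cipher modes ICStore uses beyond those AES key lengths. Purely as an illustration of client-side encryption before upload, here is a sketch using the Python cryptography package; the choice of AES-GCM (an authenticated mode) is an assumption, not a documented ICStore detail:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_before_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt client-side so the cloud provider only ever sees ciphertext.
    The 12-byte nonce is prepended to the ciphertext for later decryption."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)   # 128 and 192 are also valid AES lengths
blob = encrypt_before_upload(b"snapshot data", key)
assert decrypt_after_download(blob, key) == b"snapshot data"
```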


December 16, 2013  3:04 PM

Flash array vendor Violin sends CEO overboard following post-IPO swoon

Dave Raffo

Struggling flash array vendor Violin Memory today dumped Don Basile, 11 weeks after he took the company public.

Violin chairman Howard Bain III takes over as interim CEO, and the board has hired an executive search firm to find a permanent replacement for Basile.

Violin’s initial public offering (IPO) turned out to be the beginning of the end for Basile. The company’s stock price dropped from $9 at its IPO to $2.68 this morning, and Violin missed analysts’ expectations in its first quarter as a public company. CTO Jonathan Goldrick resigned last week, and there has been speculation since then that Basile would leave. Violin is also besieged by a rash of investor lawsuits and analyst downgrades because of its poor financial performance last quarter.

Basile became Violin’s CEO in 2009 after serving as CEO of Fusion-io. During his tenure, Violin raised at least $180 million in funding and became the all-flash array market leader – according to Gartner – with $72 million in revenue in 2012.

However, investors and analysts were stunned to learn Basile earned $19 million in 2013 while the company lost more than $90 million through the first three quarters of the year.

Much of Violin’s early sales success came before large storage vendors got into the all-flash market, but all major storage companies now have at least one all-flash platform.

Violin’s press release made it clear that the change in CEO was the board’s choice, and not Basile’s. While the release went into great detail on Bain’s background, it mentioned Basile only once – saying Bain’s new role “follows the decision of the board of directors to terminate Donald Basile.”

David Walrod, chairman of Violin’s nominating and corporate governance committee, was quoted in the release saying “the board believes this leadership change is necessary to enhance the management team’s operational focus and ability to execute the company’s plans for profitable growth.”

Violin lost $34.1 million last quarter on revenue of $28.3 million.

Bain has been on Violin’s board since October 2012 and became chairman in August. He has been CFO of Symantec, Informix, Portal Software and Vicinity.


December 16, 2013  10:42 AM

Nimble Storage IPO receives warm Wall Street reception

Dave Raffo

Nimble Storage showed that a storage vendor can receive a favorable reception on Wall Street when it completed a successful initial public offering (IPO) Friday. Nimble’s debut comes on the heels of all-flash vendor Violin Memory’s rocky IPO. Violin’s stock price began tanking on the first day and has continued to fall for nearly three months.

Nimble’s price opened at $21 Friday – above its target range of $18 to $20 – and closed the day at $31.10 for a valuation of $1.48 billion. It opened at $34.90 today. In September, Violin priced its IPO at $9, finished its first day at $7.11, and kept dropping after reporting poor results in its first quarter as a public company. Violin opened today at $2.68.

Much of the pre-IPO attention on Nimble compared it to Violin and to server-based flash vendor Fusion-io, whose stock price has also dropped sharply since its 2011 IPO. But Nimble is in a different situation. First, it doesn’t sell only flash; its arrays are hybrids, combining a small amount of flash with spinning disk. Second, Nimble’s competitive situation hasn’t changed over the past year, whereas Violin and Fusion-io beat the major vendors to market with their flash products but now face a glut of competition.

Nimble entered a market dominated by EMC, NetApp and others in 2010 and has grown significantly, with $84 million in revenue over the first nine months of 2013. While the major vendors now all have all-flash arrays to compete with Violin and other startups, they don’t have new products that counter Nimble’s strengths. Nimble has been successful – although not yet profitable – with an architecture that combines data reduction, ease of management, and advanced cloud-based data monitoring and analytics.

And yes, it has taken advantage of flash but doesn’t rely completely on it.

“These days you can’t talk about storage without mentioning flash,” said Radhika Krishnan, Nimble’s vice president of solutions and alliances.

Still, she said storage doesn’t have to be all-flash to deliver significant performance and latency advantages over spinning disk. Nimble gets performance from solid-state drives (SSDs) through data reduction, efficient snapshots and an architecture that minimizes the amount of writes that go to flash.

“If you have a hybrid system with high performance and predictable latency, why would you want to pay all that money for all-flash arrays or flash on servers?” Krishnan said. “We think hybrid arrays are the way to go, not just in 2014 but for a significant time to come. Not all hybrid arrays are created equal. Any hybrid arrays that use tiering where all writes get staged on flash have to pay the price of write endurance. Then you have to protect your flash with RAID and over-provisioning.”
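Nimble has not published the internals Krishnan alludes to, but the principle she describes – flash as a read cache that absorbs no random write traffic, preserving SSD endurance – can be modeled in a toy sketch. The block-level dicts and LRU promotion policy below are illustrative assumptions, not Nimble’s CASL implementation:

```python
from collections import OrderedDict

class HybridPool:
    """Toy hybrid array: flash serves only as a read cache, so SSDs see no
    random write traffic (preserving write endurance), while all writes land
    on the disk backing store. Cache sizing and promotion are guesses."""

    def __init__(self, flash_blocks: int):
        self.disk = {}                        # block number -> data (backing store)
        self.flash = OrderedDict()            # LRU read cache
        self.capacity = flash_blocks

    def write(self, block: int, data: bytes) -> None:
        self.disk[block] = data               # writes bypass flash entirely
        self.flash.pop(block, None)           # invalidate any stale cached copy

    def read(self, block: int) -> bytes:
        if block in self.flash:               # flash hit: low latency
            self.flash.move_to_end(block)
            return self.flash[block]
        data = self.disk[block]               # miss: fetch from spinning disk...
        self.flash[block] = data              # ...and promote the now-hot block
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)    # evict the least recently used
        return data
```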

Nimble’s challenge now is to show it can spin that technology into a profitable company if it is to stay around in a flash-dominated storage world.


December 13, 2013  11:26 AM

HP shines up its converged storage

Dave Raffo

Over the past year, Hewlett-Packard has made it clear that its storage future revolves around what it calls its converged storage platforms – the 3PAR StoreServ SAN array, the StoreOnce data deduplication backup appliance and the StoreAll archiving system. These products now make up more than 40% of HP’s storage revenue, and will account for most of its storage revenue in 2014.

So it’s no surprise that these were the products HP upgraded during HP Discover Barcelona this week. The enhancements were mostly speeds and feeds, plus improved quality of service (QoS) on the 3PAR platform.

The new Priority Optimization feature in 3PAR StoreServ OS 3.1.3 allows managers to set thresholds for IOPS, bandwidth and latency for each application or tenant in a multi-tenant system. That enables QoS for applications or virtual machines that can be managed in real time.
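HP’s announcement doesn’t show the Priority Optimization syntax, so as a concept sketch only, a per-tenant IOPS ceiling of the kind described can be modeled as a token bucket; the tenant names and limits below are hypothetical:

```python
import time

class TenantQoS:
    """Token-bucket limiter modeling a per-tenant IOPS ceiling. The bucket
    refills at the configured maximum rate; an I/O is admitted only if a
    token is available, otherwise the array would queue or throttle it."""

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical thresholds for two tenants sharing one array.
limits = {"tenant-a": TenantQoS(max_iops=5000), "tenant-b": TenantQoS(max_iops=1000)}
```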

The latest OS also supports more snapshots, volumes, replicated volumes and Fibre Channel host initiators, and all 3PAR arrays now support 800 GB solid-state drives (SSDs).

HP also added Adaptive Sparing, which optimizes flash overprovisioning. The feature takes some of the blocks reserved to replace worn-out blocks and uses them as allocated spare chunks in the storage pooling architecture. HP claims this expands the capacity of each SSD, so an 800 GB SSD actually provides 920 GB of capacity.
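The arithmetic behind that claim is simple enough to check; only the 800 GB and 920 GB figures come from HP, and the interpretation of the reclaimed space is an assumption:

```python
# Adaptive Sparing arithmetic using the figures from the announcement.
advertised_gb = 800
effective_gb = 920
reclaimed_gb = effective_gb - advertised_gb       # spare blocks repurposed as pool chunks
print(f"{reclaimed_gb} GB reclaimed, a {reclaimed_gb / advertised_gb:.0%} capacity gain")
# -> 120 GB reclaimed, a 15% capacity gain
```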

For StoreOnce, HP added the 6500 model, its highest-capacity StoreOnce system with 1.7 PB of maximum capacity. A new StoreOnce Security Pack handles encryption at the application level, and StoreOnce Catalyst dedupe acceleration software now supports Oracle Recovery Manager (RMAN).

There are two new StoreAll systems – an 8200 Gateway that supports 3PAR StoreServ on the back end, and a high-end 8800 model. The Gateway brings file and object storage to the 3PAR platform. The 8800 scales to 1.5 PB per rack and 16 PB in a cluster.

HP also added native support for OpenStack Object Storage so customers can take applications developed in the cloud and move them in-house.


December 12, 2013  8:55 AM

Actifio: EMC’s copying us on data copy management

Dave Raffo

Actifio CEO and founder Ash Ashutosh says the recent EMC reorganization shows the storage giant is coming around to his startup’s way of approaching data management.

Ashutosh is not talking about EMC’s combining of its VMAX and VNX teams into an Enterprise Storage Division. He is referring to the other move EMC made at the same time – adding its VPLEX and RecoverPoint teams to the backup product teams. That move formed a new Data Protection and Availability Group that replaces the Backup and Recovery Systems division. Specifically, Ashutosh is talking about several mentions EMC data protection people have made recently of copy management.

Actifio has been calling its technology data copy management since 2010. Its software creates virtual copies of data that can be placed in any location and used for multiple purposes. So Ashutosh is flattered that EMC now uses the term. About nine minutes into this video, EMC data protection CTO Steven Manley talks about using copies of data for more than just data protection. “These copies are available for more than just sitting there waiting for something to go wrong,” he said.

That has been Actifio’s message for years. Ashutosh congratulated EMC in his blog on the Actifio web site: “After seeing your recent EMC Copy Management announcement, we want to welcome you to the most exciting and important marketplace since the storage revolution began 25 years ago.”

He expanded on that in an interview with StorageSoup.

“They acknowledge the trend in the market,” he said. “We see this as validation for what we’re doing, and they’re doing the right thing by their customers. It’s a little earlier than I expected they would announce the shift.”

If EMC pushes into copy data management, that would have a good side and a bad side for Actifio. It would signal that Actifio is on the right path. On the other hand, it would mean the world’s largest storage vendor is coming after it.

Ashutosh said he’s not worried about competition yet. He said Actifio is far along the copy data management path, and EMC is just starting. “It’s a three-step process and they’re in step one,” Ashutosh said of EMC. “We’re in step three, and we have been for four years.”

Step three isn’t the last step for Actifio. It continues to develop the product. One future development is the release of a software-only version. Actifio now sells its software on appliances, but it can make the software available to run on servers or other storage systems.

“It’s coming,” Ashutosh said of a software-only product.

Another thing that he has often talked about coming is an initial public offering that would make Actifio a public company. He said that is still in the works, but not imminent. “We’re getting closer,” he said. “I can’t comment on the timing. We’re making sure we build one happy customer at a time. We don’t want to get into the rat race of going public too fast.”


December 6, 2013  10:15 AM

Backups and archives require separate discussions

Randy Kerns

I’ve spent a lot of time talking to IT clients and vendors about archiving and backup, and these discussions made me realize there is a real problem with the terminology used today.

For IT clients, the terms have different meanings depending on the responsibilities of the person talking. Preconceptions (or misconceptions) color what motivates customers in managing information. The application owner or business owner sees backup as something for which IT is responsible. At the same time, IT sees archiving as a possible impediment to success because it could make it more difficult to access needed information.

Vendors’ approaches to backup and archiving are driven by their products. Most vendors combine backup and archiving, and their messaging may cover both at the same time. This may not serve the vendor well because of the different customer perceptions and audiences involved.

There are a few basics about the terminology that need to be understood along with some recommendations:

Backup is really about data protection. Data protection should be the top-level message, and it is a continuum that includes replication and point-in-time copies (snapshots). Today, backup is an IT function where the backup group in IT serves the overall business – both applications and systems.

Archiving is really about information management. For the IT backup team, it is just another form of backup, usually thought of as backups that are being kept (retained backups) rather than part of a rotation. For the application owner, an archive is about moving some data and making it more difficult (or slower) to access. A storage administrator sees archiving as migration between tiers and a way to reduce primary capacity demand as part of capacity management.

The archiving discussion must be separate from data protection, although there is a data protection component in archiving. IT rarely takes initiatives to implement archiving practices (other than retained backups) for several reasons:

• Usually IT is not empowered to make decisions about the application data that belongs to business owners. The idea that data can be made less accessible or deleted is not something IT people believe they have the authority to act on.

• IT does not want to be wrong and cause an impact when it comes to making decisions about the data. The negatives outweigh the improvements an archiving strategy might bring.

• The assignment for archiving in IT usually lands in the purview of the backup manager/administrator. Managing the backups is challenging and archiving is seen as moving individual elements such as files that are too fine-grained for the backup process.

The archiving practice needs to focus on the application and business owners, who are ultimately responsible both for the application’s use and its economic costs. The approach should be about moving data to a content repository that is appropriate for the diminished probability of access. The content repository is less costly, but it must still be directly accessible by the application, with the information (typically files) visible to the application owner. The content repository does not store files inside a backup format but as individual elements (files) that remain, in the application owner’s terminology, online.

From the application owner’s perspective, IT is not involved in the access. There should still be a discussion about a “deep archive” repository for data that is not expected to be needed again but cannot be deleted. Again, this is an application owner decision, but the mechanics are implemented by IT.

When it comes to backup and archiving, terminology matters. There is context for usage and dependencies on who is involved in the discussion. Archiving must be considered in the context of the application. To counter the preconceptions, the discussion should be about application content repositories rather than an archive. The concept of a deep archive is still highly valuable. The archive discussion needs to be with the application owner. Backup needs to be put in the broader context of data protection and separated from archiving. This makes the discussion more relevant to those involved. It also makes it an easier discussion to have.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 4, 2013  11:22 AM

Gridstore receives funding for Hyper-V storage system

Dave Raffo

Gridstore, which this year changed its strategy and product to focus on optimizing storage for Microsoft Hyper-V, today closed an $11 million funding round to bring its new system to market.

Gridstore CEO George Symons said the vendor will expand its sales team to sell the new product, also called Gridstore. Symons said Gridstore has converted all of its beta sites to paying customers, and is looking to add about 10 people to the 32-person company.

With Gridstore 3.0, the vendor switched from selling scale-out NAS grid storage for SMBs to storage for mid-market companies using Windows Server 2012 and Hyper-V. Gridstore installs as a virtual controller that runs on the host and provides quality of service on a per-VM basis. It offers hybrid configurations that use PCIe flash, as well as arrays that are 100% spinning disk.

Symons said sales of the new systems are averaging 36 TB compared to 6 TB in previous versions. He said Gridstore competes mainly with NetApp FAS, EMC VNX and Nimble Storage CS arrays. He said about half of its customers use the systems for backup and the other half for primary storage.

Symons said he expects to get a sales bump as Hyper-V catches on.

Part of the funding will also go towards product development. Symons expects an upgrade around June. He said Gridstore will eventually move into VMware and release all-flash versions of its array, but not in the next release.

“That will focus more around management and performance enhancements,” he said. “We’ll add grid-based snapshots for VSS [Volume Shadow Copy Service] and zero-copy clones.”

The zero-copy clones should make Gridstore a better fit for virtual desktop infrastructure (VDI) storage, he said.

“Architecturally, we’re well set up for VDI because of the way we spread data across multiple nodes,” he said. “We can take advantage of write-back caching.”

Because Gridstore is software that runs on commodity hardware (Dell servers), the vendor labels the product software-defined storage. Symons said he was reluctant to use the term, but was surprised to find it resonates with customers.

“I hesitated to use the term,” he said. “I thought, ‘does anybody care outside of industry people?’ But customers like the fact that there’s something new and it’s why we’re different. The fear was that term can mean so many things, but reception has been tremendous.”

The funding brings Gridstore’s total to $23.5 million over two rounds. Acero Capital led the new round, with previous investors GGV Capital, Investec Ventures Ireland Limited and Onset Ventures participating.


November 26, 2013  8:13 PM

Symantec plans to ditch Backup Exec.cloud

Sonia Lelii

Symantec Corp. has put its channel partners and managed service providers (MSPs) on notice that it is shutting its backup cloud service, known as Symantec Backup Exec.cloud, effective on Jan. 6, 2014.

The company has not made an official public announcement, but it sent an email to its reseller partners on Monday, Nov. 25, 2013, informing them there will be no new sales or renewals of the service as of Jan. 6, 2014. Customers and partners will also lose access to the service, their data and technical support as of Jan. 6, 2015.

Jerry Gowen, Symantec’s group manager for worldwide communications, confirmed the news via email. The email sent to partners stated: “As you know, one of our primary goals is to delight you and your customers with our product offerings. Taking this into consideration and carefully evaluating the overall needs of our customer base, we have made the difficult decision to begin the process of discontinuing Symantec Backup Exec.cloud.”

In the message, the company said its other offerings – including Symantec Endpoint Protection cloud, Symantec Endpoint Protection Small Business Edition 2013, and the on-premises Backup Exec and NetBackup software – would not be affected.

“I would assume the solution is not a big money maker for Symantec,” said Pushan Rinnen, research director for storage technologies and strategies at Gartner. “This is not a focus for them and that is why they want to move away from it. It didn’t come as a surprise to me because I didn’t get the sense that it was a gigantic success.”

That’s news to Eran Farajun, executive vice president at Asigra, a backup vendor that has strategic alliances with IBM, NetApp and Cisco for its Asigra Cloud Backup product. Farajun said Symantec representatives told him the company was managing “petabytes” of data via Symantec Backup Exec.cloud.

“I think this is interesting and surprising,” said Farajun. “There are a lot of partners that are reselling this. This is Symantec. This is a tier-one company. I thought everything was hunky dory. Then they put this email out, and everybody is saying ‘Huh?’ They can’t decide to act like a risky startup and just get out. There still are a lot of answers that need to come out.”

Symantec’s Backup Exec.cloud service is targeted at small to medium-sized businesses. Symantec partners with Savvis and Rackspace to store data in two East Coast data centers and several in Europe, according to Rachel Dines, senior analyst for infrastructure and operations professionals at Forrester Research.

Dines said Backup Exec.cloud was released in 2011 and the product lacked the features and functions that other cloud backup services offer.

“It seemed like it never matured,” she said. “It had limitations. Scalability was an issue and management was another problem. It was only for Windows. The PC backup feature was basic and it could not compete with other PC features that do sync and share. Plus, it was expensive and that didn’t help. It was not extremely successful and they figured it’s better off to cut their losses and get out.”

While Symantec has made the decision internally, the Backup Exec.cloud portal still showed the product as available as of this evening. In a live chat, a sales representative apologized for the confusion.

“Apparently, it is not available for purchasing. It’s not public yet,” the person said. “I am happy to discuss our other backup solutions with you that are very successful and popular products.”

Rinnen said she was briefed on the company’s plans on Nov. 5. She was told an announcement would be sent to partners on Nov. 4 and the portal would be shut down to new customers by Dec. 2, 2013.

“Sounds like they pushed it back,” she said.


November 25, 2013  10:26 PM

Say farewell to SNW

Sonia Lelii

Last month’s Storage Networking World (SNW) conference in Long Beach, Calif., was the swan song for the semi-annual industry event in the United States.

Computerworld/IDG and SNIA, which launched SNW in the United States in 1999, have decided to end the U.S. version of the storage conference. The two groups will be organizing separate events in the future.

The official SNW website states, “Computerworld/IDG and SNIA have decided to focus our individual conference resources on producing events that cover an expanded storage innovation market and to conclude the production of SNW U.S.”

“It was just that IDG and SNIA had different goals in moving forward,” said an IDG spokesperson who requested anonymity. “We thought it was best to produce our own events.”

IDG intends to incorporate storage into its broader conferences, such as the CITE Conference + Expo, Open Business and Data+. CITE covers technologies ranging from mobile to Big Data that bring consumer devices into the enterprise. The Open Business event primarily focuses on Big Data.

SNIA, a group that consists mainly of storage vendors, will go forward with storage-focused shows such as the Data Storage Innovation Conference and the Storage Developer Conference.

Held every spring and fall, SNW was a major industry event until the late 2000s, when large storage vendors began focusing more on their own conferences. Attendance at SNW dropped significantly over the last five years or so, especially among vendors. At one recent show, a vendor had a booth set up with no one staffing it.

There was widespread speculation in the industry that SNW would cut back from two shows a year to one, but the SNW parent companies decided to cancel the U.S. shows entirely. It’s unclear if SNW Europe – held once a year – will continue. “We are not sure yet,” said a SNIA representative who requested anonymity when asked if the European show would continue.

