IBM and Actifio struck up a partnership this week that the startup hopes will bring its Protection and Availability Storage (PAS) platform to the cloud and that IBM sees as a way to fill data protection needs for service providers.
IBM and Actifio said they will offer bundles to cloud service providers and VARs. The packages include Actifio’s PAS data protection with IBM DS3500 Express, IBM Storwize V7000, XIV Gen3 and SAN Volume Controller (SVC) systems.
IBM has its own backup, replication, disaster recovery and data management products, so it’s unclear why it needs Actifio. But Mike McClurg, IBM VP of global midmarket sales, said Actifio provides one tool to handle all of those functions.
“We approach managed service providers from a business perspective,” he said. “How can a partnership with IBM grow their business? It’s challenging for managed service providers to find cost-effective data solutions; that requires cobbling together a lot of backup, replication, snapshot and data management tools. Actifio is an elegant way of replacing a lot of technology and overlapping software products.”
Maybe the partnership is the beginning of a deeper relationship between the vendors. Actifio president Jim Sullivan is a former VP of worldwide sales for IBM System Storage. He maintains that the startup is keeping its partnership options open, but he is also counting on IBM to bring Actifio into deals the startup can’t land on its own.
“This is not an exclusive deal,” he said. “But we’re driving this with IBM. Showing up with service providers with IBM is a great opportunity for us to get reach and credibility.”
Hewlett-Packard today quietly launched an all solid-state drive (SSD) version of its LeftHand iSCSI SAN array.
Unlike the server and services announcements HP made at its Global Partner Conference, HP made its storage news with little fanfare on a company blog.
The HP P4900 SSD Storage System has 16 400-GB multi-level cell (MLC) SAS SSDs – eight in each of the system’s two nodes. Each two-node system includes 6.4 TB of capacity, and customers can add 3.2-TB expansion nodes to scale to clusters of 102.4 TB. Expansion nodes increase the system’s IOPS as well as its capacity.
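The capacity figures above are internally consistent, which a quick sketch can confirm. All numbers come from the article; the arithmetic works in whole gigabytes (HP counts capacity in decimal units, so 1 TB = 1,000 GB):

```python
# Sanity-checking the P4900 capacity figures quoted above, working in
# whole gigabytes to avoid floating-point noise. All numbers come from
# the article; capacity is counted in decimal units (1 TB = 1,000 GB).

SSD_GB = 400
SSDS_PER_SYSTEM = 16              # eight SSDs in each of the two nodes
EXPANSION_NODE_GB = 3_200         # 3.2-TB expansion node
MAX_CLUSTER_GB = 102_400          # 102.4-TB maximum cluster

base_gb = SSD_GB * SSDS_PER_SYSTEM
expansion_nodes = (MAX_CLUSTER_GB - base_gb) // EXPANSION_NODE_GB

print(base_gb)           # 6400 GB = 6.4 TB in the base two-node system
print(expansion_nodes)   # 30 expansion nodes to reach the 102.4 TB maximum
```

So a maxed-out cluster is one base system plus 30 expansion nodes.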
The systems use the HP SMARTSSD Wear Gauge, firmware that monitors the SSDs and sends out alerts when a drive nears the end of its life. The monitoring firmware is part of the P4000 Management Console.
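The wear-gauge idea itself is simple threshold-based alerting. The sketch below is purely illustrative, not HP's firmware; the drive names, wear values and 90% threshold are all assumptions:

```python
# A minimal model of wear-based alerting, in the spirit of the SMARTSSD
# Wear Gauge described above. Illustrative only: the drive IDs, wear
# values and 90% threshold are assumptions, not HP's actual firmware.

WEAR_ALERT_THRESHOLD = 0.90   # alert once 90% of rated write endurance is used

def drives_needing_replacement(wear_by_drive):
    """Return the drives whose wear-out fraction crosses the threshold."""
    return [drive for drive, wear in wear_by_drive.items()
            if wear >= WEAR_ALERT_THRESHOLD]

# Hypothetical per-drive wear (fraction of rated program/erase cycles used)
fleet = {"bay-0": 0.42, "bay-1": 0.91, "bay-2": 0.88, "bay-3": 0.97}

for drive in drives_needing_replacement(fleet):
    print(f"ALERT: {drive} at {fleet[drive]:.0%} of rated endurance")
```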
HP claims the monitoring and scale-out architecture solve the major problems with solid-state storage arrays. “When it comes to SSDs in general, they are great for increasing IOPS and benefitting a business with lower power/cooling requirements,” P4000 product marketing manager Kate Davis wrote in the blog. “But the bad comes with unknown wear lifespan of the drive. And then it turns downright ugly when traditional dual-controller systems bottleneck the performance that was supposed to be the good part. … Other vendors must build towers of storage behind one or two controllers – LeftHand scales on and on.”
The large storage vendors offer SSDs in place of hard drives in their arrays, and there’s no reason they can’t ship a system with all flash. But the P4900 is the first dedicated all-flash system from a major vendor. Smaller vendors such as Nimbus Data, Pure Storage, SolidFire, Violin Memory, Whiptail and Texas Memory Systems have all-SSD storage systems.
A 6.4-TB P4900 costs $199,000. The expansion unit costs $105,000.
NetApp CEO Tom Georgens says he expects server-side flash to become a key part of his vendor’s flash strategy. However, NetApp will take a different approach than its rival EMC.
Asked about EMC’s VFCache product during NetApp’s earnings call Wednesday, Georgens said server-side flash is “a sure thing,” but NetApp will focus on data management software that works with PCIe cards instead of selling the cards. He doesn’t rule out selling cards either, though.
“I don’t think the opportunity is simply selling cards into the host, although we may do that,” he said. “But our real goal is we’re going to bring the data that’s stored in flash on the host into our data management methodology for backup, replication, deduplication and all of those things. It isn’t as simple as we’re going to make a PCI flash card. Our focus this year is the software component and bringing that into our broader data management capability.”
With VFCache, EMC sells PCIe cards from Micron or LSI with the storage vendor’s management software. NetApp appears intent on selling software that will work with any PCIe cards – or at least the most popular ones. The question is whether it can develop software that integrates as tightly with many cards as it could by focusing on one or two.
Georgens said NetApp was correct all along in its contention that using flash as cache is more effective than replacing hard drives in an array with solid-state drives (SSDs). NetApp’s Flash Cache card goes into the array to accelerate performance. It is included on all FAS6000 systems and as an option on NetApp’s other FAS systems. NetApp does offer SSDs in the array, but recommends flash as cache.
“Flash is going to be pervasive,” Georgens said. “I think you’re going to see it everywhere in the infrastructure. Our position all along has been that flash as a cache is where it has the most impact. And I would say that we actually see probably more pervasive deployment of flash in our systems than anybody else in the industry.”
On the hard drive front, Georgens said the impact from shortages caused by the floods in Thailand wasn’t as bad as anticipated last quarter, although it will take another six to nine months before the “uncertainty” lifts.
“While drive vendors had little forward delivery visibility, most of the disk drives shipped in excess of initial estimates,” Georgens said. “However, not all drive types were universally available and some spot shortages impacted revenue and will likely do so in the upcoming quarter as well. … We expect the drive situation to continue to inject uncertainty into the revenue for the next nine months as availability, cost and pricing settle out in the market.”
One by one, solid-state flash vendors are adding caching software to enhance their products. SanDisk picked up startup FlashSoft today in a move designed to make applications run faster with SanDisk’s and other vendors’ PCIe and solid-state drive (SSD) products.
Enterprise PCIe flash pioneer Fusion-io began the trend by acquiring IO Turbine last August, and OCZ picked up Sanrad for its PCIe caching software in January. Solid-state vendor STEC internally developed its EnhanceIO caching software, and EMC’s caching software and FAST auto-tiering appliance play a big role in its VFCache server-side flash product.
The acquisition of FlashSoft leaves startups Nevex, Velobit and perhaps a few other vendors still in stealth as obvious targets for solid-state vendors. Nevex and Texas Memory Systems last week said they were jointly developing software that would speed applications running on TMS SSD storage.
FlashSoft software turns SSD and PCIe server flash into a cache for the most frequently accessed data. The company came out of stealth last June with FlashSoft SE for Windows and later added FlashSoft SE versions for Linux, VMware vSphere and Microsoft Hyper-V.
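Caching the hottest blocks on flash is the core idea behind products like FlashSoft SE. The toy sketch below shows a generic frequency-based admission policy over a fixed-size cache; it is not FlashSoft's (unpublished) algorithm, and the block names are made up:

```python
# A toy frequency-based read cache, illustrating the general idea of
# keeping the most frequently accessed blocks on a small fast tier.
# Generic sketch only -- not FlashSoft's actual caching algorithm.
from collections import Counter

class HotBlockCache:
    def __init__(self, capacity):
        self.capacity = capacity      # number of blocks the flash tier holds
        self.hits = Counter()         # access frequency per block id
        self.cached = set()           # block ids currently on flash

    def access(self, block_id):
        """Record an access; admit hot blocks, evicting the coldest if full."""
        self.hits[block_id] += 1
        if block_id in self.cached:
            return "flash hit"
        if len(self.cached) < self.capacity:
            self.cached.add(block_id)
        else:
            coldest = min(self.cached, key=lambda b: self.hits[b])
            if self.hits[block_id] > self.hits[coldest]:
                self.cached.remove(coldest)
                self.cached.add(block_id)
        return "disk read"

cache = HotBlockCache(capacity=2)
for b in ["a", "b", "a", "c", "a", "c", "c"]:
    cache.access(b)
print(sorted(cache.cached))   # the two hottest blocks end up on flash
```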
SanDisk said it will sell FlashSoft SE as standalone software and with the Lightning Enterprise SSDs and upcoming PCIe-based devices based on technology that it acquired by buying Pliant last May for $327 million. SanDisk’s SSDs are used by Dell EqualLogic, NetApp, Hewlett-Packard and others through OEM deals.
“We think this is the next step in our performance acceleration journey,” said Greg Goelz, VP of SanDisk’s enterprise storage solutions group.
Goelz said FlashSoft’s software was appealing because it can work with any hardware, which fits SanDisk’s OEM model, and because it scales better than competing products. “We looked at how did they scale in capacity? If you move from 100 gigabytes of SSDs to terabytes, does the metadata scale exponentially? Is the overhead low? Does it have the best approach to support what’s out there today and to support the evolution from single server to virtualization and clusters? FlashSoft was well ahead of anybody in the market by a substantial lead.”
SanDisk did not disclose the purchase price for FlashSoft.
Information technology (IT) storage professionals are looking at a grim situation: the amount of capacity they need to store their organizations’ data is beyond what they can deal with given their current resources.
The growth in data that they will have to deal with comes from several areas:
• The natural increase of the amount of data required for business continuance and expansion of current operations. This data represents the normal business requirements.
• New applications or business opportunities. While this is a positive indicator for the business, it represents a potentially significant increase in the amount of data under management.
• Machine-to-machine data from pervasive computing. This generates an overwhelming amount of data that most IT people have not had to deal with before. The data is used for “big data” analytics or business intelligence, and it will be left to IT to manage on behalf of the data scientists.
The problem is really one of scale. Because operational expenses typically are not scaled properly to address the management required for that amount of data, there is insufficient budget to handle the onslaught of data.
Storage professionals are looking at different approaches to address the increased demands. These include more efficient storage systems. Greater capacity efficiency – making better use of the capacity already deployed – is a big help. So are storage systems that support consolidation of workloads onto one platform.
Data protection is a continuing problem. The process is viewed as a necessary requirement but not as a revenue-enhancing area. Consequently, data protection needs are dramatic but often lack the financial investment to accommodate the capacity increases. This means storage pros must either find products that can be more effective while fitting within the financial constraints or re-examine the entire data protection strategy by using technologies such as automated, policy-controlled archiving and data reduction.
Exploiting point-in-time (snapshot) copies on storage platforms for immediate retrieval demands, implementing backup to disk, and reducing the schedule for backups on removable media to monthly or less frequently are considerations for stretching backup budgets.
Storage professionals need to be open to new ideas for dealing with the massive influx of data. Without addressing the greatly increasing capacity demand, managed storage becomes an oxymoron.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
IBM Corp. is gunning for EMC with its XIV storage system, and “Big Blue” claims it is making a dent in EMC’s lead.
IBM last week added a multi-level cell (MLC) solid-state drive (SSD) cache to its Gen3 XIV Storage System, while it also disclosed some numbers to show it is eating into its top storage rival’s customer base.
Bob Cancilla, vice president of IBM Storage Systems, said his division has shipped 5,200 units of both its Gen2 and Gen3 XIV Storage Systems since the end of last year, and IBM added 1,300 new open-system customers to its storage division with the XIV. Of those 1,300 customers, about 700 replaced EMC’s high-end enterprise Symmetrix VMAX or midrange VNX storage systems, he said.
“They are our biggest bull’s eye,” Cancilla said of EMC. “They have seen the impact.”
Cancilla acknowledged IBM “had a poor presence in the tier-one enterprise open space” before acquiring the privately held, Israel-based XIV company in January 2008. IBM re-launched the XIV system under its brand in September 2008. In the fourth quarter of 2011, XIV “was 75 percent of my shipments,” said Cancilla. More than 59 customers have 1 PB of usable storage on XIV systems, while at least 15 customers have more than 3 PB of storage. More than 65% of the XIV systems have at least one VMware host attached to them.
“We are doing a lot of work to ensure we have the latest and greatest VMware interoperability,” Cancilla added.
XIV didn’t have an SSD option until last week, and that is becoming a must-have feature for enterprise storage. The XIV SSD announcement came one day after EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare. The XIV SSD tier sits between the cache, which uses DRAM, and the disks in the system so when the cache gets full the data spills over to the SSDs.
“You have 360 GB of DRAM cache and now it goes to 6 TB,” Cancilla said. “It’s a huge jump. It’s a 20x improvement in the cache capability.”
IBM offers SSD drives and automatic tiering software as an option for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives. Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.
“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”
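The behavior Lucchesi describes can be modeled in a few lines: reads are served from the fast tier when possible, while writes always land on the backing array. This is an illustrative sketch of the general read-cache/write-through pattern, not IBM's or EMC's implementation, and the block names are invented:

```python
# A minimal model of the read-cache behavior described above: reads hit
# the flash tier when possible, writes are destaged straight to the
# backing array. Illustrative only -- not IBM's or EMC's implementation.

class ReadCache:
    def __init__(self):
        self.cache = {}       # the fast tier (SSD or PCIe flash)
        self.array = {}       # the backing disk array

    def write(self, block, data):
        self.array[block] = data      # writes pass through to the array
        self.cache[block] = data      # keep the cache coherent for later reads

    def read(self, block):
        if block in self.cache:
            return self.cache[block]  # fast path: served from flash
        data = self.array[block]      # slow path: read from disk
        self.cache[block] = data      # populate the cache on a miss
        return data

tier = ReadCache()
tier.write("lun0/blk7", b"payload")
print(tier.read("lun0/blk7"))         # served from the flash tier
```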
IBM also added a mirroring capability to XIV, so customers can replicate data between Gen2 and Gen3 XIV systems.
Starboard Storage Systems launched today, portraying itself as a brand new startup with a new technology and architecture for unified storage. But Starboard is in many ways a re-launch of Reldata, which had been selling multiprotocol storage for years.
Starboard didn’t volunteer information about its Reldata roots, although representatives freely admitted it when asked. With its new AC72 storage system, Starboard wants to appear as a fresh, shiny company rather than one that has been around the block many times without making much of an impact on the storage world.
“It’s not a rebranding of Reldata but Starboard is not your typical startup,” said Starboard chief marketing officer Karl Chen, who joined the company after it became Starboard. “[Reldata] had great technology, so why not absorb Reldata and reduce our time to market? This way, we were able to get to market a lot faster by leveraging what Reldata had. We had the option of starting brand new or taking something that would accelerate our time to market.”
Starboard has the same CEO, CTO, engineering VP and sales chief as Reldata, and has not yet raised any new funding. Starboard’s 30 employees are a mix of Reldata holdovers and new hires. The AC72 includes Reldata intellectual property and was developed in part by Reldata engineers.
“Absolutely, there is technology that we are leveraging from Reldata to build the Starboard Storage product,” Chen said. But he points out that the Starboard product is a new architecture with a different code base. The Reldata 9240i did not support Fibre Channel, was a single-controller system and used traditional RAID blocks. There was no dynamic pooling or SSD tier. “It’s a completely different product from what [Reldata] was selling,” Chen added.
The company also moved from Parsippany, NJ to Broomfield, Colo., which has a deep workforce with storage experience. Starboard CEO Victor Walker, CTO (and Reldata founder) Kirill Malkin, VP of engineering John Potochnik and director of sales Russell Wine were all part of Reldata. They are joined by chairman Bill Chambers, the LeftHand Networks founder and CEO who sold the iSCSI SAN company to HP for $360 million in 2008.
Starboard will continue to service Reldata 9240i systems, but will no longer sell the Reldata line.
(Sonia R. Lelii contributed to this blog).
Red Hat has been tweaking and expanding the NAS storage products it acquired from Gluster last October. This week Red Hat brought GlusterFS to the cloud with an appliance for Amazon Web Services (AWS).
Last December, Red Hat released a Storage Software Appliance (SSA) that Gluster sold before the acquisition. Red Hat replaced the CentOS Gluster used with the Red Hat Enterprise Linux (RHEL) operating system. This week’s release — Red Hat Virtual Storage Appliance (VSA) for AWS — is a version of the SSA that lets customers deploy NAS inside the cloud.
The VSA is POSIX-compliant, so — unlike with object-based storage — applications don’t need to be modified to move to the cloud.
“The SSA product is on-premise storage,” Red Hat storage product manager Tom Trainer said. “This is the other side of the coin. The VSA deploys within Amazon Web Services with no on-premise storage.”
The VSA lets customers aggregate Amazon Elastic Block Storage (EBS) volumes and Elastic Compute Cloud (EC2) instances into a virtual storage pool.
Trainer said Red Hat takes a different approach to putting file data in the cloud than cloud gateway vendors such as Nasuni and Panzura.
“They built an appliance that sits in the data center, captures files and puts them in an object format and you ship objects out to Amazon,” he said. “We said ‘that’s one way to do it.’ But the real problem has been having to modify your applications to run in the cloud because cloud storage has been built around object storage. If we could take two Amazon EC2 instances and attach EBS on the back end, we could build a NAS file server appliance right in the cloud. Users can take POSIX applications from their data center and install them on EC2 instances. They can take applications they had been running in the data center and run them in the cloud.”
Red Hat prices the VSA at $75 per node (EC2 instance). Customers must also pay Amazon for its cloud service.
Trainer said Red Hat plans to support other cloud providers, and customers would be able to copy files via CIFS if they wanted to move from one provider to another. But Amazon is the only provider currently supported for the Red Hat VSA.
EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare this week. IBM made a quieter launch, adding a solid-state drive (SSD) caching option to its XIV storage system.
IBM’s XIV Gen3 now includes an option for up to 6 TB of fast-read cache for hot data. IBM offers SSD drives and automatic tiering software as an option for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives.
Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.
“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”
The XIV SSD cache is also similar to what NetApp does with its Flash Cache, a product that IBM sells through an OEM deal with NetApp. IBM also sells Fusion-io PCIe cards on its servers. EMC has been selling SSDs in storage arrays since 2008. Flash is showing up in enterprise storage systems in many ways, and those options will keep expanding.
“As SSDs become more price performant, customers are putting them in for workloads that require quick response times,” said Steve Wojtowecz, IBM’s VP of storage software. “We’re seeing real-time data retrievals, database lookups, catalog files, and hot data going to SSDs and colder data going to cheaper devices.”
The other big enhancement in XIV Gen3 is the ability to mirror data between current XIV systems and previous versions of the platform. That is most helpful migrating data from older to newer arrays, although IBM is also pushing it as a way to use XIV for disaster recovery.
In information technology, the inertia described by Newton’s laws of motion – a body at rest tends to stay at rest – often seems to apply. Changes are difficult to make because change is resisted, and resistance to change means missed opportunities to integrate new technologies, improve processes and become more effective.
Management can rationalize its resistance with plausible-sounding excuses. That does not make the reasons correct; it just puts in perspective why choices are made that seem to defy logic when IT management is considered in its entirety.
I’ve heard these reasons for not making a strategic change:
• Avoidance of risk. Introducing a new technology brings risk of some type. The risk is really about potential failure and the implications of that failure for the organization and the people making the decision.
• Inability to schedule the time to implement a new technology or process. “We don’t have time to do this” is a common explanation for not doing something. The conversation goes into limited budgets for staffing and not enough people to take on the extra work. The advantages that might be gained by the implementation are typically dismissed out of hand.
• Limited budget to invest in the technology. The blame for this is usually placed on “executive management” or “the business” making choices that would limit IT’s ability to invest in new technology.
• IT decision makers want to wait until other organizations have proven the technology works. There usually is some justification here, because we all know of past products and technologies that did not last. Organizations want assurance that they are not investing in a transient technology; in IT, the expected lifespan for something new is perceived to be 10 years.
• Complexity introduced into IT over time increases risk and makes change more difficult. As complexity accumulates, the effort required to make any change grows, causing greater resistance to change in the future.
The case against resisting new technologies and procedures in IT is easy to make. Many new technologies can bring value to organizations that properly implement them. These include tiered storage systems with solid-state drive (SSD) technology, storage virtualization, scale-out NAS storage, data reduction technologies, IT as a Service, and ‘big data’ analytics. Not moving forward with technologies and processes that have staying power means missing their economic advantages. A lack of advances in IT can have a parallel effect on IT leadership.
Changes will have to be made eventually, and they may be more costly the longer they are put off. I recently heard about an organization looking to replace a data center because it was more than eight years old. The justification was that the efficiency gain of a new data center was worth the financial investment. That might be true statistically, but it does not seem to be an intelligent overall investment. Evolving through the introduction of new technologies, improving procedures and educating IT personnel has to be a better answer than a complete discard and restart.
But when the barriers – the arguments given to avoid introducing a technology or change – seem overwhelming, inaction becomes the path of least resistance even though it inhibits greater efficiency. The force applied to the body at rest – the push for IT to do something new – may not be enough to start the motion forward. Education on the technologies and their economic advantages needs to supply the net force that moves IT forward.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).