Storage consolidation seems to be a simple concept. If you reduce the number of storage systems, you benefit from fewer devices to manage, less space required, and less power/cooling demands. Yet there is confusion over exactly what the term storage consolidation refers to.
The confusion stems from the gap between some vendor messaging and what IT storage professionals actually view as storage consolidation. This leads to miscommunication and mismatched expectations about storage optimization projects.
For IT storage professionals, storage consolidation is about storage efficiency. A new storage system can be deployed that meets the aggregate performance and capacity demands of the disparate systems it replaces. The simplest form of storage consolidation is to reduce the number of boxes on the floor. But storage consolidation does not mean one storage system for all purposes.
There are legitimate reasons why IT operations end up with multiple storage systems over time. While people claim this can be avoided through better management and planning, things just don’t work out that way. Multiple storage systems come about because:
• Projects that require more storage come with a budget to purchase new storage systems specifically for that project.
• IT operations consolidate because of acquisitions or mergers.
• New capacity demands require more storage, and it often makes sense to purchase additional systems instead of expanding existing storage systems. That’s because adding capacity to existing storage reduces the access density and overall performance. Also, the asset depreciation schedule for the existing storage system may make it impractical to reset the schedule with an addition.
The “single box for everything” concept is not practical. From an economic standpoint, not all data has equal value and less valuable data can be stored on less expensive, lower-performing storage. The economics of storing data includes the cost of the storage system and the operational costs for protecting and migrating data. The data typically has a lifespan that long outlives any storage system, and managing data over its lifespan is more important for the IT storage professional than the box currently in use. And storage systems are transient. They last a maximum of four or five years before they are replaced with the latest, greatest technology.
Tiered storage can lead to consolidation and enable storage efficiency. Using solid-state technology as a performance tier is a hot trend. Tiered storage allows for greater consolidation by managing variations in performance requirements, exploiting the way the probability of access to data changes over time. This lets a single storage system absorb a greater degree of consolidation while still meeting performance and capacity demands.
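The idea that access probability declines with age can be sketched in a few lines. The policy below, placing recently touched data on solid-state and colder data on disk, is purely illustrative; the one-day threshold and tier names are assumptions, not any vendor's algorithm.

```python
import time

# Hot data (accessed within the window) stays on the fast tier; the
# 24-hour window is an assumed policy, not a product default.
HOT_WINDOW = 24 * 3600

def choose_tier(last_access_ts, now=None):
    """Place recently accessed data on SSD, colder data on HDD."""
    now = now if now is not None else time.time()
    return "ssd" if (now - last_access_ts) <= HOT_WINDOW else "hdd"

# Two hypothetical data sets: one touched just now, one idle for 10 days.
now = time.time()
blocks = {"db-index": 0, "old-log": -10 * 24 * 3600}  # offsets from now
placement = {name: choose_tier(now + delta, now) for name, delta in blocks.items()}
```

A real tiering engine would also weigh access frequency and I/O size, but the core decision, demoting data as its access probability falls, is the same.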
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
A $13 million funding round will help accelerate the transition from Reldata to Starboard Storage Systems.
Starboard closed the round today, three months after re-launching with a new name and re-architected multiprotocol storage system, the AC72. Its lone venture capitalist investor at the time was Reldata investor Grazia Equity of Germany. Starboard’s latest round is led by another German VC, JP Ventures GmbH, with participation from Grazia.
Starboard chief marketing officer Karl Chen said Starboard will use the funding to expand its sales, marketing and customer support. Chen said Starboard has about 40 employees now at its Broomfield, Colo., and Parsippany, N.J., offices and he expects that number to increase significantly over the next three months.
Starboard claims more than 40 customers and more than 1.5 PB of capacity sold for its AC72 systems. Chen said the vendor competes mostly with NetApp FAS2000 and FAS3000 and EMC VNX 5000 unified storage systems.
The AC72 supports Fibre Channel, iSCSI and NAS storage but merely having multiprotocol support isn’t enough these days because the market is flooded with unified storage systems. Starboard will only win if it can live up to its promise to deliver greater storage efficiency and performance at substantially less cost.
Each AC72 system includes three solid-state drives (SSDs) as an acceleration tier. The system automatically writes large sequential workloads to cheaper capacity SAS drives and random transactional workloads to 15,000 rpm SAS drives.
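Routing writes by workload type can be illustrated with a simple heuristic: a write that lands directly after the previous one is treated as part of a sequential stream. This sketch is an assumption for illustration, not Starboard's actual classifier.

```python
# Label each write by whether it continues the previous offset; a real
# system would track multiple streams and use more history than one write.
def classify_writes(offsets, block_size=4096):
    """Return 'sequential' or 'random' for each write offset."""
    labels = []
    prev = None
    for off in offsets:
        if prev is not None and off == prev + block_size:
            labels.append("sequential")  # route to capacity SAS
        else:
            labels.append("random")      # route to 15K rpm SAS
        prev = off
    return labels
```

For example, `classify_writes([0, 4096, 8192, 1_000_000])` labels the middle two writes sequential and the first and last random.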
Chen said the Starboard’s typical customer is a small enterprise with 50 to 5,000 employees, $10 million to $1 billion in revenue and 50 to 500 virtual machines. “Our customers want to consolidate mixed workloads of unstructured, structured and virtualized data,” he said.
Quantum revealed the OEM deal with object storage vendor Amplidata Wednesday, and said it will have a new family of disk systems with the object storage later this year.
CEO Jon Gacek said Amplidata will eventually become part of Quantum’s cloud architecture, but a “big data” appliance will be Quantum’s first product using the technology. That product will incorporate object storage as a tier on a device running StorNext. Quantum is targeting petabyte-scale content and data analytics with the product.
“The first incarnation will show up as a tier underneath StorNext,” Gacek said. “Some customers will use it with tape, and some will use it to replace tape.”
Gacek said Quantum looked at several object-storage vendors but picked Amplidata because of its performance and the way its BitSpread erasure coding algorithm disperses data to guarantee accessibility.
“Amplidata’s performance is very strong,” Gacek said. “More important to me is Amplidata’s ability to do bit spreading to protect data and expand that to geospreading data, as opposed to doing RAID and replication. That really lowers the cost of archiving.”
EMC today confirmed the poorly kept secret that it is buying flash array startup XtremIO. EMC did not disclose the price, but Israeli business publication Globes, which first reported a deal was likely last month, put the price at $430 million.
That’s a steep price for a company that is not even shipping products yet, but it underscores EMC’s serious push into flash. EMC said it will reveal details about its plans for XtremIO at EMC World later this month. EMC is also expected to flesh out details about its PCIe-based “Project Thunder” shared storage appliance at the show.
According to the XtremIO website, its product is a clustered flash array that scales out for capacity and performance. XtremIO claims the system can be rapidly deployed with simple steps for creating volumes, defining hosts, and mapping volumes to hosts. The arrays support thin provisioning and global deduplication for primary storage, and XtremIO said it would be cost competitive with high-performance spinning disk storage.
By Todd Erickson, News and Features Writer
Pivot3 is continuing its push in the virtual desktop infrastructure (VDI) space by working closer with VMware.
Last week, Pivot3 announced that its virtual storage and compute (vSTAC) line of appliances for SMB VDI environments now supports VMware’s View 5.1 virtual desktop system as part of Pivot3’s participation in VMware’s Rapid Desktop Program.
According to Lee Casswell, Pivot3’s chief strategy officer, the Rapid Desktop Program and View 5.1’s new vCenter Operations for View (vCOV) and View Storage Accelerator (VSA) features will help speed SMB virtual desktop pilot programs and deployments.
Pivot3’s vSTAC appliances use a distributed RAID and grid computing infrastructure to allow individual appliance resources to be shared among an entire vSTAC deployment to better handle a VDI deployment’s need for increased input-output operations per second (IOPS) and easy scalability. Each vSTAC appliance can support 100 virtual desktops. To increase an environment’s number of virtual desktops, administrators add more appliances.
Casswell said View 5.1’s vCOV will benefit SMB virtual desktop deployments because it allows IT departments without dedicated storage or virtual desktop administrators to monitor individual desktop and system health, troubleshoot issues, and assign resources from one pane of glass.
The VSA is similar to vSphere’s Content Based Read Cache (CBRC) in that it takes advantage of linked clones by caching desktop image blocks to reduce storage I/O while reading View images, Casswell said. The VSA reduces performance bottlenecks and lowers storage costs.
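The benefit of caching shared base-image blocks for linked clones can be sketched simply: because many cloned desktops read largely identical blocks, one cache absorbs almost all repeated reads. The class below is an illustration of that effect, with hypothetical names, and is not VMware's actual CBRC or VSA implementation.

```python
# A shared read cache keyed by (image, block): the first desktop's read
# populates the cache and later desktops hit it instead of storage.
class ImageReadCache:
    def __init__(self):
        self.cache = {}
        self.hits = 0
        self.backend_reads = 0

    def read(self, image, block, backend):
        key = (image, block)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.backend_reads += 1   # only a miss touches back-end storage
        data = backend[key]
        self.cache[key] = data
        return data

# 100 cloned desktops each boot-read the same four base-image blocks.
backend = {("base", b): f"block{b}".encode() for b in range(4)}
cache = ImageReadCache()
for desktop in range(100):
    for b in range(4):
        cache.read("base", b, backend)
```

Of 400 logical reads, only four reach the back end; the rest are absorbed by the cache, which is exactly the boot-storm I/O reduction the article describes.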
Pivot3 is targeting state and local governments, education and remote offices and branch offices (ROBOs) of larger firms. The main driver for these markets, particularly education, is the increased use of personal computing devices in the classroom and business by end users, the so-called bring-your-own-device (BYOD) trend.
“Users are expecting to use their own PCs,” Casswell said. That includes laptops, smartphones, and tablet computers. The IT administrators Casswell has talked with are shifting their focus away from standard desktop computing devices. “Rather than have to go and invest money in the end points, [administrators are] investing money in providing more intelligent centralized classroom designs,” Casswell said.
James Bagley, a senior analyst with Storage Strategies Now, believes Pivot3 has positioned its VDI-in-a-box solution well. “They are targeting the right markets,” Bagley said. “Places where you have a medium-sized VDI environment and they’ve got a really easy way to address it.”
Bagley added that the SMB VDI storage market has no clear leaders yet.
“I don’t know of anyone who right now really stands out,” Bagley explained. “All of the [storage] manufacturers right now are working on similar capabilities.”
Pivot3’s vSTAC View 5.1 support is available immediately with pricing starting at $350 per desktop, which includes all licensing fees.
Storage systems are undergoing important changes. New systems are becoming available that are both sophisticated and make storage “simple.” Simple is mainly a euphemism for automating many complicated tasks that administrators had to deal with before, but there’s a lot more to this than just automation of tasks.
There are modern architectures where the underlying device abstraction or virtualization has been changed to enable advanced features such as:
• allocating capacity only on write operations (thin provisioning)
• distribution of data across devices to maximize the number of possible I/O operations (wide striping)
• applying device protection algorithms such as RAID or Forward Error Correction at the abstracted level
• and other advanced capabilities
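The first of the listed features, allocating capacity only on write, can be sketched in a few lines: a volume advertises a large virtual size but consumes backing capacity only when a block is first written. The class and names below are illustrative assumptions, not any product's API.

```python
# Minimal thin-provisioning sketch: capacity is allocated lazily on write,
# and unwritten blocks read back as zeros.
class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks
        self.backing = {}  # virtual block -> data, allocated only on write

    def write(self, block, data):
        assert 0 <= block < self.virtual_blocks
        self.backing[block] = data  # capacity consumed here, not at creation

    def read(self, block):
        return self.backing.get(block, b"\x00")  # thin: zeros until written

    @property
    def allocated(self):
        return len(self.backing)

vol = ThinVolume(virtual_blocks=1_000_000)  # presents a million blocks
vol.write(42, b"x")
```

The volume now reports one allocated block despite advertising a million, which is the over-provisioning efficiency thin provisioning delivers. Wide striping follows the same abstraction: the `backing` map could scatter those allocated blocks across every device in the pool to maximize available I/O operations.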
I wrote about some of these architectural changes here.
Other updates have changed the way storage is configured. For advanced systems, element managers are made simpler by automating underlying actions. And the tuning that once was a cross between tribal knowledge and super-specialist training is now built into these systems.
Another ongoing change is the elimination of electro-mechanical devices for storage. The current trend is toward NAND flash used in solid-state drives (SSDs). These devices consume less power, deliver greater performance and potentially last longer than disk technology. Currently undergoing a rapid price decline, flash and the solid-state technology to follow will become the foundation of modern storage devices.
To use an automobile analogy, storage systems have moved from a relatively primitive state to a modern system that makes it seem simple. Automobiles that used to require a crank start, manual adjustment of the spark advance, and points changes every 10,000 miles are inconceivable to most of today’s drivers. How many car owners today know what a manual choke is?
Storage systems are making that same type of modernization transition. We’re at an inflection point for storage as we move to a modern generation of systems.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
The round brings Nirvanix’s total funding to $70 million. CEO Scott Genereux said the company is expanding its “Cloud Competency Center” in Boulder under its new VP of cloud storage engineering Dave Barr, who previously led engineering for LeftHand Storage iSCSI SANs at Hewlett-Packard. Nirvanix is also keeping its San Diego engineering team.
“Over the next six to 12 months you can expect to see our engineering team deliver innovation aimed at addressing petabyte-scale clouds,” Genereux said. He said Nirvanix will also likely increase its data centers from eight to at least 10, with additions in Europe, Asia and perhaps South America.
Nirvanix is trying to establish itself as the leading enterprise cloud storage provider, with the help of an OEM deal with IBM Global Services signed last October. Nirvanix is managing a 9 PB digital repository cloud at USC, and Genereux said the provider will soon announce another customer with even more data in the cloud. He said Nirvanix has signed 30 customers already through IBM.
“Customers are frustrated with the same old story, buy a box, hope you have enough capacity and manage it the same way and do a tech refresh in three years,” he said. “It’s a circle that box vendors have formed for them.”
Genereux said Nirvanix competes more with the “box vendors” than with other cloud services. He said EMC is the most frequent competitor, but most of the data Nirvanix moves to the cloud is coming off NetApp storage. “EMC might be losing against us when we’re bidding, but NetApp is losing the footprint,” he said.
Genereux said he hopes Nirvanix can reach profitability without another funding round. “We’re marching towards an IPO,” he said. “Our business strategy is based on taking the company to the public markets over the next few years.”
Khosla Ventures led the funding round, with previous investors Valhalla Partners, Intel Capital, Mission Ventures and Windward Ventures participating. Khosla general partner David Weiden joins the Nirvanix board.
Move over SAN, WAN, LAN and MAN. EMC is pushing the notion of a CAN – cache area network – with its upcoming Project Thunder product.
During the Solid State Storage Symposium last week in San Jose, Calif., Brian Sorby, an EMC business development director, provided more details on the Thunder product for analysts and bloggers. EMC first disclosed Thunder when it officially launched its VFCache – formerly Project Lightning – in February. VFCache is a PCIe flash card that goes inside a server. Thunder will expand that by using PCIe flash in an appliance.
Sorby said Thunder would consist of a high-speed front end (InfiniBand or 40-gigabit Ethernet), a lightweight operating system and VFCache cards inside a 2U or 4U appliance.
He said that while VFCache lets end users accelerate LUNs on their SANs, it is limited because “putting these cards into every server in an enterprise is a tedious process.” That’s where Thunder comes in.
“Thunder is a perfect complement to blade servers and rack servers that can’t be messed with,” Sorby said. “It’s a cache area network to bring this above a storage area network to today’s 21st century storage bottleneck elimination. Our new term or new vision is to bring SSDs out of the array, and also to bring the SSDs out of having to buy an entirely different type of product just to get access to it, also to alleviate the problem of having to put a PCIe card in every server in your environment to take advantage of some high I/O feature. This is establishing our cache area network. It’s the next logical step, and the direction EMC is pointed in today.”
EMC is certain to provide more details of Project Thunder at EMC World later this month, and may even officially launch the product.
Last week two major storage vendors made significant system announcements. Hitachi Data Systems rolled out its Hitachi Unified Storage (HUS) that has block and file support and is aimed at the mid-tier market. NetApp unveiled Dynamic Disk Pooling for the E-Series platforms. Dynamic Disk Pooling is a new storage pooling implementation enabling faster drive rebuilds than with traditional RAID.
I found it interesting that neither of these launches was coordinated with a major storage event. This was a bit unexpected because most major storage announcements come just prior to storage events, either industry-wide events or the vendors’ own shows, so the vendors have the opportunity to speak in depth with the assembled press, analysts and customers about their new products. In these cases, HDS and NetApp decided not to use any of the recent storage events or wait until the next one.
This raises the question of what would be considered a major storage event now.
VMworld, which occurs each August, is probably the biggest show for announcing new storage products from multiple vendors. VMworld is filled with IT professionals involved with server virtualization. And these pros usually realize that storage systems can make a large difference in the number of virtual machines supported per physical server and ultimately determine the success of server virtualization.
The National Association of Broadcasters (NAB) show held this month in Las Vegas has become another storage showcase event. There were more than 20 storage product announcements at this year’s NAB. But the storage systems at NAB have different usage characteristics than traditional IT.
There are also many storage announcements at the Supercomputing conference, with the next (SC12) scheduled for November. That conference is focused on the high-performance computing market.
Both NAB and SC have large numbers of attendees representing many different interest areas in their industry. NAB drew more than 100,000 attendees this year. VMworld is increasing yearly in attendance with the focus more from traditional IT than as a specialty vertical. Years ago the major announcement venues for storage were Comdex and CeBit.
Over the next two months, Symantec, EMC, Hewlett-Packard and Dell will all host their own conferences and launch products there instead of at industry-wide shows.
This means there is no longer a handful of major shows we can look to for storage product news. Announcements can come from the remaining industry storage shows such as Storage Decisions or Storage Networking World (SNW), more targeted shows such as NAB and Supercomputing, vendor-sponsored shows, or independent of shows altogether. You can’t stay up to speed by going to one or two shows a year anymore. Reading the coverage from SearchStorage and the other TechTarget storage sites is probably the best way to keep up with storage announcements.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
It’s been a slow news year for data deduplication. The data reduction technology has yet to make its big splash for primary storage and is taken for granted for backup. But things picked up this week as EMC Data Domain, FalconStor, Hitachi Data Systems and Permabit all either expanded their dedupe products or talked about their plans.
Permabit aims dedupe software at flash arrays
With the adoption rate of dedupe for primary storage slower than anticipated, Permabit this week unveiled Albireo for Flash Technologies, which is really a flashy way of saying it supports solid-state storage with its Albireo Software Development Kit (SDK) and Virtual Data Optimizer (VDO) for Linux.
Permabit does not sell Albireo software directly, but makes its SDK and VDO available for OEM partners.
Permabit founder and CTO Jered Floyd said primary dedupe adoption is slow because the large established storage vendors resist the notion of cutting into disk sales by shrinking data. (The large vendors dispute this, and all have, or are working on, some type of dedupe for primary data.) Floyd maintains the benefits of and need for primary dedupe are greater for flash than for disk arrays, and the startups selling flash systems are more open to incorporating dedupe.
“We believe dedupe will be a basic required feature for any flash platform,” he said. “Permabit makes it so these companies building new flash platforms can easily and rapidly integrate dedupe.”
Does dedupe have to be different for data on flash than on hard disk? Floyd said there are benefits and challenges for dedupe on flash that go beyond dedupe on hard drives. He said dedupe can not only significantly lower the cost per gigabyte of flash, but also help improve latency and reliability and avoid wear by reducing the number of writes on a system. Floyd claims Albireo can meet the high demands of flash by handling more than 250,000 IOPS on a single processor core.
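The wear-avoidance argument comes down to one mechanism: if a block's content fingerprint is already in the index, no new flash write is issued. The sketch below is a generic illustration of content-hash dedupe, not Permabit's Albireo implementation, and its names are assumptions.

```python
import hashlib

# Generic content-hash dedupe: only blocks with a previously unseen
# fingerprint consume a physical flash write.
class DedupeStore:
    def __init__(self):
        self.index = {}       # fingerprint -> physical location
        self.flash_writes = 0

    def write(self, block: bytes):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.index:
            self.index[fp] = self.flash_writes  # placeholder location
            self.flash_writes += 1              # only unique data hits flash
        return fp

# 50 identical OS-image blocks plus one unique block.
store = DedupeStore()
for block in [b"os-image"] * 50 + [b"user-data"]:
    store.write(block)
```

Fifty-one logical writes produce only two physical writes, which is how dedupe stretches both the capacity and the write endurance of a flash device.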
Permabit CEO Tom Cook said “a handful” of flash vendors are involved in the early access program for Albireo and he has commitments from a few. He expects to announce deals in the second half of the year.
It will be interesting to see who signs up for Albireo. All-flash startups such as Nimbus Data, Greenbytes, Pure Storage, SolidFire, and XtremIO have dedupe or are promising it for when they begin shipping. Does that mean the market for Albireo is smaller than Permabit anticipates?
“It would be a mistake to assume we’re not working with vendors who have announced dedupe but have not yet delivered,” Floyd said. “Not having dedupe in a flash storage system is going to be a huge liability.”
HDS prepares primary dedupe appliance
Hitachi Data Systems is planning primary data reduction for its newly released Hitachi Unified Storage, as well as a deduplication appliance, according to Fred Oh, HDS’ senior product marketing manager for NAS. He said data reduction for the file portion of the HUS will be available this year and the appliance is expected in the summer. Oh wouldn’t say if HDS is using technology from Permabit, which had an OEM deal with NAS vendor BlueArc before HDS acquired BlueArc.
FalconStor provides inline dedupe option
FalconStor added inline dedupe to its virtual tape library (VTL) product, FalconStor VTL 7.5. FalconStor now supports inline, concurrent and post-processing dedupe as well as its Turbo dedupe option for post-processing.
In the early days of dedupe, the inline versus post-process issue was hotly debated. Inline requires less disk capacity on the back end because it reduces data before writing it to the backup target. Post-processing dedupes after the data lands on the target, so it requires more capacity but is usually the faster method. Faster processors have alleviated inline dedupe speed concerns, and some of the early post-processing advocates have added an inline option or switched from post-processing to inline.
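The capacity side of that tradeoff is easy to quantify: inline never lands duplicate blocks, while post-process lands everything and reclaims space later, so its peak back-end consumption equals the full ingest. This is a simplified model for illustration, not FalconStor's implementation.

```python
# Peak back-end capacity under the two dedupe modes, in blocks.
def inline_peak_capacity(blocks):
    """Inline: duplicates are eliminated before data is written."""
    return len(set(blocks))

def post_process_peak_capacity(blocks):
    """Post-process: everything lands first; dedupe reclaims space later,
    so peak usage is the full ingest size."""
    return len(blocks)

# A hypothetical backup stream with repeated blocks.
backup = ["a", "b", "a", "c", "a", "b"]
```

Here inline peaks at three back-end blocks versus six for post-process before reclamation, which is why inline suits the smaller systems Riddle mentions, while post-process trades temporary capacity for faster ingest.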
FalconStor claims its dedupe options are the most flexible.
“We added inline dedupe as a fourth choice,” said Darrell Riddle, FalconStor senior director of product marketing. “We see it as a good fit for smaller systems or systems that need more power up front.”
For a four-node VTL cluster, FalconStor claims its inline dedupe can handle more than 28 TB per hour and post-processing dedupe can back up more than 40 TB per hour.
FalconStor’s concurrent dedupe runs post-process, but does not wait until all backups are completed before deduping on the back end. Riddle said FalconStor VTL customers can also turn off dedupe if they have little or no compressible data.
FalconStor VTL 7.5 software costs from $2,500 to $4,500 per terabyte under management, depending on the configuration.
EMC gives Oracle RMAN a DD Boost