May 10, 2012 10:54 AM
Posted by: Dave Raffo
big data, object storage
Quantum will add object storage as a tier on top of its StorNext file system with the help of an OEM deal with startup Amplidata.
Quantum revealed the OEM deal Wednesday, and said it will have a new family of disk systems with the object storage later this year.
CEO Jon Gacek said Amplidata will eventually become part of Quantum’s cloud architecture, but a “big data” appliance will be Quantum’s first product using the technology. That product will incorporate object storage as a tier on a device running StorNext. Quantum is targeting petabyte-scale content and data analytics with the product.
“The first incarnation will show up as a tier underneath StorNext,” Gacek said. “Some customers will use it with tape, and some will use it to replace tape.”
Gacek said Quantum looked at several object-storage vendors but picked Amplidata because of its performance and the way its BitSpread erasure coding algorithm disperses data to guarantee accessibility.
“Amplidata’s performance is very strong,” Gacek said. “More important to me is Amplidata’s ability to do bit spreading to protect data and expand that to geospreading data, opposed to doing RAID and replication. That really lowers the cost of archiving.”
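The dispersal idea Gacek describes can be sketched in miniature. The toy below is a single-parity erasure code in Python, far simpler than Amplidata's proprietary BitSpread (which tolerates multiple simultaneous failures and spreads fragments across nodes and sites), but it illustrates the basic trade: store data as fragments plus redundancy, and rebuild a lost fragment from the survivors instead of keeping full replicas.

```python
from functools import reduce

def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments plus one XOR parity fragment."""
    padded = data + b"\x00" * ((-len(data)) % k)
    size = len(padded) // k
    frags = [padded[i * size:(i + 1) * size] for i in range(k)]
    # Parity is the byte-wise XOR of all data fragments.
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]

def rebuild(frags: list) -> list:
    """Rebuild the single missing fragment (marked None) by XOR-ing the rest."""
    lost = frags.index(None)
    survivors = [f for f in frags if f is not None]
    frags[lost] = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
    return frags

data = b"archive me across many nodes"
frags = encode(data, k=4)        # 4 data fragments + 1 parity -> 5 "nodes"
frags[2] = None                  # simulate losing one node
restored = b"".join(rebuild(frags)[:4]).rstrip(b"\x00")
assert restored == data          # data survives a node loss at ~1.25x capacity
```

With replication, surviving one failure costs 2x capacity; here the same protection costs 1.25x, which is the cost advantage Gacek points to (real dispersed codes trade more parity fragments for multi-failure tolerance).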
May 10, 2012 8:44 AM
Posted by: Dave Raffo
flash array
EMC today confirmed the poorly kept secret that it is buying flash array startup XtremIO. EMC did not disclose the price, but Israeli business publication Globes, which first reported a deal was likely last month, put the price at $430 million.
That’s a steep price for a company that is not even shipping products yet, but it underscores EMC’s serious push into flash. EMC said it will reveal details about its plans for XtremIO at EMC World later this month. EMC is also expected to flesh out details about its PCIe-based “Project Thunder” shared storage appliance at the show.
According to the XtremIO website, its product is a clustered flash array that scales out for capacity and performance. XtremIO claims the system can be rapidly deployed, with simple steps for creating volumes, defining hosts, and mapping volumes to hosts. The arrays support thin provisioning and global deduplication for primary storage, and XtremIO said the system would be cost competitive with high-performance spinning disk storage.
May 9, 2012 8:44 AM
Posted by: Dave Raffo
vmware view
By Todd Erickson, News and Features Writer
Pivot3 is continuing its push in the virtual desktop infrastructure (VDI) space by working closer with VMware.
Last week, Pivot3 announced that its virtual storage and compute (vSTAC) line of appliances for VDI environments for SMBs now supports VMware’s View 5.1 virtual desktop system as part of Pivot3’s participation in VMware’s Rapid Desktop Program.
According to Lee Caswell, Pivot3’s chief strategy officer, the Rapid Desktop Program and View 5.1’s new vCenter Operations for View (vCOV) and View Storage Accelerator (VSA) features will help speed SMB virtual desktop pilot programs and deployments.
Pivot3’s vSTAC appliances use a distributed RAID and grid computing infrastructure to allow individual appliance resources to be shared among an entire vSTAC deployment to better handle a VDI deployment’s need for increased input-output operations per second (IOPS) and easy scalability. Each vSTAC appliance can support 100 virtual desktops. To increase an environment’s number of virtual desktops, administrators add more appliances.
Caswell said View 5.1’s vCOV will benefit SMB virtual desktop deployments because it allows IT departments without dedicated storage or virtual desktop administrators to monitor individual desktop and system health, troubleshoot issues, and assign resources from a single pane of glass.
The VSA is similar to vSphere’s Content Based Read Cache (CBRC) in that it takes advantage of linked clones by caching desktop image blocks to reduce storage I/O while reading View images, Caswell said. The VSA reduces performance bottlenecks and lowers storage costs.
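The read-caching idea described above is easy to sketch. The Python below is an assumed, minimal LRU block cache, not VMware's actual CBRC/VSA implementation: because linked clones share a common base image, many desktops request the same blocks, so repeat reads can be served from memory instead of generating storage I/O.

```python
from collections import OrderedDict

class BlockReadCache:
    """Toy LRU read cache for shared image blocks (illustrative only)."""
    def __init__(self, capacity_blocks: int, backend):
        self.capacity = capacity_blocks
        self.backend = backend          # callable: block_id -> bytes (one storage I/O)
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id: int) -> bytes:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend(block_id)          # only a miss touches storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# 100 linked-clone desktops booting from the same 50 base-image blocks:
cache = BlockReadCache(capacity_blocks=1024, backend=lambda b: b"<%d>" % b)
for desktop in range(100):
    for block in range(50):
        cache.read(block)
assert cache.misses == 50    # 50 storage reads instead of 5,000
```

The same pattern is why a read cache is so effective against VDI "boot storms": every desktop after the first hits warm cache.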
Pivot3 is targeting state and local governments, education and remote offices and branch offices (ROBOs) of larger firms. The main driver for these markets, particularly education, is the increased use of personal computing devices in the classroom and business by end users, the so-called bring-your-own-device (BYOD) trend.
“Users are expecting to use their own PCs,” Caswell said. That includes laptops, smartphones, and tablet computers. The IT administrators Caswell has talked with are shifting their focus away from standard desktop computing devices. “Rather than have to go and invest money in the end points, [administrators are] investing money in providing more intelligent centralized classroom designs,” Caswell said.
James Bagley, a senior analyst with the analyst firm Storage Strategies Now, believes Pivot3 has positioned itself well with its VDI-in-a-box offering. “They are targeting the right markets,” Bagley said. “Places where you have a medium-sized VDI environment and they’ve got a really easy way to address it.”
Bagley added that the SMB VDI storage market has no clear leaders yet.
“I don’t know of anyone who right now really stands out,” Bagley explained. “All of the [storage] manufacturers right now are working on similar capabilities.”
Pivot3’s vSTAC View 5.1 support is available immediately with pricing starting at $350 per desktop, which includes all licensing fees.
May 6, 2012 8:31 PM
Posted by: Randy Kerns
automated storage management, nand flash, storage system, thin provisioning, wide striping
Storage systems are undergoing important changes. New systems are becoming available that are both sophisticated and make storage “simple.” Simple is mainly a euphemism for automating many complicated tasks that administrators had to deal with before, but there’s a lot more to this than just automation of tasks.
There are modern architectures where the underlying device abstraction or virtualization has been changed to enable advanced features such as:
• allocating capacity only on write operations (thin provisioning)
• distribution of data across devices to maximize the number of possible I/O operations (wide striping)
• applying device protection algorithms such as RAID or Forward Error Correction at the abstracted level
• and other advanced capabilities
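The first two ideas in this list can be sketched in a few lines of code. This is an illustrative toy in Python, not any vendor's implementation: capacity is consumed only when a block is first written, and a block's backing device is chosen so I/O spreads across all devices.

```python
class ThinVolume:
    """Toy thin-provisioned volume: advertises a large logical size but
    backs a block with real capacity only when that block is written."""
    BLOCK = 4096

    def __init__(self, logical_size: int):
        self.logical_size = logical_size
        self.blocks = {}                 # logical block number -> bytes

    def write_block(self, blkno: int, data: bytes):
        self.blocks[blkno] = data        # capacity is consumed only now

    def read_block(self, blkno: int) -> bytes:
        # Unwritten blocks read back as zeros; nothing was ever allocated.
        return self.blocks.get(blkno, b"\x00" * self.BLOCK)

    @property
    def allocated(self) -> int:
        return len(self.blocks) * self.BLOCK

# Wide striping: pick the backing device round-robin by block number so
# I/O for one volume is spread across every device in the pool.
def device_for(blkno: int, n_devices: int) -> int:
    return blkno % n_devices

vol = ThinVolume(logical_size=1 << 40)   # advertises 1 TB up front
vol.write_block(7, b"x" * 4096)          # first write allocates the block
assert vol.allocated == 4096             # only 4 KB actually consumed
```

Real systems layer protection (RAID or forward error correction) beneath this same abstraction, which is what allows those features to be applied at the virtualized level rather than per physical device.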
I wrote about some of these architectural changes here.
Other updates have changed the way storage is configured. For advanced systems, element managers are made simpler by automating underlying actions. And the tuning that was once a cross between tribal knowledge and specialist training is now built into these systems.
Another ongoing change is the elimination of electro-mechanical devices for storage. The current trend is toward NAND flash used in solid-state drives (SSDs). These devices consume less power and offer greater performance and potentially longer lifespans than disk technology. Currently undergoing a rapid price decline, flash and the solid-state technology to follow will become the foundation of modern storage devices.
To use an automobile analogy, storage systems have moved from a relatively primitive state to a modern system that makes it seem simple. Automobiles that used to require a crank start, manual adjustment of the spark advance, and points changes every 10,000 miles are inconceivable to most of today’s drivers. How many car owners today know what a manual choke is?
Storage systems are making that same type of modernization transition. We’re at an inflection point for storage as we move to a modern generation of systems.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
May 3, 2012 8:03 AM
Posted by: Dave Raffo
cloud storage provider
Cloud storage provider Nirvanix completed a $25 million Series C funding round today with plans to build out a new engineering center in Colorado, and move toward an IPO.
The round brings Nirvanix’s total funding to $70 million. CEO Scott Genereux said the company is expanding its “Cloud Competency Center” in Boulder under its new VP of cloud storage engineering Dave Barr, who previously led engineering for LeftHand Storage iSCSI SANs at Hewlett-Packard. Nirvanix is also keeping its San Diego engineering team.
“Over the next six to 12 months you can expect to see our engineering team deliver innovation aimed at addressing petabyte-scale clouds,” Genereux said. He said Nirvanix will also likely increase its data centers from eight to at least 10, with additions in Europe, Asia and perhaps South America.
Nirvanix is trying to establish itself as the leading enterprise cloud storage provider, with the help of an OEM deal with IBM Global Services signed last October. Nirvanix is managing a 9 PB digital repository cloud at USC, and Genereux said the provider will soon announce another customer with even more data in the cloud. He said Nirvanix has signed 30 customers already through IBM.
“Customers are frustrated with the same old story: buy a box, hope you have enough capacity, manage it the same way and do a tech refresh in three years,” he said. “It’s a circle that box vendors have formed for them.”
Genereux said Nirvanix competes more with the “box vendors” than with other cloud services. He said EMC is the most frequent competitor, but most of the data Nirvanix moves to the cloud is coming off NetApp storage. “EMC might be losing against us when we’re bidding, but NetApp is losing the footprint,” he said.
Genereux said he hopes Nirvanix can reach profitability without another funding round. “We’re marching towards an IPO,” he said. “Our business strategy is based on taking the company to the public markets over the next few years.”
Khosla Ventures led the funding round, with previous investors Valhalla Partners, Intel Capital, Mission Ventures and Windward Ventures participating. Khosla general partner David Weiden joins the Nirvanix board.
May 1, 2012 9:54 AM
Posted by: Dave Raffo
cache area network, project thunder
Move over SAN, WAN, LAN and MAN. EMC is pushing the notion of a CAN – cache area network – with its upcoming Project Thunder product.
During the Solid State Storage Symposium last week in San Jose, Calif., Brian Sorby, an EMC business development director, provided more details on the Thunder product for analysts and bloggers. EMC first disclosed Thunder when it officially launched its VFCache – formerly Project Lightning – in February. VFCache is a PCIe flash card that goes inside a server. Thunder will expand that by using PCIe flash in an appliance.
Sorby said Thunder would consist of a high-speed front end (InfiniBand or 40-gigabit Ethernet), a lightweight operating system and VFCache cards inside a 2U or 4U appliance.
He said that while VFCache lets end users accelerate LUNs on their SANs, it is limited because “putting these cards into every server in an enterprise is a tedious process.” That’s where Thunder comes in.
“Thunder is a perfect complement to blade servers and rack servers that can’t be messed with,” Sorby said. “It’s a cache area network to bring this above a storage area network to today’s 21st century storage bottleneck elimination. Our new term or new vision is to bring SSDs out of the array, and also to bring the SSDs out of having to buy an entirely different type of product just to get access to it, also to alleviate the problem of having to put a PCIe card in every server in your environment to take advantage of some high I/O feature. This is establishing our cache area network. It’s the next logical step, and the direction EMC is pointed in today.”
EMC is certain to provide more details of Project Thunder at EMC World later this month, and may even officially launch the product.
April 30, 2012 11:36 AM
Posted by: Randy Kerns
storage products
Last week two major storage vendors made significant system announcements. Hitachi Data Systems rolled out its Hitachi Unified Storage (HUS) that has block and file support and is aimed at the mid-tier market. NetApp unveiled Dynamic Disk Pooling for the E-Series platforms. Dynamic Disk Pooling is a new storage pooling implementation enabling faster drive rebuilds than with traditional RAID.
I found it interesting that neither of these launches was coordinated with a major storage event. This was a bit unexpected because most major storage announcements come just prior to storage events, either industry-wide events or the vendors’ own shows, so the vendors have the opportunity to speak in depth with the assembled press, analysts and customers about their new products. In these cases, HDS and NetApp decided not to use any of the recent storage events or wait until the next one.
This raises the question of what would be considered a major storage event now.
VMworld, which occurs each August, is probably the biggest show for announcing new storage products from multiple vendors. VMworld is filled with IT professionals involved with server virtualization. And these pros usually realize that storage systems can make a large difference in the number of virtual machines supported per physical server and ultimately determine the success of server virtualization.
The National Association of Broadcasters (NAB) show held this month in Las Vegas has become another storage showcase event. There were more than 20 storage product announcements at this year’s NAB. But the storage systems at NAB have different usage characteristics than traditional IT.
There are also many storage announcements at the Supercomputing conference, with the next (SC12) scheduled for November. Supercomputing systems are focused on the high-performance computing market.
Both NAB and SC have large numbers of attendees representing many different interest areas in their industry. NAB drew more than 100,000 attendees this year. VMworld is increasing yearly in attendance with the focus more from traditional IT than as a specialty vertical. Years ago the major announcement venues for storage were Comdex and CeBit.
Over the next two months, Symantec, EMC, Hewlett-Packard and Dell will all host their own conferences and launch products there instead of at industry-wide shows.
This means there is no longer a handful of major shows we can look to for storage product news. Announcements can come from the remaining industry storage shows such as Storage Decisions or Storage Networking World (SNW), more targeted shows such as NAB and Supercomputing, vendor-sponsored shows, or independent of shows entirely. You can’t stay up to speed by going to one or two shows a year anymore. Reading the coverage from SearchStorage and the other TechTarget storage sites is probably the best way to keep up with storage announcements.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
April 27, 2012 9:55 AM
Posted by: Dave Raffo
dedupe for flash, flash storage arrays, primary deduplication
It’s been a slow news year for data deduplication. The data reduction technology has yet to make its big splash for primary storage and is taken for granted for backup. But things picked up this week as EMC Data Domain, FalconStor, Hitachi Data Systems and Permabit all either expanded their dedupe products or talked about their plans.
Permabit aims dedupe software at flash arrays
With the adoption rate of dedupe for primary storage slower than anticipated, Permabit this week unveiled Albireo for Flash Technologies, which is really a flashy way of saying it supports solid-state storage with its Albireo Software Development Kit (SDK) and Virtual Data Optimizer (VDO) for Linux.
Permabit does not sell Albireo software directly, but makes its SDK and VDO available for OEM partners.
Permabit founder and CTO Jered Floyd said primary dedupe adoption is slow because the large established storage vendors resist the notion of cutting into disk sales by shrinking data. (The large vendors dispute this, and all have, or are working on, some type of dedupe for primary data.) Floyd maintains the benefits of and need for primary dedupe are greater for flash than for disk arrays, and the startups selling flash systems are more open to incorporating dedupe.
“We believe dedupe will be a basic required feature for any flash platform,” he said. “Permabit makes it so these companies building new flash platforms can easily and rapidly integrate dedupe.”
Does dedupe have to be different for data on flash than on hard disk? Floyd said the benefits and challenges of dedupe on flash go beyond those of dedupe on hard drives. He said dedupe can not only significantly lower the cost per gigabyte of flash but also improve latency and reliability and reduce wear by cutting the number of writes on a system. Floyd claims Albireo can meet the high demands of flash by handling more than 250,000 IOPS on a single core processor.
Permabit CEO Tom Cook said “a handful” of flash vendors are involved in the early access program for Albireo and he has commitments from a few. He expects to announce deals in the second half of the year.
It will be interesting to see who signs up for Albireo. All-flash startups such as Nimbus Data, GreenBytes, Pure Storage, SolidFire, and XtremIO have dedupe or are promising it for when they begin shipping. Does that mean the market for Albireo is smaller than Permabit anticipates?
“It would be a mistake to assume we’re not working with vendors who have announced dedupe but have not yet delivered,” Floyd said. “Not having dedupe in a flash storage system is going to be a huge liability.”
HDS prepares primary dedupe appliance
Hitachi Data Systems is planning primary data reduction for its newly released Hitachi Unified Storage, as well as a deduplication appliance, according to Fred Oh, HDS’ senior product marketing manager for NAS. He said data reduction for the file portion of the HUS will be available this year and the appliance is expected in the summer. Oh wouldn’t say if HDS is using technology from Permabit, which had an OEM deal with NAS vendor BlueArc before HDS acquired BlueArc.
FalconStor provides inline dedupe option
FalconStor added inline dedupe to its virtual tape library (VTL) product, FalconStor VTL 7.5. FalconStor now supports inline, concurrent and post-processing dedupe as well as its Turbo dedupe option for post-processing.
In the early days of dedupe, the inline versus post-process issue was hotly debated. Inline requires less disk capacity on the back end because it reduces data before moving it to the backup target. Post-processing dedupes at the target, so it requires more capacity but is usually the faster method. Faster processors have alleviated inline dedupe speed concerns, and some of the early post-processing advocates have added an inline option or switched from post-processing to inline.
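The trade-off can be sketched in a few lines of Python. This is a toy illustration of the two approaches using content hashing, not FalconStor's implementation: inline dedupe fingerprints each chunk before it lands so duplicates never consume backup-target capacity, while post-processing lands everything full-size and collapses duplicates afterward.

```python
import hashlib

def inline_dedupe_write(chunk: bytes, store: dict) -> str:
    """Inline: hash the chunk before it hits the backup target; store the
    bytes only if that fingerprint has never been seen."""
    fp = hashlib.sha256(chunk).hexdigest()
    if fp not in store:
        store[fp] = chunk               # first copy lands on disk
    return fp                           # duplicates cost only a reference

def post_process_dedupe(landed: list) -> dict:
    """Post-process: all chunks were already written full-size (higher peak
    capacity); a background pass then collapses duplicates."""
    store = {}
    for chunk in landed:
        store.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)
    return store

backup = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # one duplicate chunk
inline_store = {}
refs = [inline_dedupe_write(c, inline_store) for c in backup]
assert len(inline_store) == 2    # duplicate never written to the target
assert refs[0] == refs[2]        # both copies resolve to the same chunk
```

The hashing cost on every write is why inline dedupe historically lagged on ingest speed, and why faster processors have narrowed the gap.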
FalconStor claims its dedupe options are the most flexible.
“We added inline dedupe as a fourth choice,” said Darrell Riddle, FalconStor senior director of product marketing. “We see it as a good fit for smaller systems or systems that need more power up front.”
For a four-node VTL cluster, FalconStor claims its inline dedupe can handle more than 28 TB per hour and post-processing dedupe can back up more than 40 TB per hour.
FalconStor’s concurrent dedupe runs post-process, but does not wait until all backups are completed before deduping on the back end. Riddle said FalconStor VTL customers can also turn off dedupe if they have little or no compressible data.
FalconStor VTL 7.5 software costs from $2,500 to $4,500 per terabyte under management, depending on the configuration.
EMC gives Oracle RMAN a DD Boost
EMC extended its deduplication-accelerating DD Boost software to Oracle RMAN, an application known for getting poor deduplication ratios.
April 26, 2012 9:09 AM
Posted by: Dave Raffo
google drive, IT consumerization, online file sharing
Just because Google Drive is aimed at SMBs and consumers doesn’t mean the cloud storage service will have no impact on enterprises.
Google Drive will almost certainly add to the consumerization of IT that Randy Kerns recently wrote about because it will expand the number of users functioning as their own storage administrators. And the attention it has already sparked will make it more likely that most businesses will at least consider using the cloud for some of their file storage and data protection.
“On the face of it, this topic does not appear to concern the corporate IT manager or CIO, but chances are employees will start using this service to do more than share family photos and recipes,” Ovum principal analyst Richard Edwards wrote in an e-mail about Google Drive’s impact on the enterprise. “Corporate email systems are notorious for their measly storage quotas and message attachment size limitations, and so the sharing and distribution of large corporate files, such as PowerPoint presentations, engineering drawings, and creative content are an obvious use case for Google Drive.”
Edwards said Ovum recommends what he calls “business-grade” cloud collaboration services such as Box and Huddle because of their superior feature management and administration capabilities. Google Drive is seen as a prime competitor to these services as well as other popular file sharing clouds from Citrix, Dropbox, Egnyte, Nomadix, SpiderOak, SugarSync and Syncplicity.
Andres Rodriguez, CEO of cloud NAS vendor Nasuni, said Google Drive can go beyond the file sharing services already on the market because it controls the application stack and a mobile operating system. And while he doesn’t see Google Drive as a competitor to enterprise storage vendors, he does warn that enterprise vendors need to address data on mobile devices in a hurry.
“File storage and synchronization engines are changing storage as we know it,” Rodriguez wrote in an email. “Any large storage vendor that isn’t thinking about how to extend its current data center offerings to mobile is going to be unpleasantly surprised in the next 24 months as more workers shift to accessing data from tablets and smart phones. The pressure on IT is already intense. The control points for much corporate storage today are the Domain Controller (DC) and the CIFS protocol. No one wants to re-architect access control because of mobile users. What we need to figure out is how to extend the access control model we have today to include the new platforms.”
Ranajit Nevatia, VP of marketing for Nasuni rival Panzura, said Google Drive is a long way from becoming an enterprise service because adding features such as global namespace, file locking and enterprise encryption is “damn hard.” He said there is a big difference between file sharing and project sharing, which is what enterprise storage must support.
“Google Drive, Box, Dropbox, iDrive, these are becoming a dime a dozen now,” Nevatia said. “Everybody’s coming up with file sharing with free amounts of storage associated with them. When you look at the target market and use cases they’re going after, it’s not overlapping with what we’re doing. It will put pressure on consumer level file sharing services, but it’s not meant for large enterprises. Our customers collaborate on projects like architectural engineer design or handle large amounts of research data. We’re not talking about two gigabytes or five gigabytes. We’re talking terabytes of data.”
Tom Gelson, Imation’s director of business development and its cloud strategist, said he has mixed feelings about Google’s entry into cloud storage. Imation’s data protection appliances are used by cloud providers, and Gelson said the vendor plans on launching its own cloud service. As an SMB vendor, that would make it a Google competitor. But Gelson agrees with Nevatia about the need for security in the cloud.
“Google rubber stamps cloud backup, because everybody knows Google,” he said. “It’s exciting, but we’re all concerned. Imation is focused on SMBs and if you talk to an SMB IT director, the biggest concern is security. That’s Imation’s biggest focus. We want to make sure data is secure once it sits on the cloud.”
Gelson pointed out that Imation acquired three security companies in 2011: Encryptx, MXI and IronKey. He said Imation encrypts data in flight to the cloud, and also encrypts data on its RDX removable hard drive media.
Ethan Oberman, CEO of online file sharing company SpiderOak, brings up another potential sore spot for Google – privacy. Oberman wonders if Google will try to integrate Google Drive with Google Plus and if it will record users’ activities.
“Google has definitely been one of the more innovative companies since its inception, so the market will have high expectations for how Google Drive might change the way we work within the cloud,” Oberman wrote in an e-mail statement. “There is obviously a very fine line between harvesting consumer data across Google platforms for a ‘richer experience’ versus the potential reality that every step we take on Google’s turf is recorded and analyzed. How Google addresses the 800-pound gorilla knocking on the door – privacy – will define how the company is widely perceived by the public. Google Drive will be a key part of this test.”