CDMI is a Storage Networking Industry Association (SNIA) standard for creating and managing data in the cloud, which is not exactly awash in standards. As explained here, CDMI lets users tag data with metadata that tells a cloud provider the services it should provide for that data. It defines the interface that applications use to create, retrieve, update and delete data elements in the cloud. If the standard becomes widely implemented among cloud providers, it will make it easier for organizations to move data between clouds.
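To make the interface concrete, CDMI operations are plain HTTP requests carrying JSON bodies. The following Python sketch is an illustration only, not NetApp's implementation; the endpoint, container, object name and metadata key are hypothetical, and the exact headers and metadata names should be checked against the SNIA specification.

    import json
    import requests

    # Hypothetical CDMI endpoint and object path, for illustration only.
    BASE = "https://cloud.example.com/cdmi"
    HEADERS = {
        "X-CDMI-Specification-Version": "1.0.2",
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }

    # Create (or update) a data object, tagging it with metadata that asks the
    # provider for a service level -- here, how many redundant copies to keep.
    body = {
        "mimetype": "text/plain",
        "metadata": {"cdmi_data_redundancy": "3"},
        "value": "hello, cloud",
    }
    requests.put(BASE + "/reports/hello.txt", data=json.dumps(body),
                 headers=HEADERS).raise_for_status()

    # Retrieve the object; the JSON response carries the value and metadata.
    obj = requests.get(BASE + "/reports/hello.txt", headers=HEADERS).json()
    print(obj["metadata"])

    # Delete the object.
    requests.delete(BASE + "/reports/hello.txt", headers=HEADERS)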
NetApp acquired Bycast for the StorageGRID software in 2010, and this is its first major release since then. StorageGRID is still used largely by healthcare organizations as a repository for patient records. NetApp packages StorageGRID with its E-Series arrays as its Distributed Content Repository storage system.
Richard Treadway, NetApp’s director of marketing for big data solutions, said CDMI support will enable developers to distribute content repositories without using proprietary APIs.
NetApp’s CDMI support isn’t a big deal yet, because no cloud storage providers or other major storage vendors have adopted the standard. It will require industry buy-in to become valuable. And standards often take a painfully long time to gain traction in storage.
“Is this another standard that no one’s going to pick up?” Treadway asked. “We believe it will become the standard for moving data in and out of the private cloud or public cloud. We think it also will be the standard for moving and accessing all large sets for big data applications.”
Quantum revealed the OEM deal Wednesday and said it will release a new family of disk systems using the object storage technology later this year.
CEO Jon Gacek said Amplidata will eventually become part of Quantum’s cloud architecture, but a “big data” appliance will be Quantum’s first product using the technology. That product will incorporate object storage as a tier on a device running StorNext. Quantum is targeting petabyte-scale content and data analytics with the product.
“The first incarnation will show up as a tier underneath StorNext,” Gacek said. “Some customers will use it with tape, and some will use it to replace tape.”
Gacek said Quantum looked at several object-storage vendors but picked Amplidata because of its performance and the way its BitSpread erasure coding algorithm disperses data to guarantee accessibility.
“Amplidata’s performance is very strong,” Gacek said. “More important to me is Amplidata’s ability to do bit spreading to protect data and expand that to geospreading data, as opposed to doing RAID and replication. That really lowers the cost of archiving.”
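BitSpread itself is proprietary, but the general idea behind erasure-coded dispersal can be sketched generically: break an object into fragments, add redundant fragments, spread them across nodes, and rebuild anything that is lost from the survivors. The toy Python below uses a single XOR parity fragment and therefore tolerates one lost fragment; production systems use stronger codes that survive several simultaneous node or drive failures.

    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(data, k):
        """Split data into k fragments and append one XOR parity fragment."""
        if len(data) % k:
            data += b"\x00" * (k - len(data) % k)   # pad to a multiple of k
        size = len(data) // k
        frags = [data[i * size:(i + 1) * size] for i in range(k)]
        return frags + [reduce(xor, frags)]

    def rebuild(frags):
        """Recover a single missing fragment (None) from the survivors."""
        return reduce(xor, (f for f in frags if f is not None))

    fragments = encode(b"object payload goes here", k=4)   # 5 fragments, 5 nodes
    fragments[2] = None                                    # simulate a lost node
    fragments[2] = rebuild(fragments)                      # data is still intact
    assert b"".join(fragments[:4]) == b"object payload goes here"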
The AmpliStor XT Storage System now supports 3 TB SATA drives in its new AS30 module, allowing it to hold 30 TB in a 1U box and scale to 1.2 PB in a rack with 40 modules. The AS30 will eventually replace Amplidata’s AS20, which uses 2 TB drives and holds 20 TB in one appliance.
Amplidata claims the AS30 uses about 30 percent less power than the AS20, requiring 2.2 watts per terabyte when idle and 3.3 watts per terabyte when in use. That’s about the same power as a 60-watt light bulb for the entire 30 TB module.
“The really big thing is the power consumption is just over 65 watts when powered on and idle with no disk activity,” said Paul Speciale, Amplidata’s VP of products. “When there is activity, it consumes 3.3 watts per terabyte. But even with the low power, these systems can do tens of gigabytes per second per system, so you are not giving up on performance.”
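For reference, the per-terabyte figures quoted above translate as follows for a fully populated 30 TB AS30 module (the multiplication is ours, the inputs are the vendor's):

    capacity_tb = 30
    idle_watts = 2.2 * capacity_tb     # = 66 W idle, i.e. "just over 65 watts"
    active_watts = 3.3 * capacity_tb   # = 99 W with disk activity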
Amplidata’s storage platform is designed for cloud archiving of media and entertainment files, and “big data” file storage. Amplidata sees the media and entertainment industry as a key target for the larger drives.
The vendor improved its BitSpread erasure coding software and data management with its latest AmpliStor XT software released earlier this month.
Randy Kerns, senior strategist at Evaluator Group, said erasure code-based technology becomes more important with higher capacity drives because there is a greater probability of drive failures in the larger drives.
“As you get to higher capacity drives, you have a greater exposure to a second drive failure and rebuild times are longer,” Kerns said. “With that exposure, the probability goes up. Two terabyte drives typically take eight hours to rebuild in a normal system, so it becomes more important when you go to three or four terabyte drives in a multi-petabyte system because you have a higher probability of a problem happening. Media and entertainment is very sensitive to these issues and Amplidata is targeting that market.”
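As a rough illustration of why larger drives stretch the exposure window, rebuild time grows roughly linearly with capacity when the effective rewrite rate stays fixed. The 70 MB/s rate below is an assumption chosen to match the eight-hours-for-2-TB figure Kerns cites; real rebuild rates vary widely with workload and RAID implementation.

    def rebuild_hours(capacity_tb, effective_mb_per_s=70):
        """Estimate hours to rewrite a replacement drive of the given size."""
        return capacity_tb * 1e6 / effective_mb_per_s / 3600

    for tb in (2, 3, 4):
        print(f"{tb} TB drive: ~{rebuild_hours(tb):.0f} hours")   # ~8, ~12, ~16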
Amplidata’s AS30 has a starting price of under $0.60 per gigabyte.
According to the vendor, Riak CS lets customers store and retrieve content up to 5 GB per object, is compatible with the Amazon S3 API, has multi-tenancy features, and reports on per-tenant usage data and statistics on network I/O. Pricing for Riak CS starts at $10,000 per hardware node, which comes to about 40 cents per GB for a 24 TB node.
Riak CS is Basho’s second software application. Its Riak NoSQL database is based on principles outlined in the 2007 Amazon Dynamo white paper. While Riak is an open source application, Riak CS is not. Basho added multi-tenancy, S3 API compatibility, large object support, and per-tenant usage, billing and metering to Riak CS to make it a cloud application.
“We look at ourselves as an arms dealer of Amazon principles [outlined in the 2007 Amazon Dynamo white paper],” Basho CMO Bobby Patrick said. “Riak CS is for large service providers looking for scalability and tenancy, and also large companies that want S3 without AWS [Amazon Web Services]. This is S3-compatible, but for a private cloud.”
He said several large multinational companies are evaluating Riak CS as a method of keeping important data in-house behind a firewall.
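Because Riak CS speaks the S3 API, an existing S3 client can simply be pointed at a private endpoint instead of AWS. The sketch below uses the boto3 library as one such client; the endpoint URL, credentials, bucket and key are placeholders for illustration.

    import boto3

    # Point a standard S3 client at a private, S3-compatible endpoint.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.internal.example.com",
        aws_access_key_id="LOCAL_ACCESS_KEY",
        aws_secret_access_key="LOCAL_SECRET_KEY",
    )

    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="reports/q3.csv", Body=b"col1,col2\n1,2\n")
    print(s3.get_object(Bucket="backups", Key="reports/q3.csv")["Body"].read())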
Riak CS is built to run on commodity hardware. Patrick said it will compete mainly with OpenStack Swift object storage, but it will also run into competition from EMC’s Atmos and software from smaller vendors such as Scality’s Ring and Gemini Mobile’s Cloudian.
“Any hosting company, any telecom company, any infrastructure-as-a-service company, is going to have to evolve from expensive shared storage to cloud storage for economic scale benefits,” Patrick said. “A new architecture is needed for that. They need to do it on cheap commodity hardware and in a way they can manage it.”
New investor Intel Capital joins previous Amplidata investors Swisscom Ventures, Big Bang Ventures and Endeavour Vision to bring the vendor’s total funding to $14 million. Amplidata CEO Wim De Wispelaere said the funding will be used to beef up sales and marketing for the AmpliStor Optimized Object Storage system that has been making its way into cloud and “big data” implementations.
The vendor’s headquarters are in Belgium and it also has a Redwood City, Calif. office. Most of its early customers are in Europe, so you can expect to see a big marketing push in the U.S. now.
Object storage is considered one of the hottest emerging technologies and is used for dealing with large data stores. AmpliStor features an erasure coding technology called BitSpread to store data redundantly across a large number of disks, and its BitDynamics technology handles data integrity verification, self-monitoring and automatic data healing.
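BitDynamics is proprietary, but background integrity checking in object stores generally follows a pattern like this hypothetical sketch: record a digest for every stored object, periodically re-read and re-hash the data, and repair anything that no longer matches from a redundant copy or from surviving fragments.

    import hashlib

    catalog = {}   # object_id -> expected SHA-256 digest
    store = {}     # object_id -> bytes currently held on disk

    def put(object_id, data):
        catalog[object_id] = hashlib.sha256(data).hexdigest()
        store[object_id] = data

    def scrub(repair_source):
        """Verify every object; rebuild corrupted ones from redundant data."""
        for object_id, expected in catalog.items():
            if hashlib.sha256(store[object_id]).hexdigest() != expected:
                store[object_id] = repair_source(object_id)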
De Wispelaere said Amplidata’s customers generally fall into two use case categories that require scalable storage. “The first use case is what we call online applications,” he said. “Customers have written their own application and need to scale out their storage to store photos, videos or files. Another big market is media and entertainment. We’re used as a nearline archive for a postproduction system, so data is readily available whenever it’s needed.”
Amplidata faces stiff competition in both areas. For the cloud, it’s going against startups such as Scality, Cleversafe and Mezeo as well as established players Hitachi Data Systems HCAP, EMC Atmos and Dell DX, and API-based service providers such as Rackspace OpenStack and Amazon S3. In scale-out storage, its competition includes EMC Isilon, HDS BlueArc, and NetApp.
Amplidata is part of Intel’s Cloud Builders alliance, and last fall demonstrated its system at the Intel Developer Forum. That relationship, and Intel’s investment, should ensure that Amplidata will be kept current on the Intel roadmap.
It’s possible that Amplidata is benefitting from its relationship with Swisscom as well. Swisscom offers cloud services, but De Wispelaere could not say if it uses Amplidata storage. “I have a strict NDA with Swisscom,” he said.
HDS brought out the Hitachi Data Ingestor (HDI) caching appliance a year ago, calling it an “on-ramp to the cloud” for use with its Hitachi Content Platform (HCP) object storage system. Today it added content sharing, file restore and NAS migration capabilities to the appliance.
Content sharing lets customers in remote offices share data across a network of HDI systems, as all of the systems can read from a single HCP namespace. File restore lets users retrieve previous versions of files and deleted files, and the NAS migration lets customers move data from NetApp NAS filers and Windows servers to HDI.
These aren’t the first changes HDS has made to HDI since it hit the market. Earlier this year HDS added a virtual appliance and a single node version (the original HDI was only available in clusters) for customers not interested in high availability.
None of these changes are revolutionary, but HDS cloud product marketing manager Tanya Loughlin said the idea is to add features that match the customers’ stages of cloud readiness.
“We have customers bursting at the seams with data, trying to manage all this stuff,” she said. “There is a lot of interest in modernizing the way they deliver IT, whether it’s deployed in a straight definition of a cloud with a consumption-based model or deployed in-house. Customers want to make sure what they buy today is cloud-ready. We’re bringing this to market as a cloud-at-your-own-pace.”
The DX6000G SCN is an appliance based on the Dell PowerEdge R410 server that connects to its DX6000 object storage nodes. Dell director of DX product marketing Brandon Canady said the compression appliance can reduce data by 90%, depending on file types. Although Ocarina technology can dedupe or compress files, the object storage appliance will only use compression. It has two modes: Fast Compression mode is optimized for performance and Best Compression mode is optimized for capacity reduction. Customers can choose one or both modes.
Canady said customers can set policies to use fast compression when data is first brought onto the storage system and then switch to the best compression after a pre-configured time period. The appliance uses different compression algorithms depending on file type.
“It’s like applying tiered intelligent compression,” Canady said. “Because we maintain metadata with the file inside of the storage device, we can employ algorithmic policies as part of the lifecycle management of content.”
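The per-file-type algorithms Ocarina applies are not public, but the fast-versus-best trade-off itself is familiar from general-purpose codecs: a low compression level favors ingest speed, a high level favors capacity reduction. A minimal, generic illustration using zlib rather than Dell's actual software:

    import zlib

    data = b"2012-07-09 12:00:01 INFO request served in 12ms\n" * 50_000

    fast = zlib.compress(data, 1)   # "fast" mode: quicker, larger output
    best = zlib.compress(data, 9)   # "best" mode: slower, smaller output

    print(f"original {len(data)}, fast {len(fast)}, best {len(best)} bytes")

A policy like the one Canady describes would write with the fast setting at ingest and recompress with the best setting once the data ages past the configured threshold.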
List price for the DX6000G SCN will begin at about $25,000, depending on the amount of data ingested. The appliance will become generally available next week.
Dell plans to incorporate Ocarina’s compression and deduplication across its storage systems, with more reduction products expected early next year. Canady said the performance and compression modes will likely show up in all of the data reduction appliances.
“Each implementation is likely to be slightly different, but we see value in having a performance approach and a capacity approach,” he said.
The report, “Prepare for Object Storage in the Enterprise,” defines object storage as “Storage of data that is broken into distinct segments, each containing a unique identifier that allows for retrieval and integrity verification of the data.”
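One common, though not universal, way a unique identifier supports both retrieval and integrity verification is to derive it from a cryptographic hash of the object's content, as in this illustrative sketch:

    import hashlib

    store = {}

    def put(data):
        object_id = hashlib.sha256(data).hexdigest()   # ID derived from content
        store[object_id] = data
        return object_id

    def get(object_id):
        data = store[object_id]
        # The ID doubles as a checksum: recompute and compare on every read.
        assert hashlib.sha256(data).hexdigest() == object_id
        return data

    oid = put(b"scanned-claim-form-0001")
    assert get(oid) == b"scanned-claim-form-0001"

It also hints at the editing drawback discussed below: change the content and you have, in effect, a new object.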
The report isn’t anti-object storage. It points out object storage systems’ value in the areas of massive scalability, greater custom control over data, the ability to reduce management and hardware costs, and WORM and shared tenancy features. It recommends object storage for certain workloads, but it also looks at the downsides that need to be considered before adopting an object storage system.
“Poor performance, high data change rates, capacity sprawl, and lack of standards will prevent object storage from becoming ubiquitous, but it has the potential to significantly improve storage economics, ease of use, and control when mapped to the right workloads,” wrote the report’s lead author Andrew Reichman.
The right workloads, according to the report, include archiving, cloud storage, Web 2.0 and imaging applications. In other words, you shouldn’t be using object storage systems for databases.
As for drawbacks, Reichman writes that object storage focuses on data movement, high scalability and automation but not performance. “Performance, measured in inputs/outputs per second (IOPS), is not the strong suit,” he writes. “Put simply, object storage is just not designed as a replacement for SAN storage, deployed where high transactional performance is paramount.”
He adds that object storage’s use of unique identifiers is “not the most efficient design for data that gets edited frequently.” So while it’s good for picture or audio files that usually aren’t modified after creation, it’s not so good for databases and collaborative files.
And while file system vendors follow consistent standards and formats, there is no standard for object APIs. “In the end, the benefits of objects may take a back seat to the consistency and familiarity of files, unless the industry can get together on standardization,” Reichman wrote.
Bycast’s StorageGRID software is used mostly in medical archiving products through OEM deals with Hewlett-Packard and IBM. Bycast added support for clustered NAS, security partitions, chargeback and virtual servers to its basic archiving features, and began marketing StorageGRID as a building block for private and public clouds. The acquisition lets NetApp compete with EMC’s Atmos object storage system designed for internal clouds, as well as provide storage for the growing medical archiving market.
NetApp’s press release today cited Bycast’s value for collaborative projects, specifically in vertical markets such as healthcare, cloud service providers, digital media, and Web 2.0 companies.
“Bycast extends our unified storage strategy and enhances our solution for shared storage infrastructure by adding new capabilities for global data access and mobility,” Manish Goel, NetApp executive vice president of product operations, said in the release. “The addition of Bycast’s products enables NetApp to offer our enterprise customers and service provider partners a complementary solution that enables them to efficiently build and manage a very large-scale global repository of data central to many IT-as-a-service offerings.”
It’s unclear if StorageGRID capabilities will be built into NetApp’s core storage systems or will be sold as a separate product, but the reference to extending the vendor’s unified strategy seems to indicate some integration. It also remains to be seen if NetApp will continue Bycast’s OEM deals. IBM is a close NetApp OEM partner, but HP is one of NetApp’s biggest competitors.
NetApp isn’t saying how much it paid for Bycast, but says it was a cash transaction.
Check out our SearchStorage story for more on this acquisition.