Object storage is a method of storing information that differs from the familiar file storage and venerable block storage that dominate IT. It is a type of storage in which information and metadata are both stored, although the metadata may be kept with the actual information or separately.
We often see new object storage products these days with slightly different implementations. While many of these new object storage offerings are designed to solve specific problems for customers, all have the opportunity to be used across many different applications and environments.
The object storage of today is different from what some may have been familiar with in the past. Previously, a content address was used to identify data put into a storage system such as the EMC Centera. The new object storage, for the most part, stores files with associated metadata, frequently using HTTP and REST. The metadata can vary depending on the implementation, application, or system, and contains information such as data protection requirements, authorizations and access controls, retention periods, and regulatory controls.
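The pairing of data with metadata described above can be illustrated with a minimal sketch. The class and field names here are hypothetical, and real object stores expose this model over an HTTP/REST API rather than as an in-process dictionary:

```python
import hashlib

class ObjectStore:
    """Toy object store: each object is opaque data plus a metadata dict."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Metadata (retention, access controls, etc.) travels with the data.
        self._objects[key] = {
            "data": data,
            "metadata": dict(metadata or {}),
            "etag": hashlib.md5(data).hexdigest(),  # integrity check, S3-style
        }

    def get(self, key):
        obj = self._objects[key]
        return obj["data"], obj["metadata"]

store = ObjectStore()
store.put("reports/q4.pdf", b"...binary...",
          metadata={"retention-days": "2555", "classification": "regulated"})
data, meta = store.get("reports/q4.pdf")
```

In a REST implementation, the `put` call above would typically become an HTTP PUT with the metadata carried in request headers.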
New object storage systems address storage challenges, including:
• Massive scaling to support petabytes and even exabytes of capacity with billions of objects.
• High-performance data transfer demands that go beyond the traditional storage systems used in IT today.
• Compliance storage for meeting regulatory controls for data, including security controls.
• Longevity of information storage where data can be stored and automatically transitioned to new technologies transparent to access and operational processes.
• Geographic dispersion of data for multiple site access and protection from disaster.
• Sharing of information on a global scale.
For the vendors offering new object storage systems, success with narrowly targeted usages can eventually spread to opportunities in enterprises. These systems address problems that already exist in the enterprise, but perhaps not yet at the scale that requires object storage.
Some of the vendors offering object storage today include:
• DataDirect Networks Web Object Scaler (WOS)
• HDS Hitachi Content Platform
• Scality Ring Storage
Many of these vendors offer a file interface to their object storage as well as the native object API using HTTP and REST.
The types of object storage are developing so fast that terminology is inconsistent between vendors. I recently attended the Next Generation Object Storage Summit, convened by Greg Duplessie and The ExecEvent. This event was a great opportunity for vendors and analysts to discuss the technology, how to describe it, and the current marketplace. It was clear at the summit that the initial focus for new object storage should be on the problems being solved today, and then on the opportunities to move into more widespread usage.
This will be a developing area in the storage industry and Evaluator Group will develop a matrix to compare the different solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
With major storage vendors in various stages of preparation to launch all-flash arrays in 2013, the startups already selling flash storage are working to stay a step ahead. For some, this means adding storage management and data protection, while others work on making systems redundant and still others try to reduce costs.
Whiptail’s plan for staying ahead of the game is to make its all-flash arrays the most scalable in the market. The startup is preparing to launch its Infinity architecture in the first quarter of next year. Infinity is an expansion of the vendor’s current Invicta platform, except it scales to 30 nodes and 360 TB of flash compared to Invicta’s six nodes and 72 TB.
And that’s just the beginning, says Whiptail CEO Dan Crain. “Our largest tested configuration is 30 nodes,” he said. “We can probably go 10 times that, but we haven’t tested it.”
It’s unlikely that anybody will need – or want to pay for – 3.6 PB of flash in one system for a while, so Whiptail has time to test larger configurations. But Crain said his strategy is to have an architecture in place for his early customers to grow into as flash takes hold.
“Our basic message always has been organized around building a platform that folks can invest in and keep building onto,” he said. “People can take anything they’ve ever bought from us and organize it into Invicta.”
Whiptail claims it has achieved 2.1 million IOPS and 21.8 GB per second of throughput in testing with a 15-node, 180 TB setup, and projects more than 4 million IOPS and 40 GBps with 30 nodes.
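Those projections are consistent with simple linear extrapolation from the measured 15-node numbers. A quick sanity check (an assumed linear-scaling model, not Whiptail's actual methodology):

```python
def project_linear(measured_value, measured_nodes, target_nodes):
    """Extrapolate a cluster-wide metric assuming perfectly linear scaling."""
    return measured_value * target_nodes / measured_nodes

# Measured: 2.1 million IOPS and 21.8 GB/s on 15 nodes.
iops_30 = project_linear(2_100_000, 15, 30)   # 4.2 million IOPS at 30 nodes
gbps_30 = project_linear(21.8, 15, 30)        # 43.6 GB/s at 30 nodes
```

Whiptail's stated figures ("more than 4 million IOPS and 40 GBps") sit slightly below the linear projection, implying some expected scaling overhead.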
Infinity requires several pieces of technology, including version 5.0 of Whiptail’s Racerunner operating system, and enhancements to the array’s silicon storage routers.
Crain said he doesn’t expect flash to take over the storage world overnight. He predicts it will be a gradual process as early customers use it for high-performance applications and eventually move other critical data onto flash.
That’s why he wants to get an early customer base that will grow into Whiptail storage as it supports higher scale.
“We’ve always said we’re going to build into the market,” he said. “We never go out and tell everybody we’re going to take over the world because that’s not rational. Adoption of our technology is in its infancy.”
Crain said Whiptail already does things such as real-time error correction, clustering, auto-failover and asynchronous replication. Deduplication, a potentially key feature for SSD because of its limited capacity, remains a roadmap item.
“Over time we’ll have dedupe,” he said. “We’re very sensitive on performance latency, so we tend not to compete on cost per gig. Dedupe has benefits in general, but it’s still not yet widely deployed on primary storage.”
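Dedupe's appeal for capacity-constrained flash is easiest to see in its simplest form, content hashing: identical blocks are stored once and referenced thereafter. This is a conceptual sketch with hypothetical names, not Whiptail's design:

```python
import hashlib

class DedupeStore:
    """Toy block-level deduplication: store each unique block exactly once."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> block data (unique blocks only)
        self.files = {}    # name -> ordered list of fingerprints ("recipe")

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # duplicate blocks cost nothing
            recipe.append(fp)
        self.files[name] = recipe

    def read(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])

store = DedupeStore(block_size=4)
store.write("a", b"AAAABBBBAAAA")   # blocks: AAAA, BBBB, AAAA
unique_blocks = len(store.blocks)   # the duplicate AAAA is stored once
```

The hash computed on every write is part of the latency cost Crain alludes to, which is one reason a performance-focused vendor might defer the feature.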
Hewlett-Packard has announced a single architecture across storage systems that can span different sizes of enterprises. This is the HP 3PAR StoreServ that now includes the 7000 model to complement the high-end enterprise models currently available.
That gives HP one architecture that covers from the small enterprise through the largest enterprise data center systems.
On the surface, the announcement of the HP 3PAR StoreServ 7000 appears to be a new system for the mid-tier and small enterprise. In reality, it represents a fundamental decision about leveraging investment in an architecture that can scale across multiple market segments and meet market demands such as performance, capacity, resiliency, and advanced operational features at different price points. By leveraging its investment in 3PAR, HP can maximize R&D and support for storage. The 7000 now gives HP broad coverage with a single architecture.
Except for NetApp’s FAS platform, no major storage vendor has one architecture that spans from low-end SAN through the high-end of the enterprise. HP does have other storage platforms, such as the StoreVirtual (formerly LeftHand) and the XP P9500 that is re-branded from Hitachi for mainframe storage, but these fit on the extreme high and low ends. Extending 3PAR’s architecture allows HP to phase out its aging EVA midrange platform.
The advantage of leveraging a single storage architecture seems obvious but runs contrary to the method most vendors use to deliver products to different segments of the market. That’s because they usually gain products through acquisition. That method is expedient but creates independent offerings that require separate (and costly) development and support teams. HP gained 3PAR through acquisition, but the architecture was flexible and scalable enough to address the range of customers from the small enterprise to the data center.
Leveraging one architecture has benefits for both the customer and for the vendor. For the vendor, focusing on one team for R&D and support drives down costs and makes for a simpler sales engagement.
The most important benefit for the customer is a longer product lifespan. With the vendor not having to invest in a diverse set of products, there’s an obvious commitment to the product line. That dramatically reduces customers’ worries that an end-of-life decision will be made based on the economics of investment in that product.
Other benefits include the availability of what may have been considered high-end enterprise features on lower-end systems. For the customer, the continuity of the storage architecture reduces the interruptions that occur when changing processes or moving from one model to another.
A single architecture is part of an evolving landscape for storage. The leverage of hardware technologies and embedded software has been in progress for some time. HP terms that Converged Storage and it is represented in other products included in the major storage announcements beyond the HP 3PAR StoreServ 7000. They include the HP StoreOnce for data protection, HP StoreAll for file and object storage, and HP StoreVirtual for flexible, economic iSCSI storage.
You can expect to see more major vendors going to a single storage architecture with highly leveraged hardware and embedded storage. It makes economic sense for the vendor and the customers. The key is that the architecture must be able to scale to meet the demands in the different usage models for performance, capacity, and price.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
The end of the year is a busy time for storage pros, as salespeople push to meet year-end quotas and IT plans for year-end operations.
Year-end operations can involve beginning the processes required to close the books of a company and many other business tasks. For the storage group, it typically means additional projects that can only be done when user activity is reduced. Most companies limit operations between Christmas and New Year’s, making it an opportune time for storage projects such as:
• Moving data from one storage system to another. This is done for several reasons: balancing workloads for data access to improve storage systems’ performance; balancing capacity to meet expected demands; and utilizing space more efficiently.
• Moving data off storage systems that are due to be retired as they come off maintenance or warranty.
• Deploying new storage systems to meet increased demand for capacity or performance. Deploying new storage is quickly followed by moving data again to distribute it according to application requirements.
• Increasing the size of data stores such as databases based on demands.
• Performing an end-of-year data protection cycle that will retain information based on business governance demands.
These projects are all critical to operations. Storage professionals, conservative by nature, view these as tasks best done when the potential for impact is lowest.
So Happy Holidays to the storage pros (and the other IT guys). The time will be spent either in the data center performing the tasks or at home monitoring and controlling them remotely. The image of the storage guys watching multiple screens for operational status while the holiday parties rage on without them is real, and has been the experience of most of us who have been in the industry for a long time.
So, why are there no Holiday-proof storage systems or data management software in wide usage? These would be ones that can balance data across the different systems (from different vendors). There are some systems and data management software that can balance or migrate data across like systems or a narrow subset of heterogeneous systems. There are even a few products that can work across any storage platform. But, for the most part, IT storage people still schedule these activities for reduced demand times to minimize potential impacts because they have experienced some impacts in the past and that memory was painful.
Some of the new products (software and hardware) seem to be quite good, but such capabilities are limited to a few products at this point. Confidence in using them is built over time, and eventually the automation will seem commonplace rather than something that requires special attention. But for now it’s the type of activity that makes storage pros fear a “Danger, Will Robinson” moment.
This will change eventually, and the confidence will grow for the systems and software that can do these activities across multiple operational environments and not only for very specific usages. Vendors continue to make advances and the successes will allow for greater usage in the non-holiday time. Then maybe the storage guys can attend those parties without having to be on call or check on status.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Seattle-based startup Qumulo closed a whopping $24.5 million Series A funding round last week, without even dropping the F-word (flash) or C-word (cloud) that many startups rely on to woo venture capitalists these days.
The Qumulo press release did delve into data growth and played up the team’s Isilon connection. CEO Peter Godman, CTO Aaron Passey and VP of engineering Neal Fachan helped develop Isilon’s OneFS clustered file system that propelled that company to an IPO in 2006 and a $2.25 billion buyout by EMC in 2010.
Qumulo’s executives left Isilon in between IPO and acquisition. Now Godman says he would like to recreate the Isilon culture, even if he can’t replicate the software because EMC now owns the intellectual property. The Isilon connection helped sway Highland Capital Partners, Madrona Venture Group, and Valhalla Partners to invest in Qumulo’s first round.
“Our Isilon experience was a relevant factor in our fund raising [with the VCs], but Isilon was also an extraordinary event in our lives,” Godman said. “It was a vibrant and unique culture, and I give credit to Isilon founders Sujal [Patel] and Paul [Mikesell] for creating that experience.”
Mikesell is VP of engineering at Clustrix and Patel is training for marathons after leaving EMC last month, but Qumulo is sure to have other former Isilon employees on the team. Godman said he plans to expand Qumulo’s 18-person team by 50 or so, with almost all the hires based in Seattle.
Godman won’t talk about specifics of the product they are developing, but Qumulo’s press release said the startup will solve manageability, scalability and efficiency problems in storage. Those same characteristics apply to Isilon’s OneFS but Qumulo can’t copy that technology.
“We’re respectful of the IP ownership issue,” Godman said. “Everyone who is an engineer has had to deal with that need to stand clear of things you know are incumbent. But the flip side is it’s easiest to avoid infringing on things you know about.”
He said Qumulo will reveal the timeframe for its product next year. He is willing to address more general storage topics, such as how much the underlying technology has changed in the more than a decade since Isilon began developing its clustered file system.
“Object storage is starting to come into its own now, with a lot of vendors and Amazon S3 using it,” he said. “That part has changed a lot. Also, NAND flash is here now. Its rapidly dropping cost and performance characteristics are disrupting storage technologies, so the kind of storage you build for that looks different than the storage you build for hard disk drives. That couples nicely with the emergence of virtualization. Virtual machines place stress on storage that NAND flash is uniquely suited to address in a cost-efficient way.”
Are there any hints there in what Qumulo is doing? We’ll find out in 2013.
It should be apparent now that new technology drives advances of older technology. Storage companies have a lot to lose if their technology is eclipsed by newer technology, so they continue to fiercely market and invest in extending the older technology. Survival is a driving force for a company and for individuals that have invested their careers in aligning with a technology.
The competition, if you can call it that, between a new technology and the previous generational technology is interesting to watch if you are not invested in one or the other. The older technology must be advanced to stay relevant. In the storage industry, this competition usually means increased capacity at a lower cost with better performance.
An example of this is the advent of solid state storage in the form of NAND flash technology as part of a primary storage device. Flash-based storage continues to become less expensive quickly, with capacity increases due to lithography process improvements, better efficiency from added intelligence to manage usage characteristics, and longer lifespan from techniques such as wear leveling that mitigate write amplification. It looks as if the reductions will continue for some time, based on the investments being made in the technology and the number of competitors, but NAND flash itself will be challenged in the future by new generations of solid state technology. Phase Change Memory (PCM) appears to be the next solid state technology with promise for continued advancement, in the 2015-16 time frame.
At the same time, solid state storage advances are pushing hard disk drives — the current generation technology for primary storage. Heat Assisted Magnetic Recording (HAMR) and Shingled Magnetic Recording (SMR) are technologies developed to increase the recording density.
These will address cost issues but may create issues with I/O access density, the number of I/O operations a drive can deliver relative to its capacity. Because the disk drive is an electro-mechanical device, the number of I/Os does not increase in proportion to the capacity increase, so the ability to get information from the device is reduced relative to capacity. The time to rebuild information from a failed drive in a RAID configuration also suffers with new hard drive technology. The larger capacities may take so long to rebuild that the probability of a second failure during the rebuild becomes unacceptable. Some vendors have implemented advanced Forward Error Correction using erasure coding to address this problem.
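The rebuild problem and the erasure-coding idea can be illustrated with the simplest possible code: single XOR parity, as in RAID 5. Production erasure codes such as Reed-Solomon survive multiple simultaneous failures; this sketch shows only the basic principle of rebuilding lost data from survivors plus redundancy:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Four data "drives" plus one parity drive.
data = [b"\x11\x11", b"\x22\x22", b"\x33\x33", b"\x44\x44"]
parity = xor_blocks(data)

# Drive 2 fails; rebuild its contents from the survivors plus parity.
survivors = [d for i, d in enumerate(data) if i != 2]
rebuilt = xor_blocks(survivors + [parity])   # equals the lost b"\x33\x33"
```

Note that the rebuild must read every surviving drive in full, which is why rebuild time grows with drive capacity and why larger drives raise the odds of a second failure mid-rebuild.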
The advances that new technology forces on older technology are valuable to customers and help “move the ball forward,” to use a football term. Competition is good for the customer whether it is in the form of storage vendors competing with products or in the form of new technologies competing for generational change and dominance.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
EMC acquired Isilon two years ago to fill a void among big data and scale-out NAS use cases that mainstream NAS products could not handle. Now Isilon is taking steps to become better suited to mainstream enterprise applications with the latest version of its OneFS operating system that works with all Isilon hardware platforms.
EMC is making its Isilon OneFS 7.0 operating system, code-named “Mavericks,” generally available Friday. Previewed at EMC World in May, OneFS 7.0 has data protection, performance, security and interoperability features more suited to mainstream NAS products than the traditional clustered NAS Isilon capabilities.
Isilon is used largely in media and entertainment, life sciences, oil and gas exploration, healthcare and other high-performance applications. Sam Grocott, VP of marketing for EMC Isilon, said the large capacity files used in those industries now increasingly show up in enterprises.
“Isilon has been used in a world of massive capacity and extreme I/O performance environments that can grow quickly,” Grocott said. “Now we’re seeing those types of data sets show up in enterprise data centers. For instance, we’ve seen much more rapid adoption of enterprise customers dealing with extremely large home directories. We’re seeing up to hundreds of terabytes for a home directory.”
EMC claims the new OneFS version increases single file system throughput by 25% over the former version, and that new caching capabilities reduce latency by up to 50%. OneFS 7.0 reduces latency by giving each storage node its own nonvolatile random access memory (NVRAM) with built-in cache, mirroring cached writes to other nodes’ caches via InfiniBand across the cluster, and confirming the write after the mirror completes. Previous versions of OneFS would write data to disk after caching it, before confirming the write was complete.
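The write path described above is a standard pattern for making a write cache safe: cache locally, mirror the cached write to a peer node, then acknowledge, deferring the disk write. A conceptual sketch with hypothetical node and cluster names, not OneFS internals:

```python
class Node:
    """A storage node with an NVRAM-backed write cache and a backing disk."""

    def __init__(self, name):
        self.name = name
        self.nvram = {}   # cached writes, durable across power loss
        self.disk = {}    # writes are flushed here later

    def cache_write(self, key, value):
        self.nvram[key] = value

    def flush(self):
        # Destage cached writes to disk in the background.
        self.disk.update(self.nvram)
        self.nvram.clear()

def clustered_write(primary, mirror, key, value):
    """Ack only once the write is cached on two nodes.

    Old behavior: write to disk, then ack (slow).
    New behavior: cache, mirror the cache, ack; flush to disk later.
    """
    primary.cache_write(key, value)
    mirror.cache_write(key, value)   # e.g., over InfiniBand in a real cluster
    return "ack"                     # durable on two nodes, not yet on disk

n1, n2 = Node("node1"), Node("node2")
status = clustered_write(n1, n2, "/home/user/file", b"payload")
```

The latency win comes from acknowledging after a fast cache mirror instead of a slow disk write, while the two NVRAM copies protect against a single node failure before destaging.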
Data protection improvements include the ability to use an active snapshot as a writeable snapshot, so a snap no longer has to be copied into an active file system to replace a lost file. Copying the snap could require a lengthy wait in a big data environment. EMC also added one-click failover and fail back to Isilon’s SyncIQ replication software for disaster recovery.
New security features include compliance with SEC 17a-4 requirements for tamper-proof data protection, roles-based administration to prevent unauthorized change to files, and the creation of isolated storage pools with authentication zones.
“We’re not physically creating separate storage silos, but we are logically separating access and directories,” Grocott said of the authentication zones. “Service providers are big proponents of this.”
Interoperability improvements include a REST-based API for third-party vendors to write to, and support for VMware vStorage APIs for Array Integration (VAAI) and vStorage APIs for Storage Awareness (VASA).
While casting Isilon as a more mainstream storage system, EMC is stopping short of pushing its iSCSI support for block storage. The midrange VNX platform is EMC’s main unified storage product, even though Isilon does support iSCSI.
“The way customers use our storage, it’s predominantly file today and will continue to be that way,” Grocott said. “We’re going to be focusing on file-based storage.”
I recently spoke with a storage software vendor promoting a product that was an independent storage management and reporting tool. The functionality it performed was impressive. There were high-value capabilities that a storage administrator could find useful. The product ticked many of the boxes for what was needed by a storage administrator.
But it was really a standalone product. It required a separate physical server for installation. It did not integrate with any top-level management software or other real-time monitoring software, and there was no link to any other storage management tools. The product had a narrow focus: it did one thing, but it did it well. I got the mental image of being handed a single screwdriver, then having to find a whole box of other tools to fix the car and keep it running.
The vendor had great pride in what the product did and that was understandable. The term “best in class” fit the product. But it was only one screwdriver. The tool would be useful, but the scope of managing storage is much bigger than that.
There was opportunity for the product. There would be IT storage administrators who needed that specific tool. The specialists that would use an independent tool are primarily in the high-end of the enterprise market. The lower segments of the industry typically have fewer unique specialists. The administrators there have multiple responsibilities, which in many cases now include server virtualization administration, operating system, networking and storage.
A tool that can work in an environment where the administrator is not necessarily focused on storage and does not have the specialist training would be much more useful. How this reconciles with the “best in class” designation may cause a rethinking of the parameters applied to that definition.
It comes down to being uncompromising in the way the tool operates versus making it work for a broader segment of the market. If there are enough sales of an independent product, the vendor will continue with this method. What constitutes enough sales depends on the vendor and the price of the product. The determination about making the product work in a more integrated environment has many considerations – additional development time required, changes that occur outside of the vendor’s control, which integrated environments to target, etc.
For the IT customer, the evaluation of storage management processes and tools needs to include the different offerings available and the needs for the environment. “Best in class” may not always be best in the environment.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
After another rocky quarter, FalconStor has hired investment banker Wells Fargo Securities to evaluate “strategic alternatives.”
FalconStor CEO Jim McNiel stopped short of saying the data protection vendor is for sale but said he is considering all options. He said FalconStor cannot fund all the projects on its development roadmap without a cash injection.
When asked during FalconStor’s earnings call Wednesday if he would consider selling off patent licenses or other intellectual property, McNiel said, “the main purpose behind retaining Wells Fargo is to explore all strategic alternatives. We have a sizable R&D pipeline and a number of key products we would like to invest in. We are developing [disaster recovery and next-generation deduplication and backup storage repository products], and those initiatives are being funded from operations. There are other projects we would love to fund, but we can’t do it from operations.”
McNiel predicted that the new data protection products in development would bring FalconStor revenue to double-digit growth, and he said he expects the company to be cash flow positive this quarter after cost-cutting moves including layoffs. But FalconStor’s $17.1 million revenue last quarter was down about 9.5% from $18.9 million in the same quarter last year. FalconStor lost $3.6 million last quarter. That’s not as bad as the $5.4 million lost in the same quarter of 2011, but it leaves the vendor with $25.8 million in cash and assets.
FalconStor is counting on $10 million in annual savings from its recent reorganization, but McNiel said it is difficult to close sales because of soft IT spending and heavy discounting by competitors.
“We weren’t terrifically happy with the results,” McNiel said on the earnings call.
He continued to paint a rosy product picture for the future, however, saying FalconStor’s near-term product roadmap represents a “new day of innovation.”
Rivals like to remind potential customers of FalconStor’s rough recent history, including a $5.8 million payment to the U.S. Securities and Exchange Commission (SEC) to settle criminal and civil charges that the vendor bribed a customer. Those charges prompted founder ReiJane Huai to resign as CEO in September 2010, and he committed suicide a year later.
While McNiel said the goal of any transaction would be to increase shareholder value, several FalconStor channel partners said they would welcome a sale of the company back when McNiel replaced Huai in 2010.
Scale Computing will use the $12 million funding round it recently closed to market its HC3 “hyper-converged” platform that combines storage, virtualization and networking. That marketing job includes explaining why the startup chose to use Red Hat’s KVM as the embedded hypervisor instead of VMware.
The funding release that went out last week claims that Scale had close to 100 HC3 deployments in the first four weeks after its launch, and that HC3 accounted for half of the startup’s sales in the third quarter. Scale still sells storage-only systems but is clearly shifting its focus to its converged box.
“HC3 is now the foundational product for the company,” said Pat Conte, Scale’s general manager of worldwide field operations.
While much of the funding round will be spent on marketing, Conte said there are product enhancements on the short-term roadmap. One new feature will be the ability to migrate virtual machines from the user interface. Others include “more complex networking, and things that will enhance the UI but not change functionality.”
One thing Scale will not change in the near-term is its decision to use the KVM hypervisor instead of VMware inside the HC3. Conte said Scale supports VMware for customers who want to use HC3 as a SAN connected to external servers. But the servers inside HC3 are built on KVM along with Scale’s clustered file system.
Conte said Scale chose KVM over VMware because it is cheaper to license and “KVM is the fastest hypervisor we found.”
That’s the technical reason for snubbing VMware. There is also a strategic one, which Scale CEO Jeff Ready laid out in a recent e-mail exchange with me. Pointing to Hitachi Data Systems’ new Unified Compute Platform (UCP) converged stack, which is managed through VMware vCenter, Ready charged that HDS is the latest vendor to fall into EMC’s VMware trap. He added that having an integrated hypervisor is a crucial part of a converged stack, but EMC rivals should avoid using VMware because EMC is the majority owner of VMware. He expects EMC to integrate VMware into its hardware and work more closely with VMware than other storage vendors can.
“When products move to fully integrated hypervisors, it is EMC who sits in the driver’s seat with their relationship with VMware,” Ready wrote. “And all hardware vendors who are driving customers to a semi-converged solution requiring VMware are falling into the trap EMC has set for them.”
Ready claims Scale avoids the EMC trap by using KVM, “eliminating VMware licensing costs entirely, and offering an elegant, powerful solution that steers customers clear of this EMC trap.” He also maintains the days of VMware’s vendor neutrality are over.
“To me, VMware, as we’ve known it, is dead,” Ready added. “The hypervisor itself, as a differentiator, is dead. What matters are the applications, the management tools, and the automation. Selling those types of tools — often at very high licensing costs — is the world of EMC. VMware is EMC. What we are offering is an alternative solution to this, one that leverages the current hyper-convergence trends – and builds its tools, licensing and integration into a package built specifically for the midmarket.”
Of course, basing your survival on the death of VMware is either lunacy or genius. That makes Scale worth watching either way.