Storage Soup

January 6, 2016  2:02 PM

Nimbus CEO: We’re still alive

Dave Raffo

Nimbus Data has not “vanished,” as I wrote in a story about storage vendors facing challenges in 2016. The all-flash vendor has just been keeping “very quiet” for the past 18 months or so.

Nimbus CEO and founder Tom Isakovich sent me an e-mail today saying he is still in business and has been working on a new all-flash product.

“Nimbus Data has been very quietly at work on its most ground-breaking all-flash technology yet and soon will be unveiling a battery of new systems and software,” Isakovich wrote. “The all-flash wars have only begun. Less than five percent of storage systems revenue is currently derived from all-flash systems. This is still the very early days of this industry.”

It was the first I had heard from Nimbus since the summer of 2014, when the vendor abruptly cancelled a briefing for a new array. The “latest news” on the front page of the Nimbus web site is a press release dated June 2014. Most industry analysts I talk to regularly thought Nimbus had closed its doors.

Gartner listed Nimbus as a niche player in the all-flash array Magic Quadrant published in June 2015, but the report said “Many customers who approach Nimbus Data and request information, offers, quotations and participation in RFIs and RFPs do not receive an answer.”

Nimbus has always been a lean operation, as Isakovich has not taken any venture funding. Still, he had always kept industry analysts and media informed of product news until suddenly going quiet.

“I understand that being quiet is out-of-the-norm for us, but we will return to our vocal selves soon,” Isakovich said in his e-mail.

Isakovich added that the vendor is still selling its Gemini all-flash arrays, which were last upgraded in May 2014.

January 5, 2016  8:33 AM

SimpliVity cozies up to Microsoft, Hyper-V

Dave Raffo

Hyper-converged pioneer SimpliVity has opened a research and development office in Seattle near Microsoft, a sign that it is close to adding Hyper-V support.

SimpliVity’s OmniCube appliances combine storage, compute and virtualization. SimpliVity has supported VMware hypervisors since its start in 2013 and added KVM support in 2015 but still lacks support for Hyper-V.

SimpliVity chief marketing officer Marianne Budnik said OmniCubes will support Hyper-V in early 2016. She said the vendor will have around 30 developers in Seattle “working day in and day out to advance all things Microsoft.”

SimpliVity’s goal is for its OmniStack data virtualization platform to support any hypervisor, x86 server, cloud provider and management tool.

Budnik said SimpliVity would not follow rival Nutanix in developing its own native hypervisor, though. Nutanix last June launched its Acropolis hypervisor. That alleviates the need for an outside hypervisor, and also can help Nutanix customers move workloads across different vendors’ hypervisors. Nutanix also supports VMware, Hyper-V and KVM hypervisors.

“We haven’t heard customers ask for yet another hypervisor,” Budnik said when asked if SimpliVity would develop a hypervisor. “We’re focused on simplifying and lowering cost below the hypervisor level. We look forward to working more closely with VMware, Hyper-V and KVM.”

Microsoft has also accepted SimpliVity into the Microsoft Enterprise Cloud Alliance program. That will help SimpliVity integrate faster with the Azure cloud as well as future Hyper-V releases.

“We will develop alongside Microsoft instead of waiting for them to make a major release and then follow later,” Budnik said.

When SimpliVity adds Hyper-V support, customers will be able to manage OmniCubes through Microsoft System Center, similar to how they manage VMware hypervisors through vCenter.

Budnik said more than 80% of SimpliVity customers run Microsoft workloads such as SQL, Exchange or SharePoint on OmniCube appliances.

December 31, 2015  10:53 AM

Archive redefined

Randy Kerns

When the word “archive” is used in conversations about storing data, it brings up preconceptions depending on the individuals and their roles in IT. The most common thought is that an archive is where backup data goes, and the term is associated with backup software and tape. The other thought is that an archive is for data that is not needed anymore and is the place where data goes to die.

These preconceptions are unfortunate and wrong. They foster resistance to implementing or using an archive, and often lead to dismissal of the concept. They also lead to relegating an archive to usage by those who manage the backup process.

Certainly, this limits flexibility for IT usage and leads business owners to believe that an archive means a different type of access or an unacceptable delay in getting access to their information. This attitude does not allow for using an archive as another tier of storage.

These concepts of archiving mistakenly assign a fixed value to the data stored. But the value of data changes; it does not just diminish with time, and may even increase in importance. Ultimately, these preconceptions lead to treating the archive as a location for abandoned data.

The term “archive” is both a verb and a noun. The noun part – dealing with location – needs to be redefined. Preconceptions are difficult to counter, and this has led to use of a new term – “content repository.” That term connotes different types of usage, but mainly it is used to describe another tier of storage with a different cost structure that still provides the online access expected by business owners. The content repository can serve as an archive for backup data, as a secondary storage location, and as what some describe as an “online archive.”

It is difficult to change the preconceptions of what an archive is. The term needs to be redefined to reflect the economic value an online archive can provide. The easiest path may be to start a new discussion about a content repository and explain the usage in each case.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

December 23, 2015  5:12 PM

Backup appliance revenue up slightly

Sonia Lelii

Worldwide factory revenues for backup appliances grew 2.2 percent year-over-year in the third quarter this year, with revenue totaling $813.6 million, according to International Data Corp.’s Worldwide Quarterly Purpose-Built Backup Appliance Tracker.

EMC remains the leader in this market with 61.4% revenue share, or $499 million, while Symantec and its Veritas business came in at No. 2 with 14.3%, or $116.5 million in revenue. IBM and Hewlett Packard Enterprise (HPE) followed, with IBM generating 5.2% market share, or $42 million in revenue, and HPE garnering 5% revenue share, or $40.4 million. Dell generated about $24 million in revenue, or 2.9% market share.

Vendors in the “other” category generated about $92 million in revenue, or 11.3% market share, down from 11.5% in the same period of 2014.

The $813.6 million total for the backup appliance market compares with $796.4 million during the same period in 2014, an overall 2.2 percent increase.

IBM experienced a 16% year-over-year decline in revenue, while HPE’s revenue increased 38%, to $40.2 million from $29.2 million in the third quarter of 2014. HPE had the biggest increase, followed by Dell with a 32.2% jump.

IDC defines purpose-built backup appliances as disk-based server engines that are used as a target for backup data and replicated backup data. The appliances are standalone systems for backup but also include features such as deduplication, compression, encryption and remote replication.

Factory revenue for open systems purpose-built backup appliances grew 3.8 percent year-over-year in the third quarter of 2015, totaling $742.4 million worldwide. The mainframe backup market declined 12.3% during the same period. Total worldwide capacity shipped for purpose-built backup appliances was 831 PB, an increase of 22.4% year-over-year.

Liz Conner, research manager for storage systems at IDC, said vendors have upgraded their products to support the cloud and have placed a bigger focus on deduplication, backup software and simpler single-pane-of-glass management for backups.

December 23, 2015  10:19 AM

Nutanix prepares for 2016 IPO

Dave Raffo

Hyper-converged vendor Nutanix set itself up to become the first storage vendor to go public in 2016.

Nutanix filed an S-1 statement, which is the first step towards an initial public offering (IPO). IPOs have been rare in technology recently. Among storage companies, only all-flash array vendor Pure Storage went public in 2015.

Like Pure’s, Nutanix’s S-1 filing shows a history of impressive revenue for a young company, but heavy losses as well, with no sign of profitability in the near future.

Nutanix reported revenue of $30.5 million for fiscal 2013 (its fiscal years end July 31), $127.1 million for fiscal 2014 and $241.1 million for fiscal 2015. It claimed $87.8 million in revenue last quarter.

But Nutanix lost $44.7 million in 2013, $84 million in 2014 and $126.1 million in 2015. It lost another $38.5 million last quarter, for total losses of $312 million over its history.

The filing gave no forecast of when Nutanix expects to become profitable.

Nutanix raised more than $312 million in venture funding, including a $140 million round in Aug. 2014.

Nutanix claims approximately 2,100 customers, including 226 of the Global 2000. More than 1,000 customers were added during fiscal 2015, and Nutanix added another 345 last quarter. The Nutanix S-1 filing listed Activision Blizzard, Best Buy, Kellogg, Nasdaq, Nintendo, Nordstrom, Inc., Toyota Motors of North America, and the U.S. Department of Defense as customers.

Those customers have come at a great cost. Sales and marketing is Nutanix’s largest expense. The vendor spent $161.8 million of its total $259.2 million in expenses on sales and marketing in 2015, and another $58.6 million (of $89.8 million in total expenses) last quarter. Research and development cost $73.5 million in 2015 and $23.9 million last quarter.

Hyper-converged systems combine storage, compute and virtualization in one box. Nutanix, founded in 2009, began selling the first hyper-converged systems on the market in Oct. 2011. Its early systems were targeted to VMware customers, but that strong partnership frayed after VMware launched its own Virtual SAN (VSAN) hyper-converged software in 2014. Nutanix developed its own hypervisor called Acropolis that became available this year to compete with VMware. Nutanix also supports Microsoft Hyper-V and KVM hypervisors along with Acropolis and VMware vSphere. Prism management software is the other key piece of the Nutanix platform.

Nutanix sells its software on appliances built by Super Micro. It also sells through OEM partners Dell and Lenovo, who sell Nutanix software on their servers.

Nutanix mentioned those OEM deals in the filing. Nutanix did not say how much revenue has come through Dell since Dell started selling Nutanix systems in late 2014, but did mention that the relationship is complicated by Dell’s proposed $67 billion acquisition of EMC. Dell will also acquire EMC-owned VMware, which competes with Nutanix.

“Dell will control VMware, and could combine the Dell, EMC and VMware product portfolios into unified offerings optimized for their platforms,” the filing stated.

December 21, 2015  10:36 PM

NetApp buys flash startup SolidFire, discontinues FlashRay

Carol Sliwa

NetApp filled a gap in its flash storage portfolio today with an $870 million deal to buy SolidFire and finally threw in the towel on its long-delayed FlashRay product line.

The SolidFire acquisition will give NetApp a scale-out all-flash array with sophisticated volume-level quality of service (QoS) controls. The product initially appealed in particular to cloud service providers, but the Colorado-based startup later added storage features to expand its customer base to general enterprises.

Meanwhile, NetApp relied largely on its FAS and EF Series storage systems loaded with only solid-state drives (SSDs) to compete in the hot all-flash array market. The company had long promised a scale-out FlashRay product designed from the ground up for flash. But the development work dragged on for years, and only a single-node FlashRay model has seen limited availability to date.

NetApp CEO George Kurian cited the SolidFire acquisition as “an excellent example of our investment in areas of growth,” but he said the trade-off is the immediate discontinuation of the FlashRay program. Kurian said NetApp would not bring the FlashRay product to market, although he claimed much of the program’s intellectual property is currently integrated into NetApp products or available for future development.

“We feel that with All Flash FAS we can cover the preponderant majority of the use cases that FlashRay used to be planned for, and with SolidFire, we can cover the remaining as well as multiple new use cases that neither FlashRay nor All Flash FAS would have been able to cover,” he said.

Kurian said NetApp will have all-flash offerings to address each of the three largest all-flash array market segments: the SolidFire line for “customers that want to deploy an Amazon- or a Google-like highly distributed, shared-nothing environment built on top of white-box economics,” the All Flash FAS (AFF) for the “enterprise buyer who values a lot of storage services,” and the EF Series for application owners focused on performance and consistent low latency.

He said the SolidFire technologies would complement the All Flash FAS product line and provide opportunities for new customers in the cloud service provider market as well as with enterprise data centers that want to deploy next-generation, Web-scale architectures on premises for applications such as NoSQL databases, Hadoop and DevOps.

“SolidFire’s market is bigger and faster growing than the All Flash FAS market. I think it represents the choices that customers make to deploy Web-scale designs increasingly in their data centers,” said Kurian.

But Kurian also said that NetApp was pleased with the growth rates of the EF and All Flash FAS products for enterprise use cases. He cited a $370 million run rate for the EF and All Flash FAS lines.

“The performance and economics of flash are continually improving, and therefore it is being used to address a broader and broader range of use cases in the enterprise,” said Kurian. “And this caused us to want to address those customer segments that are now transitioning from disk to flash.”

Kurian said NetApp reviewed all of the all-flash array architectures in the market and thinks it acquired “the best and the most differentiated set of capabilities for the customers’ data centers of the future.” SolidFire CEO Dave Wright will head the SolidFire product line within NetApp’s product operations, he said.

NetApp’s and SolidFire’s boards of directors unanimously approved the $870 million cash deal, which will be financed through debt, according to NetApp’s chief financial officer, Nick Noviello. He said NetApp expects to repay the debt over the course of its fiscal 2017.

“I’m excited about the potential of the SolidFire acquisition to further the topline growth of NetApp over time,” said Kurian.

NetApp expects the SolidFire acquisition to close in its fiscal 2016 fourth quarter. The company’s fiscal 2015 fourth quarter ended on April 24.

December 18, 2015  5:40 PM

Metalogix retools its Essentials for Office 365

Sonia Lelii

Cloud software provider Metalogix recently announced an enhanced version of its Essentials for Office 365 with backup support for Exchange Online mailboxes, a granular migration function and a security and permissions management tool.

The company’s Essentials for Office 365, a content management product, now supports backup and data protection for Microsoft Exchange. It allows IT administrators to back up and restore data from Exchange Online mailboxes. The software comes preloaded with multiple applications that collect the data under a single console platform.

“The stock and trade of our company is migration. We continue to enhance that capability but we are also moving beyond that,” said Abe Peled, Metalogix’s CEO.

The software also was upgraded with more granular migration capabilities to move content, lists, libraries, workflows, permissions and metadata from multiple sources. The software also can connect to online file-sharing tools like Dropbox and Box to migrate data in and out of those systems.

A new Information Manager tab allows users to “put the controls right in the ribbon of their SharePoint interface.” The Essentials for Office 365 platform is fully compatible with the SharePoint 2016 beta, bringing the functionality across all versions of SharePoint. Users can take on-premise SharePoint file shares and bring them into the cloud-based Office 365 as a way to consolidate application management and tag, move, upload and download data in bulk.

Essentials for Office 365’s security also was enhanced with new security and permissions management capabilities that include the ability to discover, analyze and manage users and permissions across multiple sites. The software can identify sensitive content and analyze it for pattern recognition, identify orphan data and revoke permissions.

Terri McClure, senior analyst at ESG Research, said Metalogix has retooled its Essentials for Office 365.

“It’s a new content lifecycle management platform,” she said. “Before they were primarily focused on migration capabilities. Now they are helping to do content lifecycle management across heterogeneous platforms, across multiple cloud services.”

December 17, 2015  8:37 PM

Exablox lets users pick their dedupe style

Dave Raffo

While Exablox describes its object-based OneBlox appliance as a production storage system, many of its customers use it for backup. This week the vendor added a useful backup feature – variable-length deduplication.

Variable-length deduplication is an alternative to fixed-length deduplication. Variable-length dedupe breaks files into chunks of various sizes determined by the content itself, while fixed-length dedupe breaks all files into chunks of the same size at fixed offsets. Because chunk boundaries follow the data rather than fixed positions, variable-length dedupe can still find duplicates when data shifts within a file, and it typically achieves better reduction ratios.
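
To make the distinction concrete, here is a minimal Python sketch of the two chunking approaches; the window size, boundary mask and chunk sizes are illustrative assumptions for the example, not Exablox’s actual implementation.

```python
# Minimal sketch of fixed-length vs. content-defined (variable-length) chunking.
# The window size, boundary mask and chunk sizes below are illustrative
# assumptions, not Exablox's actual implementation.
import hashlib
import os

def fixed_chunks(data: bytes, size: int = 4096):
    """Split data into equal-sized chunks at fixed offsets."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data: bytes, window: int = 48, mask: int = 0x1FFF,
                    min_size: int = 2048, max_size: int = 16384):
    """Cut a chunk wherever a hash of the trailing byte window matches a pattern,
    so boundaries follow the content rather than fixed offsets."""
    chunks, start = [], 0
    for i in range(len(data)):
        if i - start < min_size:
            continue
        fp = int.from_bytes(
            hashlib.blake2b(data[i - window:i], digest_size=4).digest(), "big")
        if (fp & mask) == 0 or i - start >= max_size:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def unique_chunks(chunks):
    """Index chunks by hash; a dedupe store would keep only these."""
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

# Inserting a few bytes at the front shifts every fixed-length boundary, so almost
# no chunks match, while most content-defined chunks (and their hashes) survive.
original = os.urandom(256 * 1024)
edited = b"new header" + original
print("fixed:   ", len(unique_chunks(fixed_chunks(original) + fixed_chunks(edited))))
print("variable:", len(unique_chunks(variable_chunks(original) + variable_chunks(edited))))
```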

Exablox senior director of products Sean Derrington said he expects a backup on OneBlox with variable-length dedupe to typically provide a 10:1 ratio compared to 3:1 for fixed length.

EMC’s Data Domain and Avamar, and Quantum DXi disk backup systems also use variable-length dedupe.

Exablox storage is object based, but users access data through an NFS or SMB file share. All of its dedupe and compression occur inline, and it also supports continuous snapshots and replication between boxes.

Exablox supported fixed-length dedupe on OneBlox from the start. Derrington said he expects some customers will still use fixed-length for primary storage but variable-length will be the more popular option for backup. Customers can use fixed-length and variable-length for different applications inside the same storage pool.

“For any applications that they’re storing on OneBlox, customers can define storage policies on a per share basis or application basis,” Derrington said. “They can decide if they want fixed-length or variable-length dedupe, if they want compression on or off, snapshots on or off, or remote replication on or off. They can use fixed-length dedupe because it’s better suited for primary data and turn compression off on videos or images because they don’t compress well. They can turn snapshots off if the application does [snapshots].”
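
As a rough illustration of the per-share policy model Derrington describes, the choices boil down to a small policy record attached to each share. The field names and share names below are invented for the example; they are not Exablox’s actual API.

```python
# Hypothetical per-share storage policies, in the spirit of Derrington's description.
# Field names and share names are invented for illustration, not Exablox's API.
from dataclasses import dataclass

@dataclass
class SharePolicy:
    dedupe: str = "variable"       # "fixed" or "variable"
    compression: bool = True
    snapshots: bool = True
    remote_replication: bool = False

policies = {
    # Primary database share: fixed-length dedupe suits primary data.
    "sql-primary": SharePolicy(dedupe="fixed"),
    # Video archive: images and video don't compress well, so turn compression off.
    "media-archive": SharePolicy(compression=False),
    # Backup target: variable-length dedupe; the backup app handles its own snapshots.
    "backup-target": SharePolicy(dedupe="variable", snapshots=False,
                                 remote_replication=True),
}

for share, policy in policies.items():
    print(share, policy)
```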

Derrington said a customer using fixed-length dedupe now can switch to variable-length, and get the full benefit of the better ratios after the current retention period passes. “If they have a 14-day retention period, on Day 15 all the data that’s been backed up is on variable-length dedupe,” he said.

Tim Stammers, a senior analyst at 451 Research, said Exablox offers “simple cheap and deep storage” with a twist. “It’s unusual to have native NFS and SMB on an object box,” he said. “Exablox supports existing apps and leaves you with object storage underneath.”

Exablox also added an on-premise option for managing OneBlox appliances. Private OneSystem proactively monitors and identifies potential problems. It is deployed as a virtual machine inside a customer’s data center. From the start, Exablox used a cloud-based OneSystem for storage management. Now customers can choose between on-prem and cloud management.

December 11, 2015  3:21 PM

Nakivo adds Synology support to its backup software

Sonia Lelii

Nakivo this week upgraded its Backup and Replication software, adding support for Synology RackStation and DiskStation NAS devices for virtual machine backup in VMware environments and to the Amazon Cloud.

In October, the company created a similar installer for Western Digital NAS devices, turning them into onsite or offsite virtual machine backup appliances. The latest software installer supports up to 20 Synology NAS models, and next year Nakivo plans to support QNAP NAS systems.

“Our requirements are quite modest,” said Sergei Serdyuk, Nakivo’s director of product management. “We support everything that has one gigabyte of RAM and two CPUs. Our software runs right on the [NAS] box. You don’t have to go through any protocols like CIFS. The installment has been made very simple and should not be a problem.”

Nakivo Backup and Replication runs on a physical or virtual machine within a VMware environment and helps boost backup speeds when the software is deployed directly on a Synology NAS. The backup data is written directly to the NAS disks, bypassing NFS and CIFS protocols.

The backup and replication software can use the available space on the NAS device to store virtual machine backups. The VMware data is automatically deduplicated at the block level so that only the unique data is written to the virtual machine backup repository.

The data deduplication works on a global level across the entire backup repository so that all the data from all the virtual machine backups are factored in. After the virtual machine data is deduplicated, the software automatically compresses each block of data to save space in the backup repository.
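
A simplified sketch of that flow, block-level dedupe against a global index followed by per-block compression, might look like the following. The block size and storage layout are assumptions for illustration, not Nakivo’s actual on-disk format.

```python
# Toy model of a global, block-level deduplicating backup repository with
# per-block compression. Block size and layout are illustrative assumptions,
# not Nakivo's actual on-disk format.
import hashlib
import zlib

BLOCK_SIZE = 64 * 1024

class BackupRepository:
    def __init__(self):
        self.blocks = {}    # block hash -> compressed block (global, across all backups)
        self.backups = {}   # backup name -> ordered list of block hashes

    def write_backup(self, name: str, data: bytes):
        hashes = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:          # only unique blocks are stored
                self.blocks[digest] = zlib.compress(block)
            hashes.append(digest)
        self.backups[name] = hashes

    def restore_backup(self, name: str) -> bytes:
        return b"".join(zlib.decompress(self.blocks[h]) for h in self.backups[name])

repo = BackupRepository()
vm_disk = b"operating system blocks " * 100_000
repo.write_backup("vm1-monday", vm_disk)
repo.write_backup("vm2-monday", vm_disk + b"app data " * 10_000)  # shares most blocks
assert repo.restore_backup("vm1-monday") == vm_disk
print(f"{len(repo.blocks)} unique blocks stored for {len(repo.backups)} backups")
```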

December 8, 2015  10:59 AM

Tintri adds entry-level all-flash, teases analytics

Dave Raffo

Virtual machine-storage specialist Tintri rounded out its all-flash product line today with an entry-level version priced at $125,000 for 5.34 TB of raw capacity.

The VMstore T5040 is the third model in the T5000 series launched in August. The T5000 series now matches Tintri’s flagship T800 hybrid platform with three versions, each tuned for a certain number of virtual machines.

The three T5000 models use the same dual-controller 2U box, each with 24 solid-state drives. The difference is in the capacity of the SSDs. The T5040 uses 240 GB SSDs and is rated for up to 1,500 VMs. The T5060 holds 480 GB SSDs for 11.5 TB of raw capacity and 2,500 VMs, and the T5080 uses 960 GB SSDs for 23 TB of capacity and 5,000 VMs. All T5000 systems ship fully populated with 24 SSDs.

“We had designed all three from the start, but wanted to test our customer base and see if there was demand for something lower than the 5060,” said Chuck Dubuque, Tintri senior director of product marketing. “A lot of our customer needs are still provided by hybrid systems, but we did have requirements for 100 percent flash for certain workloads.”

Dubuque said Tintri expected all-flash to account for about 10% to 20% of new systems sold when it launched the T5000, and that forecast has been accurate. He said he expects most all-flash arrays to go to T800 customers who want better performance for workloads that prove difficult to virtualize.

Tintri on Thursday will preview a new version of Tintri Analytics in an online presentation.

Dubuque said new predictive analytics features expected in 2016 will build on Tintri’s current real-time monitoring and troubleshooting. The new analytics will predict problems and the results of changes to applications. Tintri analytics are done at the VM-level.

“We have this rich data set from every single VM in terms of metadata about performance, size, how much flash it is utilizing on hybrid arrays, the name of the VM, which hypervisor it’s on,” he said. “Customers will be able to use that to predict future growth requirements much more accurately. We will use customers’ historical VM level data to model workload growth needs. For example, it will show if a customer is running low on flash and might run out of flash performance before running out of physical capacity. This will tell you the best solution – whether you should buy another hybrid system, an all-flash system, and what you need to do to rebalance your system if your needs change.”
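
As a back-of-the-envelope illustration of that kind of prediction, projecting each resource’s historical consumption forward shows which limit a system hits first. The monthly samples and the simple linear fit below are invented for the example; Tintri’s actual models are built from per-VM historical data and are not public.

```python
# Toy projection of which resource a hybrid array exhausts first, in the spirit
# of the predictive analytics Dubuque describes. The samples and the linear fit
# are invented for illustration only.

def months_until_exhausted(history, limit):
    """Fit a straight line to monthly usage samples and project when it crosses the limit."""
    n = len(history)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")
    return (limit - history[-1]) / slope

# Hypothetical monthly samples: flash working-set size (TB) and total capacity used (TB).
flash_used = [2.0, 2.6, 3.3, 4.1, 4.8]      # flash tier, 5.7 TB usable
capacity_used = [20, 22, 23, 25, 26]        # whole pool, 60 TB usable

flash_runway = months_until_exhausted(flash_used, limit=5.7)
capacity_runway = months_until_exhausted(capacity_used, limit=60)

if flash_runway < capacity_runway:
    print(f"Flash performance runs out first, in ~{flash_runway:.1f} months: "
          "consider an all-flash system or rebalancing hot VMs.")
else:
    print(f"Physical capacity runs out first, in ~{capacity_runway:.1f} months: "
          "consider adding another hybrid system.")
```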
