Storage Soup


September 28, 2018  1:22 PM

Report: Support for high availability applications is ‘mixed’

Paul Crocetti
High Availability

What are high availability applications if they’re not highly available?

According to a report released this month by SIOS, in partnership with ActualTech Media, one-quarter of respondents say their high availability applications fail every month. Only 5% said they never suffer an availability failure.

“An organization’s highly available applications are generally the ones that ensure that a business remains in operation. Such systems can range from order-taking systems to CRM databases to anything that keeps employees, customers and partners working with you,” the report said. “… The news is mixed when it comes to how well HA applications are supported.”

The report, “The State of Application High Availability,” gathered responses from 390 IT professionals in the United States and focused on tier-1 mission-critical applications, including Oracle, Microsoft SQL Server and SAP/HANA.

Twenty-six percent said their availability service fails at least once a month.

“This is a difficult statistic to grasp, as it would seem that there’s a fundamental flaw somewhere that needs to be corrected,” the report said. “Fortunately, not everyone is faring this badly.”

As for the rest of that 95% who reported failures in their high availability applications, 28% said it happens every three to six months, 16% said every six to 12 months and 25% said once per year or less.

High availability requires expertise, said Jerry Melnick, president and CEO of SIOS, a software company that manages and protects business-critical applications. That includes choosing software that matches requirements, configuring the system correctly, and applying discipline and management to how organizations approach the cloud, he said.

Is high availability up in the cloud?

As with many other uses, organizations are exploring the use of the cloud for high availability applications.

“Modern organizations are embracing the hybrid cloud and making strategic decisions around where to operate critical workloads,” the report said. “But not everyone is keen on moving applications into an off-premises environment.”

Twelve percent of respondents have not moved any high availability applications to the cloud. Twenty-four percent are running more than half of those applications in the cloud.

“Putting all those pieces together … requires a higher set of IT skills,” Melnick said.

Once an organization gets there, though, the cloud can help streamline high availability operations.

“The cloud offers a unique opportunity to cost effectively get to disaster recovery and handle disaster recovery scenarios,” Melnick said.

Sixty percent of organizations that haven’t made the full move to the cloud said they prefer to keep high availability applications on premises where they have more control over the infrastructure.

Melnick said he thinks some of those respondents will eventually move to the cloud.

September 27, 2018  10:11 AM

Startup raises $60 million to grow Datrium DVX business

Carol Sliwa
Storage

Datrium’s latest $60 million funding will fuel its hybrid cloud computing and data management product line and business expansion into Europe.

The Series D funding round boosted the Sunnyvale, California-based startup’s overall total to $170 million since 2012. New CEO Tim Page closed the round as he tries to pivot the company from its SMB and midmarket roots to enterprise sales of Datrium DVX.

Former CEO Brian Biles, a Datrium founder who is now chief product officer, said the startup is having a great quarter, and Page has “re-energized a lot of our focus on go-to-market.” Page’s experience includes building out an enterprise sales organization while COO at VCE, the VMware-Cisco-EMC joint venture that produced Vblock converged infrastructure systems.

Datrium DVX first hit the market in early 2016 with server-based flash cache to accelerate data reads and separate data nodes for back-end storage. DVX software orchestrates and manages data placement between the Datrium Compute Nodes and Data Nodes and provides storage features such as inline deduplication, compression, snapshots, replication, and encryption.

Separate Compute and Data Nodes
Datrium now pitches its on-premises DVX as converging “tier 1 hyper-converged infrastructure (HCI) with scale-out backup and cloud disaster recovery (DR).” But Datrium DVX is not HCI in the classic sense with virtualization, compute, and storage in the same box. The Datrium DVX system’s Compute Nodes cache active data on the front end, and separate Data Nodes store information on the back end, enabling customers to scale performance and capacity independently.
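
As a rough, generic sketch of how such a split can work (not Datrium's code), the example below caches reads in host-side flash while writing through to a separate data node, so cache capacity and durable capacity can grow independently. The class and method names are invented for illustration.

```python
# Generic sketch of a split compute/data design: reads are served from a
# host-side cache when possible, writes go through to a separate data node.
# Names are illustrative, not Datrium's software.

class DataNode:
    """Stand-in for durable, shared back-end storage."""
    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]


class ComputeNodeCache:
    """Host-side read cache backed by local flash (modeled as a dict here)."""
    def __init__(self, data_node, capacity=1024):
        self.data_node = data_node
        self.capacity = capacity
        self._cache = {}

    def write(self, key, data):
        self.data_node.write(key, data)   # durability lives on the data node
        self._admit(key, data)            # keep a local copy for fast reads

    def read(self, key):
        if key in self._cache:            # cache hit: served from host flash
            return self._cache[key]
        data = self.data_node.read(key)   # cache miss: fetch from the data node
        self._admit(key, data)
        return data

    def _admit(self, key, data):
        if len(self._cache) >= self.capacity:
            self._cache.pop(next(iter(self._cache)))  # naive eviction
        self._cache[key] = data
```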

Customers have the option to buy Datrium Compute Nodes, supply their own servers, or use a combination of the two, so long as they’re equipped with solid-state drives (SSDs) to cache data. The compute nodes support VMware, Red Hat and CentOS virtual machines. Disk- or flash-based Datrium Data Node appliances handle the back-end storage.

This year, Datrium added a software-as-a-service Cloud DVX option to back up data in Amazon Web Services (AWS) and CloudShift software for disaster recovery orchestration. The company claimed that more than 30% of its new customers adopted Cloud DVX within the first three months of its availability. Biles said Cloud DVX could lower backup costs in AWS because Datrium globally deduplicates data.
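
The cost argument rests on global deduplication: each unique block is stored once, no matter how many backups reference it. Here is a minimal, generic sketch of content-addressed dedup, assuming a simple hash-indexed block store rather than Datrium's actual implementation.

```python
# Minimal, generic sketch of content-addressed deduplication: each unique
# block is stored once under its hash, and backups reference hashes instead
# of raw data. Illustrative only, not Datrium's implementation.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}          # hash -> block bytes (stored once)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # duplicate blocks cost nothing extra
        return digest

    def get(self, digest: str) -> bytes:
        return self.blocks[digest]

store = DedupStore()
backup_a = [store.put(b"block-1"), store.put(b"block-2")]
backup_b = [store.put(b"block-1"), store.put(b"block-3")]   # "block-1" not stored again
print(len(store.blocks))   # 3 unique blocks back 4 logical references
```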

Biles characterized Datrium’s Series D funding as a “standard round” that will help to grow all parts of the company. He said Datrium currently operates in the United States and, to a lesser degree, in Canada and Japan, and the company plans to expand to Europe next year. Datrium has more than 150 employees and more than 200 customers, according to company sources.

“We have good momentum now, but we want to keep feeding that,” Biles said. He offered no estimate on when the company might become cash-flow positive. “A lot depends on the next couple of years of sales acceleration.”

Samsung Catalyst Fund led the latest funding round, with additional backing from Icon Ventures and prior investors NEA and Lightspeed Venture Partners. Icon’s Michael Mullany, a former VP of marketing and products at VMware, joined Datrium’s board of directors.


September 26, 2018  8:24 AM

Dell EMC, Nutanix share HCI market leadership

Dave Raffo
Storage

Dell EMC extended its lead over Nutanix in hyper-converged systems sales in the second quarter, although Nutanix crept ahead of Dell-owned VMware into first place when the market is measured by HCI software revenue.

That was the verdict from IDC in its worldwide converged systems tracker report released last night.

IDC measures the hyper-converged infrastructure (HCI) market two ways: by the brand of the systems and by the vendor whose software provides the core hyper-converged capabilities. Dell-owned technologies led both HCI market categories in the first quarter with Nutanix second in both. Nutanix, which moved to a software-centric reporting model earlier this year and is getting out of the hardware business, jumped up in software revenue but lost ground to Dell EMC in systems.

Overall, IDC said the HCI market grew 78% year-over-year to $1.5 billion in the second quarter. Dell EMC’s $419 million in revenue gave it 28.8% share. That represented 95.2% year-over-year growth, outgrowing the market. Nutanix placed second in branded revenue with $275.3 million, up 48.5% year-over-year and basically flat from its first-quarter branded revenue of $273 million. Nutanix had 18.9% of the branded revenue, down from 22.7% a year ago and 22.2% in the first quarter of 2018.

On the software side, Nutanix revenue grew 88.9% year-over-year to $498 million, good for 34.2% of the HCI market. It slipped past VMware, which grew 97% year-over-year to $496 million and 34.1% share. IDC considers Nutanix and VMware in a statistical tie because their shares are within one percentage point of each other. VMware’s share jumped from 30.9% in the second quarter of 2017 to 34.1% a year later, but it dropped from 37.2% in the first quarter of 2018, while Nutanix gained share quarter over quarter to reach 34.2% and catch VMware. Dell did receive part of Nutanix’s revenue gains, however, because the Dell EMC XC platform uses Nutanix software through an OEM deal.

Dell had $79 million in HCI software revenue, putting it in a statistical tie with Cisco ($77 million) and Hewlett Packard Enterprise ($72 million). Dell had 5.4% share, Cisco 5.3% and HPE 4.9%, all within one percentage point of each other. Because Cisco and HPE sell their software on their own servers, they had the same revenue and share in systems as in HCI software. HPE had the largest year-over-year growth of any systems vendor, increasing 119.4%. However, Cisco grew more since the first quarter, jumping from $60 million to $77 million and increasing its share from 4.9% to 5.3%. HPE lost share quarter over quarter, slipping from 5% to 4.9%, even as its revenue rose from $61 million to $72 million.

Hyper-convergence was the only one of the three converged markets that increased year-over-year. The certified reference systems/integrated infrastructure market declined 13.9% year-over-year to $1.3 billion in revenue, and integrated platform sales slipped 12.5% to $729 million. Dell led the certified reference systems market with $640 million, with No. 2 Cisco/NetApp at $481 million. Oracle led in integrated platforms with $441 million and 60.4% share. For the first time, HCI is now the largest of the three converged markets.


September 21, 2018  5:18 PM

NetApp Kubernetes Service launched to orchestrate containers

Garry Kranz

NetApp launched its Data Fabric architecture to adapt its storage to manage applications built for the cloud. Container orchestration had largely been a missing piece of Data Fabric, but the vendor has taken a step to plug the gap.

NetApp has acquired Seattle-based StackPointCloud for an undisclosed sum. StackPointCloud has developed a Kubernetes-based control plane, Stackpoint.io, to federate trusted clusters and sync persistent storage containers among public cloud providers.

The first fruit of the merger is the NetApp Kubernetes Service, which the vendor claims will allow customers to launch a Kubernetes cluster in three clicks and scale it to hundreds of users. NetApp said it will levy a surcharge of 20% of the overall compute cost for the cluster to cover deployment, maintenance and upgrades. That equates to about $200 on $1,000 of overall compute.

The NetApp Kubernetes Service engine will allow customers to deploy containers at scale from a single user interface with underlying NetApp storage, said Anthony Lye, a NetApp senior vice president of cloud data services.

The Cloud Native Computing Foundation took over management of Kubernetes development from Google earlier this year. Docker Inc. popularized container deployments and offers its own Docker Swarm orchestration. Other open source container tools include Apache Mesos and Red Hat OpenShift.

NetApp customers will still be able to use their preferred deployment framework, but Lye said Kubernetes is “the clear winner” among container orchestration systems.

He said Stackpoint completes the work NetApp started with its open source dynamic storage-provisioning project for containers, codenamed Trident. NetApp Kubernetes Service is available immediately.

Lye said his internal development teams were using the Stackpoint engine to deploy NetApp storage infrastructure at global cloud data centers run by Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. In addition to the big three, StackPointCloud supports DigitalOcean and Packet clouds.

“My engineers were telling me this was the best thing they’d ever seen, plus the market was telling us that storage and containers need to go together and (enterprises) are using multiple clouds. Those three reasons led us to make the acquisition,” Lye said.

The DevOps trend has been fueled by container virtualization, which is used to write cloud-native applications composed of specialized microservices. Linux-based containers are also gaining attention for their ability to “lift and shift” traditional legacy applications to hybrid cloud environments. Unlike a virtual machine, a container does not require its own copy of a full operating system.

Built on Kubernetes storage classes, NetApp Trident was developed to simplify persistent volume provisioning for ONTAP-based storage, SolidFire and E-Series arrays. Lye said the NetApp Kubernetes Service allows developers to run canary environments to test new applications on mixed nodes of graphics processing units and regular CPUs.
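
To give a sense of how a Trident-style provisioner plugs into Kubernetes storage classes, here is a hedged sketch using the Kubernetes Python client. The provisioner string, class name and backend parameter are assumptions for illustration and may not match any particular Trident release.

```python
# Hedged sketch: register a Kubernetes StorageClass that delegates volume
# provisioning to a Trident-style provisioner. The provisioner string and
# backendType parameter are illustrative assumptions; check the Trident
# documentation for the values your release expects.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="ontap-gold"),   # assumed class name
    provisioner="netapp.io/trident",                   # assumed provisioner string
    parameters={"backendType": "ontap-nas"},           # assumed backend parameter
)

client.StorageV1Api().create_storage_class(body=storage_class)

# Persistent volume claims that reference storageClassName "ontap-gold"
# would then be dynamically provisioned on the ONTAP back end.
```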

StackPointCloud launched in 2014 with bootstrapped funding. The transaction brings CEO Matt Baldwin to NetApp, along with an undisclosed number of StackPointCloud employees.

Stackpoint integration will start with NetApp HCI hyper-converged infrastructure and FlexPod converged systems. The plan is to extend NetApp Kubernetes Service across all of NetApp’s storage, Lye said. “Our strategy is to continue to build tighter connections between our cloud protocols and containers and extend the control plane from the public clouds down to support NetApp HCI or NetApp’s private clouds.”


September 19, 2018  8:40 AM

Cloud storage startup Wasabi Technologies raises $68 million

Carol Sliwa
Storage

Newcomer Wasabi Technologies will try to build up its brand recognition and take on the Big Three public cloud providers after raising $68 million in Series B funding.

“You don’t go up against Microsoft, Google and Amazon with pocket change,” said David Friend, Wasabi CEO and founder.

The Boston-based cloud storage provider launched in 2017 with about $8.5 million in its coffers from a 2016 Series A funding round, when the company was known as BlueArchive. Wasabi added another $10.8 million a few months later through a convertible note that folded into the recently closed $68 million Series B round.

Friend said Wasabi Technologies needs senior sales staff with expertise in vertical markets such as genomics, media and entertainment, and surveillance. The startup also plans to add in-house sales representatives to handle the growing volume of calls. Friend said Wasabi currently employs about 45 and could have 60 to 65 staffers by year’s end.

Friend wants to expand quickly to make it hard for newer competitors to get into the market. He said he took the same approach while CEO at Carbonite, one of the early cloud storage success stories in the consumer and SMB space.

“We’ve got about 3,500 paying customers, so it’s time to really turn up the heat and start building a sales force and the Wasabi brand and all that sort of stuff,” Friend said. “I wanted to start ramping up now that I feel comfortable that the technology is really solid and I could spend a buck on marketing and get four or five bucks back in terms of customer value.”

Wasabi Technologies claims business is growing at a rate of 5% to 10% per week. The startup stores data in its own equipment at colocation facilities in Ashburn, Virginia, and Hillsboro, Oregon. Friend said a new European data center will operate the same way when it opens in the fourth quarter. He declined to disclose the location of the new data center.

Friend said he knows what only about 100 or 200 of Wasabi’s 3,500 customers are doing with the cloud storage. He said the smallest customer might store a few TB, and the largest has 5 PB to 10 PB, with plans to expand to 50 PB to 100 PB.

Wasabi Technologies claims its cloud storage is cheaper and faster than Amazon’s Simple Storage Service, with no egress charges to extract data. Friend said most large customers with significant IT budgets shift data from huge tape libraries or on-premises storage reaching end of life.

Wasabi’s biggest customers are Hollywood movie studios and other media and entertainment companies, Friend said. With those customers in mind, the startup introduced a Direct Connect option in Hollywood and San Jose, California, to transfer data via a high-speed, dedicated pipe. Wasabi Technologies also offers Ball Transfer Appliances that large customers can fill up and ship back, similar to the Snowball option that Amazon has.

Who ponied up for Wasabi Technologies?

Instead of typical venture capital financing, Wasabi’s new funding comes from individual investors and family-run firms such as Forestay Capital, the technology fund of Swiss entrepreneur Ernesto Bertarelli. Friend said 117 investors contributed to Wasabi’s Series B round, including many repeat backers from the Series A funding.

“I started out expecting to raise more like $40 million, but even at $68 [million], I had to turn a whole bunch of people away. People were just flocking in. It was unbelievable,” he said.

New Wasabi investor Bertarelli is worth $8.7 billion, according to Forbes. His family sold biotech firm Serono to Merck for more than $13 billion in 2007, and launched the Waypoint Group five years later. Forestay Capital is Waypoint’s tech fund.

Friend said Wasabi’s investors are engaged and helpful. “Most of them are self-made people who have built businesses of their own, and they’re excited about being part of the company,” he said. “They are opening doors for us at customer sites. In a couple of cases, they’ve helped us recruit some senior people.”


September 18, 2018  1:34 PM

Storage Product of the Year nominations closing soon

Dave Raffo
storage products

I can’t tell you exactly how many storage products have launched in the past year, but I know it was in the hundreds. I can tell you it was more than I can count. That’s because hardly a day goes by when I don’t receive a briefing, press release, or pitch for a briefing from a storage vendor. And the rest of the TechTarget storage editorial staff can tell you the same.

I do know how many storage products won Storage Magazine/SearchStorage Storage Products of the Year awards last year: 14. That’s not many considering all the new products that ship in a year.

Now it’s time to start judging the hundreds of products that came out in 2018, and pick the 14 or so that deserve the honor this year. If one of those products is yours, you can enter it for consideration by our judging panel, made up of our editors, independent storage analysts and end users. You can find the entry form here. The deadline has been extended to Friday, Sept. 28, 2018, at 5:00 p.m. The form also includes the judging criteria and tips for completing the form if you’re new to this. This is the 17th year we’ve been giving these awards, so many of you have been through this before and know how prestigious they are.

Check out this year’s categories: Storage Arrays, Software-Defined/Cloud Storage, Storage Management Tools, Backup and Disaster Recovery Software/Services and Backup and Disaster Recovery Hardware. That pretty much covers the gamut of storage products, so you should be able to find your category. But you have to be in it to win it, so make sure you don’t miss the deadline and get shut out.


September 13, 2018  4:01 PM

Nutanix customers trending larger, spending more on HCI

Dave Raffo
Storage

Besides proving there is plenty of growth still going on in the hyper-converged infrastructure market, Nutanix’s last earnings report showed HCI has moved well beyond VDI and niche applications.

Nutanix closed a $20 million deal last quarter, its largest ever. The unnamed Department of Defense agency uses the Nutanix platform “to power combat edge clouds around the world,” according to Nutanix CEO Dheeraj Pandey. He said the customer will use Nutanix in 15 remote sites.

Then there was a financial services firm that spent $5 million on a Nutanix deal last quarter. The HCI pioneer claimed 23 deals worth more than $1 million in the quarter.

It’s not only Nutanix that’s seeing larger HCI deals. VMware also closed its largest deal for its vSAN HCI platform last quarter, a server refresh spanning 1,200 retail stores and involving Dell EMC VxRail appliances.

“It’s part of the journey of maturation of hyper-convergence,” Pandey said in an interview after the earnings call. “We’re solving big problems for the enterprise. And there are different approaches. The approach we take is, we don’t bundle anything, we don’t have the luxury to bundle things with larger deals that are beyond just hyper-converged. For us, it’s all product and quality and customer service, and how we handle the highest-end workload.”

Nutanix is now nine months into its shift to a software-centric business model. It still sells branded appliances but has flexible licensing options. Customers can choose to spread their software licenses across Nutanix appliances or servers purchased through OEM deals with Dell EMC and Lenovo or other hardware partners.

The change to a software model hasn’t changed Nutanix’s pattern of growing revenue while losing money. Nutanix reported $304 million in revenue last quarter, up 20% year-over-year. It lost $87.4 million, compared with a $66.1 million loss in the same quarter a year ago. For the full year, Nutanix generated $1.16 billion in revenue, up from $845.9 million the previous year. Its full-year loss of $297.2 million was smaller than the $379 million loss the year before.

Nutanix customers now number more than 10,000, including 1,000 added last quarter.

“The biggest change is around consumption,” Pandey said of the move to a software model. “When people have portability, they can take these software licenses and run them on different hardware platforms. Many companies don’t want to buy hardware up front because of Moore’s Law and the commoditization of hardware. They want to buy more software and less hardware up front.”

Nutanix’s consumption model will likely change again over the next year with more cloud-based subscriptions coming. Its Xi Leap – a cloud-based disaster recovery service with one-click failover and failback – is due to be generally available by the end of the year.

Beam, Nutanix’s multi-cloud management dashboard, is available now. And Nutanix in August spent $165 million to acquire Frame, a startup developing desktop-as-a-service technology that Nutanix intends to make available as part of Xi. Pandey said the goal for making Frame available is early 2019.

So now that Nutanix has $1 billion in annual revenue and projects to hit $3 billion in three years, what’s its timeframe for profitability? Pandey refers to the Nutanix strategy as “measured growth,” and says the company funds its spending through free cash flow. Nutanix generated $22.7 million in operating cash flow last quarter.

“Right now we believe Nutanix customers are willing to pay us more, and it’s important that we create the foundation of a customer base that continues to buy from us,” he said. “We’re going to spend a lot of money this year, but we don’t want to touch the bank. How do we do this at cash-flow break-even rather than touching the bank? That’s one guardrail that helps balance the two paradoxes.”


September 13, 2018  8:09 AM

Economic impact of large capacity archival flash

Randy Kerns

Quad-level cell (QLC) flash technology is leading to high-capacity flash devices suited to archival storage and content repositories, where access is primarily reads. Currently at 96 layers, QLC devices are different from the flash devices used in primary storage systems. The opportunity for this technology in archival storage and content repositories has led to the term “archival flash.”

Toshiba, Samsung and Intel will most likely be the first vendors delivering archival flash devices, and other flash device vendors may follow with their own. Archival flash devices are projected to have capacities in the 100 TB range, far higher than almost any SSDs on the market now. While Nimbus Data already ships 100 TB SSDs, those drives do not use QLC and may not offer the much lower cost of the newer technology.

The large capacity is warranted given the primary use of storing archival data. Some in the industry remember the issues in primary storage when disks increased in capacity, raising concerns about rebuild times. Those familiar with the protection against device failures used in object storage systems, the most likely systems to use archival flash, will understand how the circumstances are different. Object storage systems protect against device failures through information dispersal algorithms and erasure codes within a node. Node failures are protected with N+1 node protection using data distributed across those nodes. A site is protected with either replication or geographic dispersion, adding another level of protection to the immutable data stored in object storage systems.
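
To make the protection math concrete, the short sketch below computes usable capacity and failure tolerance for a simple erasure-coded layout. The 12-device and 8-device layouts echo the examples given later in this post; the calculation is generic, not any vendor's implementation.

```python
# Illustrative sketch of erasure-coded protection overhead in an object store.
# The layouts (12 devices tolerating 3 failures, 8 devices tolerating 2)
# mirror the examples used later in this post; the math is generic.

def erasure_profile(total_devices, tolerated_failures):
    """Usable fraction and raw overhead for an n = k + m layout,
    where m devices' worth of capacity goes to protection."""
    data_devices = total_devices - tolerated_failures
    usable_fraction = data_devices / total_devices
    overhead = total_devices / data_devices
    return usable_fraction, overhead

for name, n, m in [("archival flash layout", 12, 3), ("standard device layout", 8, 2)]:
    usable, overhead = erasure_profile(n, m)
    print(f"{name}: {usable:.0%} of raw capacity usable, "
          f"{overhead:.2f}x raw capacity per unit stored, "
          f"survives {m} device failures")
```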

Archival flash economics

Unfortunately, some still look at the economics of storing data through a one-dimensional, data-at-rest lens focused on acquisition cost. Data-at-rest economics effectively says that all data is equal and that low cost is the only value a user gets from acquiring a system. It ignores the fact that different types of devices have very different attributes. The performance value already demonstrated by current SSDs (which do not use QLC technology) should have discredited data-at-rest economics on its own. For archival flash devices, the main value for customers will be longevity: with an expected lifespan of 12 to 15 years, QLC devices change the value calculation.

Data has gravity, meaning that data stored for a period of time tends to persist and incurs costs when moved to new systems. Archival flash longevity will invalidate one-dimensional acquisition-cost economics and require evaluating TCO models that include technology lifespan.
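
As a minimal illustration of why lifespan belongs in the comparison, the calculation below amortizes acquisition cost over device life. The per-TB prices and lifespans are hypothetical, chosen only to mirror the scenario in this post (archival flash priced roughly 50% higher per TB but lasting about three times as long).

```python
# Minimal sketch: amortized acquisition cost per TB per year.
# The per-TB prices and lifespans are hypothetical assumptions, not model data.

def cost_per_tb_year(price_per_tb, lifespan_years):
    """Spread acquisition cost across the device's expected service life."""
    return price_per_tb / lifespan_years

archival_flash = cost_per_tb_year(price_per_tb=150.0, lifespan_years=15)  # hypothetical
standard_disk  = cost_per_tb_year(price_per_tb=100.0, lifespan_years=5)   # hypothetical

print(f"Archival flash: ${archival_flash:.2f} per TB per year")  # $10.00
print(f"Standard disk:  ${standard_disk:.2f} per TB per year")   # $20.00
```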

Longevity requires the storage system to be able to disaggregate the storage devices in an enclosure from the controller function. Disaggregation allows data to remain in place on the longer-lifespan devices while the controller is updated on the faster technology change rate of its processors, adapters and other components. Many vendors have already accomplished this and market the capability through “Evergreen”-style programs. This allows the economics of the different technology change rates to be optimized.

Economic modeling

To show the effect of archival flash longevity in an archival storage system, let’s look at an economic model of total cost of ownership over a 15-year time span. A simplified, hypothetical sketch of such a model follows the parameter list below. The model includes a wide variety of parameters and what-if scenarios:

  • Initial capacity
  • Annual capacity growth rate
  • Lifespan of archival flash device and lifespan of standard storage device
  • Capacity of archival flash device and capacity of standard storage device
  • Costs and requirements used in TCO calculations for administration, deployment, space, power, rack, and cables.
  • Average yearly price declines for archival flash device and standard storage device
  • Average yearly price decline for controller/node
  • Number of devices attached to controller/node
  • Hardware and software average discounts
  • Controller/node costs
  • Software costs – capex and software capacity-based license charges
  • Initial per GB price of archival flash and standard storage devices
  • Device-level protection: the number of devices data is distributed across and the number of segments used for protection (for example, an archival flash default of 12 devices tolerating three device failures, in addition to the protection of data distributed across N nodes; for standard devices, the example would be 8 devices with two used for protection).
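
Here is the simplified, hypothetical sketch promised above. It models only device purchases, capacity growth and end-of-life replacements over the 15-year horizon; the node, software, space, power, discount and price-decline parameters from the full model are omitted, and all figures are invented for illustration.

```python
# Highly simplified, hypothetical sketch of a 15-year device-cost comparison.
# It models only initial purchases, capacity growth and end-of-life
# replacements; all prices, capacities and lifespans are invented.

import math

def device_cost_over_horizon(capacity_tb, growth_rate, device_tb, price_per_device,
                             lifespan_years, horizon_years=15):
    purchase_years = []   # year each in-service device was bought
    total_cost = 0.0

    for year in range(horizon_years):
        # Retire devices that have reached the end of their lifespan.
        purchase_years = [y for y in purchase_years if year - y < lifespan_years]

        # Buy enough devices to hold this year's required capacity; this covers
        # both capacity growth and replacement of retired devices.
        needed = math.ceil(capacity_tb / device_tb)
        new_buys = max(needed - len(purchase_years), 0)
        purchase_years.extend([year] * new_buys)
        total_cost += new_buys * price_per_device

        capacity_tb *= 1 + growth_rate  # required capacity grows each year

    return total_cost

# Hypothetical inputs: 1 PB initial capacity, 20% annual growth, archival flash
# priced ~50% higher per TB than disk but with a three-times-longer lifespan.
archival_flash = device_cost_over_horizon(1000, 0.20, device_tb=100,
                                          price_per_device=15000, lifespan_years=15)
standard_disk = device_cost_over_horizon(1000, 0.20, device_tb=14,
                                         price_per_device=1400, lifespan_years=5)

print(f"Archival flash device spend over 15 years: ${archival_flash:,.0f}")
print(f"Standard disk device spend over 15 years:  ${standard_disk:,.0f}")
```

In the full model, additional savings come from needing fewer data nodes as device capacity rises and from avoided migration, neither of which this sketch captures.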

The first diagram shows the comparison, with the capacity and longevity values displayed on the right side. In this diagram, the acquisition cost of archival flash is 50% higher than that of the standard devices, which in this case are large-capacity disks. For simplicity, price declines are set to zero for this example. An object storage system capable of retiring a node and automatically redistributing data is assumed, which avoids migration costs that would be evident in other storage systems. In this model, data reduction was enabled for both types of devices with equivalent effectiveness. Discounts were set at 30% of list price. Premium support services were not added to the costs. Technology transitions occur in the final year of the lifespan, accounting for overlap between the technologies.

[Chart: Archival Flash Economic Analysis, TCO comparison]

It should be clear that the longevity of archival flash has a dramatic effect compared to the shorter lifespan of standard devices.  This also illustrates the inadequacy of using a one-dimensional, data-at-rest cost of storage measure.  Savings come from having to purchase fewer replacement devices with archival flash over the 15-year span because of the longer lifespan and from the higher capacity with archival flash requiring fewer nodes as the system scales. Increasing capacity of devices over time was not included in the projection because of the limited history with archival flash devices.

The following diagram shows where the economic savings are realized:

[Chart: Overall Archival Flash Economic Analysis]

The chart shows significant cost differences, with the largest contribution coming from the avoidance of acquiring replacement devices.

Looking at the “what-ifs,” the following chart includes changes in some parameters.  Specifically, the device prices were set to be equal and the lifespan of archival flash was set to 15 years.

[Chart: Archival Flash Economic Analysis, what-if scenario]

Another “what-if” adds the price decline experienced over the last 5 years.  Those results show another 8% improvement in costs for archival flash over standard devices.

Given that a large amount of data is retained “forever” in archive and content repository systems, archival flash will have a major impact on the economics of meeting those large capacity requirements.

It will take time to overcome entrenched thinking such as one-dimensional data-at-rest economics and the limited understanding of how large-capacity devices are protected in object storage systems. But that will only delay, not prevent, the rise of archival flash for storing and managing data.


September 6, 2018  11:05 AM

Dell EMC midrange convergence due in 2019

Dave Raffo

The long-awaited convergence of the Dell EMC midrange storage into one platform will happen in 2019, according to Dell Technologies’ chief storage honcho.

Jeff Clarke, vice chairman of products and operations, said on Dell’s earnings call today that the engineering teams from the current Dell EMC midrange platforms are hard at work on the next-generation midrange array. Midrange storage has been a sore spot for Dell’s storage sales since the EMC merger was completed in 2016. Dell EMC storage did have its second straight quarter of strong growth, however, increasing 13% year-over-year to $4.2 billion. Dell EMC has now posted consecutive quarters of double-digit year-over-year growth following a series of share losses during the merger transition.

“We’re pleased but not satisfied,” Clarke said of Dell EMC’s storage performance.

Clarke pointed to increased demand for high-end storage, unstructured data and data protection products as storage highlights. Dell also reported triple-digit growth in hyper-converged infrastructure, but Clarke spent a lot of time talking about his midrange plans.

“I’m pleased with the progress we’ve made, and there’s more to do,” he said. “I’ve been very clear that we still have more midrange product than we’d like long-term. It takes a while to develop a new midrange product, which we are focused on and committed to have next year. In the interim, we’re going to increasingly make the portfolio we have more competitive. Behind the curtains, there’s more developers working together on new technologies and innovating as one single team than we had a year ago.”

He referred to recent upgrades to Dell EMC’s major midrange platforms, the Unity (legacy EMC) and SC (legacy Dell) arrays as steps to make them more competitive. The vendor made the products’ UIs look more alike and now supports CloudIQ predictive analytics on both products, foreshadowing the move to a single platform.

Clarke said Dell EMC VxRail and VxRack hyper-converged systems have accounted for more than $1 billion in revenue since their launch, and are on track for $1 billion for 2018.

Dell executives did not take questions on the privately held company’s plans to sell shares on the public market. However, Dell filed an amended S-4 registration notice this week with the Securities and Exchange Commission (SEC) and is expected to set a date for an initial public offering within a few weeks.


August 30, 2018  5:26 PM

Startup Infinite io ropes in new investors for its cloud NAS controller

Garry Kranz

Sometimes a storage startup makes a bang, fades into the background, and you forget about it – until the company snares a passel of new investors. That was the case with NAS controller specialist Infinite io, a newcomer that wants to shake up traditional file storage.

The Austin, Texas-based network virtualization vendor this week said it has $10.3 million to speed its advance into NAS and cloud, with a special focus on NetApp shops.  The money was provided by a combination of institutional and private investors. Former Motorola CEO and Cleversafe founder Chris Galvin led the round with his son, David Galvin, who runs San Francisco-based Three Fish Capital.

Chris Galvin launched Cleversafe in 2005 and helped pioneer the concept of object storage. IBM acquired Cleversafe for $1.3 billion in 2015 and has adapted the technology as its IBM Cloud Object Storage platform.

Infinite io also obtained institutional funding from Chicago Ventures, Dougherty & Company, Equus Holding and PV Ventures, a venture firm run by X-IO Technologies CEO Bill Miller.  Chicago Ventures is a repeat investor, having furnished Infinite io with $3.4 million in seed funding in 2015.

Another storage industry notable to invest is Dean Drako, the founder and former CEO of data protection specialist Barracuda Networks. Drako now runs cloud-based security vendor Eagle Eye Networks.

Infinite io CEO Mark Cree said the funding will be used to hire engineers, sales reps and operations staff. Six newcomers have been brought aboard this week, Cree said.

“A lot of it will be just getting more feet on the street in sales. The other part of it is that our device is so foreign to anything else out there in storage. We aren’t a file system or additional mount point. What we really are is a big flash meta-database,” Cree said.

Using a hardware gateway to offload data to the cloud is not a new idea. Avere Systems (now part of Microsoft), Nasuni Corp. and Panzura have offered NAS file gateways in the past. There is still demand for such hardware products, but enterprise preferences are changing. More and more data centers prefer to run cloud-based NAS software on industry-standard gear.

Scale-out NAS vendor Qumulo has added a cloud-spanning file fabric to its software-defined storage appliance, and a number of object storage vendors position their products as low-cost, low-latency, high-capacity archives.

Infinite io eschews a native file system while still tackling the growing demand for scale-out storage. The 2U Infinite io Network Storage Controller (NSC) white box serves metadata from DRAM and includes 5 TB of flash to handle software code and large file systems. The product is built on standard x86 hardware and off-the-shelf components.

Customers have the option to purchase it solely as a NAS accelerator or bundle it with Infinite io’s cloud tiering software for back-end object storage. Cree said his company plans to introduce a software-only version on prequalified commodity servers in 2019.

Cree said the product is used by organizations in genomics, government, media and entertainment.

The NSC appliance sits as a bump in the wire, encrypting each payload before it is sent to the storage. The in-band device fronts a NAS filer, but application clients see the NSC as local storage. The transparent control plane serves as a proxy to connect servers and storage. Three nodes are required for failover, and a single cluster can scale to 12 nodes. The cluster connects to any back-end object storage for cloud tiering.

Infinite io’s NAS software inspects all data traffic flowing to the NAS head, which helps it build a metadata library of commonly accessed files. The cloud software automatically shuttles inactive data to the cloud, based on user-defined policies.
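
To make the policy-driven tiering idea concrete, here is a minimal, hypothetical sketch of the kind of decision an inline metadata controller might make. The field names, 90-day threshold and upload call are assumptions for illustration, not Infinite io's actual software.

```python
# Minimal, hypothetical sketch of policy-based cloud tiering driven by
# file metadata. Field names, the 90-day threshold and the upload callable
# are illustrative assumptions, not Infinite io's implementation.

import time
from dataclasses import dataclass

INACTIVE_AFTER_DAYS = 90  # user-defined policy threshold (assumed)

@dataclass
class FileMeta:
    path: str
    last_access: float   # epoch seconds, gathered by watching NAS traffic
    size_bytes: int
    tier: str = "nas"    # "nas" or "cloud"

def select_for_tiering(catalog, now=None):
    """Pick files whose last access exceeds the inactivity threshold."""
    now = now or time.time()
    cutoff = now - INACTIVE_AFTER_DAYS * 86400
    return [f for f in catalog if f.tier == "nas" and f.last_access < cutoff]

def tier_to_cloud(files, upload):
    """Move inactive files to back-end object storage via the supplied callable."""
    for f in files:
        upload(f.path)       # e.g., PUT to an S3-compatible bucket (assumed)
        f.tier = "cloud"     # leave a metadata stub so clients still see the file
```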

Cree has been down this road before. He launched NAS cloud-based caching startup StorSpeed in 2007, and it amassed $13 million before investors turned off the spigot. The company was renamed CacheIQ. NetApp subsequently paid $90 million for the CacheIQ technology in 2012, tucking it into its unified FAS arrays.

Cree started Infinite io with Jay Rolette and Dave Sommers. Rolette is vice president of engineering and formerly was the chief technologist at Hewlett Packard TippingPoint, which was sold to Trend Micro when HP split into two companies in 2015. Sommers is Infinite io’s vice president of operations and a former vice president of engineering at Adaptec, now part of Microsemi.

