Storage Soup


March 2, 2018  11:27 AM

Nutanix CEO: Next, we’ll hyper-converge clouds

Dave Raffo

Nutanix, recently recognized by Gartner as the hyper-convergence leader in the data center, wants to hyper-converge clouds over the next few years.

Nutanix continued its strong growth with a 44% year-over-year revenue increase last quarter. The hyper-converged pioneer also said Thursday that it acquired cloud management startup Minjar to help it bring its hyper-convergence success to the cloud.

During the vendor’s earnings conference call Thursday night, Nutanix CEO Dheeraj Pandey said the market has reached “the inflection point of the journey of hyper-convergence of different data center tiers on a common operations center using a common software platform.

“In the next few years, we intend to make a similar case for hyper-converging disparate cloud data centers using a common software platform.”

Unlike in the early days of hyper-convergence, Nutanix has plenty of competition in the cloud. Almost every vendor is talking about its multi-cloud capabilities and how it is enhancing them.

In an interview with TechTarget after the earnings call, the Nutanix CEO laid out Nutanix’s strategy for applying the fundamentals of on-premises hyper-convergence to multi-cloud data management. It involves rebuilding the Nutanix Prism management stack, integrating technology from Minjar and the Calm cloud orchestration technology Nutanix bought in 2016.

“With Prism, we hid the hypervisor details,” Pandey said. “We could talk to the underlying VMware, Microsoft Hyper-V or [Nutanix] AHV hypervisor, and with that we were able to deliver software-defined infrastructures to application administrators who were not virtualization experts. The cloud is a similar state. If you stack them, they look like the next-generation of hypervisors. There’s a need for a new Prism-like layer.”

Pandey said Nutanix will bring features such as replication and high availability across clouds, just as it did across all x86 server platforms. “All the hypervisor features in the last 15 years will come together in multi-cloud,” he said.

But Pandey said this scenario is three to five years away from fruition. “It’s misleading to say this will happen in the next few months or so,” he said.

Nutanix reported revenue of $287 million, up from $199 million a year ago. The vendor forecast revenue of $275 million to $280 million this quarter, which compares to $206 million a year ago.

Despite its revenue growth, Nutanix still lost $63 million last quarter. That’s down from a $76 million loss in the same quarter last year. Pandey said he is unconcerned about the losses because Nutanix has positive free cash flow – which is cash flow from operations minus capital expenditures. Nutanix had $32 million in free cash flow last quarter compared to $7 million a year ago.

“We’re running the business on free cash flow, like Salesforce or Amazon,” he said. “At the end of the day, it’s about free cash flow and growth. You can use free cash flow to acquire more customers. Repeat business is promising. On average our customer spends 4.5 times as much money as the original deal over the next 18 months. These repeat business patterns mean we should get that first dollar from customers. So why optimize for GAAP profitability when we can live in the world of free cash flow?”
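For readers tracking the metric Pandey cites, free cash flow is a straightforward subtraction: cash generated by operations minus capital spending. Here is a minimal sketch of the calculation in Python; only the $32 million result comes from the quarter above, while the operating cash flow and capital expenditure inputs are hypothetical placeholders, since the post does not break them out.

    # Free cash flow = cash flow from operations - capital expenditures.
    # The two inputs are illustrative placeholders, not reported figures;
    # only the $32 million result matches the quarter described above.
    operating_cash_flow = 45_000_000    # hypothetical
    capital_expenditures = 13_000_000   # hypothetical
    free_cash_flow = operating_cash_flow - capital_expenditures
    print(f"Free cash flow: ${free_cash_flow / 1_000_000:.0f}M")  # Free cash flow: $32M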

Nutanix reported sales through OEM partner Dell accounted for close to 10% of total bookings in the quarter, and included three deals of over $2.5 million. Lenovo OEM deals included four deals of more than $1 million, and Nutanix landed its first customers from its OEM deal with IBM last quarter.

Nutanix also reported million-dollar software-only deals with customers running its software on servers from Cisco and Hewlett Packard Enterprise, facilitated through channel partners. Pandey said he is hopeful that these deals could lead to formal partnerships with those server vendors.

“The grassroots is where the rebellion happens,” Pandey said on the call. “The grassroots is the customers, the partners, they’re the ones who’ve been basically saying, look, ‘I love Nutanix and I would like for you to really run it in your servers.’”

Pandey estimated that 35% of Nutanix customers use non-branded appliances, and he expects that to grow to roughly half over the next 18 months. But he said the branded NX appliances will remain a key focus even as Nutanix expands its partnerships.

“I think many customers just want one-stop support for both hardware and software,” he said on the earnings call. “We don’t want to throw the baby out with the bathwater as we go through this transition.”

Pandey said 30% of HCI nodes sold on Nutanix branded appliances include the vendor’s AHV hypervisor along with or in place of VMware or Hyper-V hypervisors.

Pandey said he is not worried about what a possible Dell-VMware reverse merger might do to Nutanix’s complicated relationship with Dell. Dell is Nutanix’s oldest and largest OEM partner, but Dell-owned VMware competes with Nutanix as the leading software for running hyper-converged appliances.

“We have been fielding this question for the last 24 months,” he said. “It’s difficult to speculate but I respect Michael Dell as a leader. He has massive roots in the server business … and they would not want to lose that by not being close to us. There are only two operating systems that are really emerging in this market. One is VMware and one is Nutanix. They’ve gotten closer to VMware, they might become one company. But I think for them to get close to another operating system would be a smart strategy.”

March 2, 2018  12:33 AM

Pure Storage revenue tops $1 billion, says ‘hello’ to profit

Garry Kranz

Pure Storage has joined the billion-dollar club. On Thursday, the all-flash provider hit two long-sought goals, posting fiscal-year revenue of $1.023 billion, up 41%, and achieving a non-GAAP profit.

Pure Storage revenue last quarter shot up 48% year over year to $338.3 million, topping the $332 million consensus estimate. Product sales ($277.4 million) and support revenue ($61 million) were each up 48%. Earnings per share of 13 cents beat Wall Street expectations of 7 cents.

It marked the second straight quarter that actual Pure Storage revenue exceeded the high end of the vendor’s guidance, and it came in two points above the midpoint of the guidance range.

Aiding the vendor’s surge last quarter was the addition of 500 customers. The list includes Advance Financial Corp, Jenny Craig, Mid America Pet Food, the Portland Trail Blazers, Suzuki Motor of America and the Texas Rangers.

Pure claims it now serves more than 4,500 customers, up nearly 50% from a year ago.

“It’s been nearly two decades since an independent company in our industry has reached this revenue scale (this rapidly). We achieved the $1 billion milestone in just over eight years since our founding,” Pure Storage CEO Charles Giancarlo said.

Sales of the all-flash FlashArray accounted for 20% of Pure Storage revenue last year, including demand for the new NVMe-based FlashArray//X. Another highlight, Giancarlo said, was Pure producing a full year of positive operating cash flow ($72.8 million) and free cash flow ($7.7 million).

Although best known for storage hardware, Pure Storage last year added synchronous multisite replication and active-active clustering to boost remote backup. Pure1 Meta analytics was also added, targeting storage for emerging uses in advanced analytics, AI and machine learning.

Pure will try to build on that software-defined product emphasis in 2018, Giancarlo said.

Getting cash-strapped IT customers to move upstream is always a challenge for vendors, but Pure highlighted gains there as well. About 70% of Pure Storage revenue last year stemmed from repeat purchases by existing customers, President David Hatfield said.

“Our increasing focus (to move customers) up market is working. The number of customers that spent more than $1 million with Pure doubled versus last year,” Hatfield said.

For its 2019 fiscal year, Pure Storage issued non-GAAP revenue guidance in the range of $1.31 billion to $1.36 billion.


March 1, 2018  7:29 AM

Komprise NAS migration leaves no data behind

Carol Sliwa

Startup Komprise added NAS migration to its list of data management features in the new 2.7 release of its software.

Komprise Intelligent Data Management customers can now migrate from one NAS system to another, leaving no data behind on the source filer. The new feature targets customers who want to replace or decommission one NAS system in favor of a new one, according to Krishna Subramanian, chief operating officer at Campbell, California-based Komprise.

Subramanian said a future product release would support the migration of data from NFS- or SMB/CIFS-based file storage to on-premises or public cloud-based object storage.

Earlier versions of the Komprise software enabled customers to move data from one NFS- or SMB/CIFS-based NAS system to another NAS system, or to Amazon S3– or Microsoft REST API-compliant object storage, for archival or disaster recovery (DR) purposes. In those scenarios, the source NAS system retained the hot data, and users set policies merely to shift the cold data to another NAS system or to on-premises or public cloud-based object storage, Subramanian said.

She said customer demand spurred the addition of the new NAS-to-NAS migration capabilities in the 2.7 product release.

“Every three to five years, storage is becoming end of life. So companies always have some NAS migration activity happening,” Subramanian said. “They may be retiring a filer, and they have to move that data somewhere else. Or, maybe they are moving it altogether from on premise to cloud. Or, they are moving from cloud back to on premise.”

The NAS migration feature removes the need for Komprise customers to buy third-party tools, hire services or conduct a laborious manual process to move data. Subramanian said Komprise can migrate data between different vendors’ NAS systems and works with any NAS array that supports NFS or CIFS/SMB protocols.

How Komprise NAS migration works
Komprise’s new task-based NAS migration works much like the software’s long-running activities, such as data archiving and data replication for DR. Customers set a policy, the Komprise software makes a copy of the data on the new NAS system and, after confirming the copy is accurate, deletes the data from the source NAS filer. The Komprise software retries in the event the storage is unavailable or the network is down, Subramanian said.
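As a rough illustration of that copy-verify-delete-with-retry flow (this is not Komprise’s code; the function names, checksum step and retry settings are assumptions), a per-file version might look like the following Python sketch, which treats both filers as mounted file paths.

    import hashlib
    import shutil
    import time
    from pathlib import Path

    def checksum(path: Path) -> str:
        # Hash the file so the copy can be verified before the source is removed.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def migrate_file(src: Path, dst: Path, retries: int = 5, wait_seconds: int = 60) -> None:
        """Copy one file to the target filer, verify it, then delete it from the source."""
        for attempt in range(retries):
            try:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)              # copy data plus timestamps and permissions
                if checksum(src) == checksum(dst):  # confirm the copy is accurate
                    src.unlink()                    # leave no data behind on the source filer
                    return
            except OSError:
                pass                                # storage unavailable or network down
            time.sleep(wait_seconds)                # back off, then retry
        raise RuntimeError(f"Migration of {src} failed after {retries} attempts")

A production migration tool would also walk directory trees, preserve ACLs and handle files that change mid-copy; the sketch shows only the verify-before-delete loop with retries.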

The Komprise Intelligent Data Management software runs as a virtual machine, and the software updates automatically. The user interface has a new tab for migration that customers can choose to turn on or off.

Komprise’s new NAS migration capabilities are available to customers at no additional cost. The company charges for its software based on the amount of data under management. List price is $150 per TB for a perpetual license, and annual subscriptions and volume discounts are available, Subramanian said.

Also this month, the Komprise Intelligent Data Management software became certified to operate with Spectra Logic’s BlackPearl Converged Storage System. BlackPearl is an object storage gateway for long-term storage, including tape-, disk- and cloud-based options.

Subramanian said Komprise has about 65 to 70 customers spanning industries such as genomics, health care, media and entertainment, gaming, finance, and insurance.


February 28, 2018  11:38 AM

IT resiliency report: Downtime holding businesses up

Paul Crocetti

The intro to a recent IT resiliency report asks the question: “Is IT turbulence the new norm?”

In a year marred by ransomware attacks large and small, catastrophic natural disasters and other IT outages, that’s a valid question. But how are organizations handling the disruptions and IT resiliency planning?

Generally, not that well. “Overall, IT departments exceeded their maximum tolerance for downtime during a failure, a weakness that must be addressed,” said the Syncsort report, “The 2018 State of Resilience,” which surveyed 5,632 global IT professionals between January 2017 and July 2017 and came out last month.

“It continues to surprise us that organizations feel underprepared to deal with disasters,” said Terry Plath, vice president of global services at Syncsort, an enterprise software provider.

Only half of respondents met their recovery time objectives after a failure, according to the report. Eighty-five percent of professionals had no recovery plan or were less than 100% confident in their plan.

Among IT professionals whose organizations suffered data loss, when asked how much was lost in their most significant incident, 28% said they lost a few hours of data and 31% said they lost a day or more.

The IT resiliency report found that many of the reasons for data loss involved the lack of a quality backup. In order, the top primary reasons for data loss were an old backup copy, human error, lost data residing in memory with no backup made, a malfunction in the data protection platform, and a data protection platform not configured to back up the specific data.

Thankfully, when IT professionals look ahead to the next two years, 45% say high availability/disaster recovery (HA/DR) is a chief IT initiative, second only to security at 49% and just above cloud computing at 43%. And in choosing top IT issues of concern for the coming year, 47% cited business continuity/high availability, 46% cited the ability to recover from disaster and 45% cited security/privacy breaches.

Comprehensive IT resilience planning needed

Specifically regarding high availability or disaster recovery, businesses’ top initiatives in the coming year are tuning or reconfiguring the current HA/DR platform, expanding the current HA/DR platform to cover additional servers or data, adopting new technology to augment the current HA/DR, and incorporating cloud or hosting technology into the HA/DR strategy.

“Appropriate staffing, workforce training, better recovery planning and testing are needed to ‘bulletproof’ company systems,” the report said. “This is especially true, since a considerable majority of companies have HA/DR initiatives planned for the coming year.”

Surprisingly, when asked which technologies organizations use for data protection and archiving, just under 50% said tape backup, the second most popular answer behind hardware and storage replication at slightly over half.

Plath recommends several steps for a solid IT resiliency strategy. It starts with defining and documenting the disaster recovery process. Organizations then need to have the right tools in place and make sure that recovery time objectives and recovery point objectives are acceptable. They must ensure that IT staff or the third-party support providers are trained and up to speed on the DR platform.

And then there’s resilience testing. Organizations should take the time to schedule HA/DR switch tests. Companies should also have run books in place that define what needs to happen in a disaster, but many don’t, Plath said.

The IT resiliency study is a continuation of the annual survey conducted for the last 10 years by Vision Solutions, now part of Syncsort.


February 23, 2018  9:33 AM

More Nimble, HPE storage revenue jumps 24%

Dave Raffo

Hewlett Packard Enterprise storage sales rebounded last quarter, thanks to the addition of Nimble Storage and improved sales of the 3PAR SAN platform.

HPE storage revenue of $948 million grew 24% year over year last quarter. That’s not quite as impressive as it sounds, because most of that growth came from Nimble, which HPE had not yet acquired a year ago. Still, HPE storage revenue increased 11% in an apples-to-apples comparison with a year ago – matching the growth of HPE’s overall revenue in the quarter.

That follows 5% growth in HPE storage revenue in the previous quarter. In the first two quarters of last year, HPE storage revenue declined year over year.

HPE’s flagship 3PAR storage array line rebounded after the vendor brought the 3PAR and Nimble sales teams together into one group. 3PAR revenue increased a few percentage points last quarter. In the previous earnings report, outgoing CEO Meg Whitman had called 3PAR sales “soft.”

Antonio Neri stepped up from HPE president to replace Whitman as CEO this month.

“This was a good quarter for us in storage,” Neri said Thursday night on the HPE earnings call. “Obviously we have the Nimble numbers, but our organic storage grew 11 percent. In previous calls we talked about execution challenges in our go-to-market, particularly in the United States. So this was all about focus and execution.”

Besides bringing more revenue, the Nimble deal gave HPE storage the InfoSight predictive analytics technology. HPE now uses InfoSight in all of its storage arrays, bringing artificial intelligence into systems monitoring and management.

Neri called InfoSight “a game changer for our storage business” and “a major step on our journey to an autonomous data center.”

Flash was another driver for HPE storage growth. HPE execs said all-flash array revenue increased 16% year-over-year. While that’s less of a spike than other large vendors are gaining from flash – NetApp’s all-flash sales increased 50% last quarter – it shows flash is increasingly driving enterprise storage sales.

“We expect to see solid continued performance in storage,” Neri said.

HPE also said its hyper-converged infrastructure revenue grew more than 200% over last year. That figure is also misleading, because HPE hyper-convergence is based on its January 2017 acquisition of SimpliVity. HPE did have a hyper-converged product before SimpliVity but had little success with it, so most of the revenue comes from a product HPE did not have a year ago. Neri said the hyper-competitive hyper-convergence market is now an area of intense focus for HPE.


February 22, 2018  6:40 PM

Pivotal move: Chad Sakac says Dell EMC in ‘rear-view window’

Garry Kranz

The question ‘Where is Chad Sakac?’ got answered this week: by Chad Sakac. The former president of the Dell EMC converged infrastructure division (formerly known as converged platforms) said he will join Dell’s Pivotal Software subsidiary in April to guide development of the Pivotal Container Service.

Sakac appeared to be the odd man out following a February reshuffling of the Dell Infrastructure Solutions Group (ISG). He informed readers of his blog this week that Dell EMC and the converged products are “in my rear-view window.”  He did not specify his new job title, saying only he would be the “glue” among Dell EMC storage, Pivotal Cloud Foundry (PCF) and VMware product development for DevOps customers.

Pivotal Container Service (PKS) stems from VMware’s partnership with Google. The “K” refers to Pivotal’s reliance on the Google Kubernetes orchestration framework.

“My role is to be part of the team that is laser-focused on driving the success of Pivotal Container Service (and the rest of PCF) together with the teams at VMware and Dell EMC. The job is simple – helping make our aligned Dell Technologies developer platform come together…” Sakac said.

Dell Technologies in February realigned ISG, placing VCE converged infrastructure under the servers unit, headed by Ashley Gorakhpurwalla. The hyper-converged infrastructure products, including VxRack/VxRail and the Nutanix XC Series, shifted to the Dell EMC storage division, led by EMC veteran Jeff Boudreau.

Dell EMC’s management moves emerged against a backdrop of other potential changes under consideration. Dell Technologies CEO Michael Dell has acknowledged his company is exploring strategic options to unload debt related to the roughly $60 billion EMC merger in 2016.

Options reportedly being mulled include an initial public offering of the Dell EMC storage business, or allowing VMware to acquire Dell Technologies in a reverse merger.  It’s also possible – some experts say it’s likely, in fact – that Dell will keep its present structure intact. The vendor has not publicly disclosed a timetable for its decision.

Reorganizing ISG appeared to leave Sakac without a clearly defined role. Matt Baker, a Dell EMC senior vice president of strategy and planning, said at the time that Sakac would “continue to be a valuable part of the organization” who was “working hand in glove with the executive team to carve out his next role.”

That role now appears to be defined. On his blog, Sakac said he supports the Dell Technologies reorganization as a way to make converged storage a more useful tool to support development of cloud-native applications.

“The CI business is finding a new gear. In 2017, we lost some ground – but maintained our No. 1 position. That trend changed in the second half – and the CI business and primary storage show that. CI is essential to the primary storage business of Dell EMC. There are thousands of customers who depend on Dell EMC CI to be the foundation of their data center(s) – and have moved up to consume infrastructure,” rather than as building blocks, Sakac wrote.


February 15, 2018  3:52 PM

Intel P4510 U.2 SSD ships with 64-layer TLC 3D NAND

Carol Sliwa

Intel launched a new P4510 Series of U.2 solid-state drives (SSDs) equipped with its 64-layer triple-level cell (TLC) 3D NAND flash and enhanced firmware, enabling greater storage density and lower random read latency than the prior P4500 model.

The enterprise Intel P4510 Series is the first datacenter SSD to use Intel’s latest 64-layer 3D NAND designed for bulk storage. The company began shipping client SSDs with the denser 64-layer flash technology last year.

The Intel P4510 PCI Express (PCIe) SSDs began shipping in 1 TB and 2 TB capacities last year to cloud service providers (CSPs), and Intel is now making 4 TB and 8 TB drives available to CSPs and channel customers. Intel expects to ship the new P4510 SSDs to OEM partners later this year.

The prior P4500 model used Intel’s 32-layer TLC 3D NAND technology. The highest capacity available for the P4500 SSDs in the U.2 form factor was 4 TB. Intel also lists 8 TB P4500 options in the new “ruler” form factor, named for its long, thin shape.

The latest Intel P4510 PCIe SSDs come in a 2.5-inch, 15-mm U.2 form factor. Intel plans to add lower-power 110-mm M.2 and 7-mm U.2 P4511 datacenter SSD options later this year.

Intel claimed the new P4510 SSD boosts sequential write bandwidth up to 90% over the older P4500 model and improves quality of service up to 10 times. Firmware enhancements allowing granular I/O prioritization help the Intel P4510 SSD to cut mixed workload latency by up to two times and read workload latency by up to 10 times, according to Intel.

The Intel P4510 Series supports non-volatile memory express (NVMe) 1.2, with four PCIe 3.1 lanes, as well as the NVMe Management Interface (NVMe-MI) for operational insight.

Hot-pluggable U.2 SSDs
Intel enabled additional management and serviceability features in the P4510 SSDs through its Volume Management Device (VMD) and Virtual RAID on CPU (VROC). These platform-connected technologies facilitate hot-plugging of U.2 SSDs, LED light management to help users locate failed drives, and simplified, accelerated RAID configuration, according to Intel.

Industry-wide, U.2 SSDs currently account for approximately 25% of the unit volume of NVMe PCIe SSDs, and they will become the majority in 2019, according to Greg Wong, founder and principal analyst at Forward Insights.

But Greg Matson, director of SSD strategic planning and product marketing at Intel, said Intel will push the Enterprise & Datacenter SSD Form Factor (EDSFF) 1U Long and 1U Short as the “preferred and optimized” form factors for 3D NAND SSD bulk storage this year. Intel introduced the early version of the EDSFF SSDs as the ruler form factor at last year’s Flash Memory Summit and then worked to get EDSFF standardized.

Matson said 1U Long SSDs allow massive capacity scaling, and the EDSFF SSDs are PCIe 4.0- and PCIe 5.0-ready and support up to 16 lanes.

“While we think U.2 is a pretty darn good form factor for storage, it’s not as good as EDSFF,” Matson said. “We can make much more thermally efficient platforms requiring about half the airflow, and about half the airflow also means less power than the U.2 form factors.”

Matson said three of the four major drive suppliers and several “tier 1” ODMs and OEMs, including Quanta and Supermicro, support EDSFF. He noted that Intel has already shipped ruler SSDs to IBM Cloud and Tencent, one of the largest cloud service providers in China.


February 15, 2018  10:48 AM

NetApp cloud-flash pivot brings cheers

Garry Kranz

NetApp attributed strong product revenue growth last quarter in part to two-a-days – it’s averaging two displacements of rivals’ all-flash SAN systems every day.

Success in NetApp cloud and flash sales fueled a strong quarter, as revenue increased eight percent year-over-year to $1.52 billion. Product revenue of $920 million increased 17% over last year.

On NetApp’s earnings call Wednesday night, CEO George Kurian said the vendor made solid gains with its all-flash arrays, including NetApp FAS, EF and SolidFire storage. Annualized net revenue from all-flash jumped nearly 50% to $2 billion.

Demand for all-flash FlexPod – sold with partner Cisco’s compute and networking – helped to boost converged infrastructure sales by 50%.

“Our growth in all-flash has helped us gain strength in both the SAN and converged infrastructure markets,” Kurian said. “Through our competitive take-out program, we average two SAN displacements per day. That enables us to gain share in the SAN market, and expand wallet share with our existing customers.”

Central to the NetApp cloud strategy is an integrated Data Fabric that allows customers to more easily manage data across local storage and multiple hybrid clouds. NetApp extended the Data Fabric last year to add NFS file storage as a service in the Microsoft Azure public cloud.

Products introduced last quarter are in preview with selected customers, including NetApp Cloud Volumes for Amazon Web Services (AWS) and support for VMware on AWS. Those offerings are expected to be generally available in 2018.

Other product rollouts included a software upgrade for NetApp AltaVault backup, including the addition of Microsoft Azure Archive Blob, plus the introduction of SnapMirror for the SolidFire ElementOS operating system.

Kurian dodged questions on reports that rival Dell EMC is considering strategic options to pay down debt. Although he did not name Dell EMC directly, Kurian said NetApp’s pivot from hardware to cloud infrastructure makes it a more formidable competitor against “our largest competitor.”

Dell EMC has to figure out how to “rationalize a completely confusing product portfolio. They lack a competitive flash offering with a road map to the future, and they’ve got to get a cloud story,” he said.

NetApp took a $506 million loss, the result of an $856 million one-time charge on repatriated capital due to new tax laws.

Kurian said the new tax law will provide “added flexibility” as a result of corporate rates getting slashed to 22%. NetApp plans to bring back an additional $4 billion parked offshore during the next 12 months.

NetApp closed the quarter with $5.6 billion in cash and short-term investments. The higher domestic cash balance is being used to pay down $800 million in bonds it issued last year.

NetApp’s revenue guidance for the fourth quarter ranges between $1.525 billion and $1.675 billion, or an 8% increase year over year at the midpoint.


February 8, 2018  5:29 PM

NVMe flash hopeful E8 Storage goes software-only route

Garry Kranz

All-flash array startup E8 Storage has expanded into reference architectures with the launch of a software-only version of its product.

The new product, E8 Storage Software, runs on rack servers from Dell EMC, Hewlett Packard Enterprise (HPE) and Lenovo. Customers can buy SKUs through channel partners, or purchase the software-defined flash directly from E8 Storage as an integrated nonvolatile memory express (NVMe) appliance.

E8 Storage Software is qualified with the Dell EMC PowerEdge R740xd and R640, Hewlett Packard Enterprise ProLiant DL360 and DL380 Gen10, and Lenovo ThinkSystem SR630. The validated hardware systems need to run Red Hat Enterprise Linux or CentOS 7.3 or higher.

The reference stack includes a server chassis with 32 GB to 64 GB of memory, Intel Skylake processors, 24 hot-swappable NVMe U.2 SSDs, two 128 GB M.2 boot SSDs in a RAID 1 configuration, and two 100 Gigabit Ethernet (GbE) Mellanox ConnectX-4 remote direct memory access network interface cards.

The E8 software-defined flash allows 96 clustered hosts to read and write to shared storage. The vendor’s flagship E8-D24 rack-scale system has dual controllers and scales to 140 TB of effective storage with high-capacity SSDs. E8 Storage also is previewing its E8-X24 block arrays with customers running the IBM Spectrum Scale parallel file system and Oracle Real Application Cluster environments.

The recent addition of host-level mirroring enables E8 Storage to market its software-only flash storage on its S10 entry-level appliance to enterprise customers. The S10 has a single controller and has been used mostly for proofs of concept.

“Customers in database environments want us to fit into their existing disaster recovery environment, rather than running an additional layer. Larger customers may want to do their own integration. We think smaller customers will still want a (turnkey) appliance,” said Julie Herd, E8 Storage director of technical marketing.

The NVMe standard is based on the Peripheral Component Interconnect Express (PCIe) protocol and is designed to squeeze the most performance from flash storage. Rather than running traffic through network host bus adapters, an application uses PCIe to talk directly to storage.

NVMe flash storage is maturing to the point that some industry observers predict an uptick in mainstream adoption in 2018. The NVM Express organization, a consortium of industry partners, is expected to help advance NVMe over Fabrics technologies this year.

E8 Storage and other NVMe flash startups are jockeying for position, while established vendors Hitachi Vantara, IBM and Pure Storage are bringing systems to market built with custom flash modules.

Herd said the reference architecture stacks will help E8 Storage take on more workloads. “We are a block system, so this will help us tackle file-based workloads, and also broadens the market for our channel partners.”


February 7, 2018  4:03 PM

Nutanix leads first hyper-converged Magic Quadrant

Dave Raffo

Gartner now gives hyper-convergence a Magic Quadrant of its own, and places Nutanix as the leader in the upper right-hand corner.

Dell EMC, VMware and Hewlett Packard Enterprise also sit in the leaders’ quadrant with Nutanix in the hyper-converged Magic Quadrant Gartner released this week. So Dell Technologies is also sitting pretty as the owner of Dell EMC and VMware, and an OEM partner of Nutanix.

Previously, Gartner included hyper-converged systems as part of its Magic Quadrant for Integrated Systems.

Gartner defines hyper-converged infrastructure (HCI) as “a category of scale-out software-integrated infrastructure that applies a modular approach to compute, network and storage on standard hardware, leveraging distributed, horizontal building blocks under unified management.” It adds in the hyper-converged Magic Quadrant report that HCI vendors can build their own appliances with off-the-shelf infrastructure, or sell HCI software in partnership with system vendors or resellers/integrators. They can also sell HCI software directly to end users, or as HCI-as-a-service on-premises or in a public cloud.

Gartner credits HCI pioneer Nutanix with overcoming the IT world’s reluctance to invest in a new vendor and with raising confidence in its product’s performance and ability to scale deployments. Gartner puts Nutanix’s customer base at more than 7,800.

Nutanix also won points for its robust management and self-service interface and choice of its KVM-based AHV hypervisor as an alternative to VMware ESXi. Nutanix’s negatives include lack of broad appeal to remote offices, departments, edge implementations and SMBs, according to Gartner.

Dell EMC is ranked high mainly due to its VxRail appliance and VxRack rack-scale system. Those run on Dell PowerEdge servers and integrate with VMware technology and Dell EMC products such as Avamar, Data Domain, RecoverPoint and CloudArray. But Gartner points out that VxRail uses a different software release cycle than VMware and often lags behind the latest version of VMware’s vSAN HCI software.

Gartner ranks VMware vSAN separately from the Dell EMC HCI products, because vSAN is also sold as standalone software and packaged with other vendors’ servers. Gartner said VMware sells the broadest set of hyper-converged systems, but customers must pay extra for features such as deduplication, compression and erasure coding, and vSAN customers have reported performance and stability issues.

HPE bolstered its hyper-converged platform with the February 2017 acquisition of SimpliVity, which is now sold on HPE ProLiant servers. Gartner says HPE has doubled its HCI customer count to around 2,000 since the acquisition. HPE scores points for SimpliVity’s data services, which include backup and disaster recovery capabilities, but is cited for a lack of flexibility because it supports only the VMware hypervisor and all-flash configurations.

Gartner lists Cisco, Pivot3 and Huawei as challengers in the hyper-converged Magic Quadrant, Stratoscale and Microsoft as visionaries, and Scale Computing, DataCore and HTBase as niche players.

