Cisco finally figured out what to do with its Invicta all-flash array acquired from startup Whiptail for $415 million. It killed it off.
Cisco put out an end-of-life announcement last Friday for Invicta, and has stopped taking orders for the array. If you’re one of the few who bought an Invicta array, your final day to renew your service contract is Oct. 19, 2019, with July 31, 2020 designated as the last day of support.
Cisco bought Whiptail in Sept. 2013, but the deal had problems from the start. With close storage partners such as EMC and NetApp pushing into flash storage at the same time, Cisco hesitated to position the Whiptail arrays as storage products. They were rebranded as Invicta, sold by the Cisco UCS server group, and omitted from the Vblocks sold through Cisco’s VCE alliance with EMC and the FlexPod reference architectures with partner NetApp.
However, quality issues prompted Cisco to take Invicta off the market last September with plans to fix the problems and bring it back. That never happened; Cisco confirmed last week that the product is finished.
Not only did Google offer new incentives with its Cloud Storage Nearline service last week to tempt customers to switch from Amazon and Microsoft Azure, it also beefed up its partner ecosystem to make changing cloud platforms easier.
Google added Actifio, CloudBerry Lab, Pixit Media and Unitrends as Google Cloud Platform partners. They join earlier partners Veritas/Symantec, NetApp, Iron Mountain and Geminare.
The partners have integrated Cloud Storage Nearline with disaster recovery, backup, archival and hybrid cloud solutions using Google’s open APIs. For example, copy data management vendor Actifio has mostly focused on selling into enterprise-level environments but now has set its sights on cloud platforms.
Actifio customers will be able to add a Vault profile that lets them move an application directly into Nearline.
Actifio CEO Ash Ashutosh said it makes sense to join the Google Cloud Platform Premier Partners program because “50 percent of our business comes from these cloud service providers.”
Cloud backup vendor CloudBerry allows its managed service provider (MSP) partners to integrate Nearline with all other Google Cloud Platform services using the same unified API.
Unitrends Free for Google Cloud platform will also support Nearline. Unitrends Free is free backup software that deploys as a virtual appliance in VMware vSphere and Microsoft Hyper-V, backing up data locally and connecting to the cloud.
Pixit Media, which sells storage for broadcast companies, has object plug-ins to Google Cloud Storage and Nearline.
EMC NetWorker and Avamar, and CommVault Simpana backup software also allow users to move data to Nearline, as does Egnyte’s file sharing application.
Google is trying to make an aggressive run at Amazon Web Services (AWS) and Microsoft Azure with its Nearline archiving service, plus new services such as the Cloud Storage Transfer Service and the Switch and Save program. Switch and Save offers 100 PB of free storage in Nearline for up to six months for customers who switch from any other cloud provider or on-premise environments.
Google’s Nearline Storage is the answer to Amazon Glacier for cheap, cold storage. A new on-demand I/O service works with Storage Nearline to allow faster recovery for customers with large amounts of data.
EMC executives reduced their 2015 revenue forecasts for the second time this year following a quarter of tepid growth. They also said the vendor will implement plans to cut costs by $850 million a year and shift investment from traditional storage products such as its VMAX and VNX arrays to emerging technologies including flash and software-defined storage.
EMC CEO Joe Tucci also continued to defend keeping the EMC Federation intact instead of spinning off VMware or other significant pieces.
The forecast reduction and spending cuts came out during EMC’s quarterly earnings call. The vendor reported revenue of $6.1 billion last quarter, up three percent year over year. The storage business grew only one percent to $4 billion. Based on those results, EMC now expects $25.2 billion in 2015 revenue, $500 million below its previous guidance and $900 million below its original 2015 forecast.
“The results were mixed. We fell a bit short of revenue expectations,” Tucci said, noting profit of $487 million was a bit better than expected.
The plan to save $850 million annually in cost cuts will be in place by the end of 2016, with $50 million in cuts coming this year, according to CFO Zane Rowe. Tucci and Rowe said some of those savings will be shifted to growth products such as flash and ViPR, ScaleIO and Elastic Cloud Storage.
EMC emerging storage products, which include Isilon clustered NAS along with flash and software-defined storage, increased 49 percent year over year to $718 million. XtremIO grew more than 300 percent.
On the downside, VMAX revenue fell 13 percent to $892 million, and backup and data recovery dropped nine percent to $1.43 billion.
David Goulden, CEO of EMC Information Infrastructure (storage), said he expects traditional storage – VMAX and VNX – to grow two percent annually until 2018, and only about one percent this year.
“We believe the traditional storage market will not improve this year,” he said, adding that EMC will invest in flash, software-defined storage, big data and the cloud “to remain ahead of the market. We will rebalance resources to self-fund growth initiatives.”
Goulden said he expects a new Isilon release and the general availability of the DSSD flash system in the second half of 2015 to help sales. Tucci said, “I’ve never seen a product with as much demand for betas as DSSD.”
Tucci repeated that he is opposed to breaking up the EMC Federation, which includes VMware, Pivotal and RSA Security. Tucci said EMC II and VMware realize twice as much revenue from deals where both companies are involved than when each is in deals alone. He maintains that in the shift to convergence and cloud computing, the combination of companies makes EMC stronger.
“Splitting this federation or spinning off VMware is not a good idea,” he said. “One of the biggest transitions every company has to do is move to the cloud. Data centers are moving to cloud technology, both private and managed clouds. If you are doing that, would you rather do that as just VMware, as just EMC, as just Pivotal? Or are you much stronger doing it together?”
Tucci, who is also the EMC chairman, would not speculate on when EMC would name his successor as CEO. “I don’t want to comment on the timing,” said Tucci, who has postponed his retirement several times. “I am committed to giving the board the time they need to make sure the succession process works terrifically. I don’t want to put a deadline on the board, but they are actively engaged [in the succession process].”
SimpliVity claims its second quarter sales bookings increased 250 percent over last year, fueled by its Cisco partnership and an avalanche of interest in hyper-convergence.
SimpliVity does not give as much information on its financials as rival Nutanix, but is widely believed to be No. 2 behind Nutanix in hyper-convergence market share. The two private companies were pioneers of hyper-convergence, which combines storage, servers, virtualization and networking in one box.
SimpliVity began selling its software and the ASIC that handles data deduplication along with Cisco UCS servers in early 2015. It also continues to sell its OmniCube bundled appliances.
SimpliVity CEO Doron Kempel said sales with Cisco increased four times from the first quarter to the second, although he said Cisco still accounts for less than 20 percent of SimpliVity sales. He said the company has more than 550 customers and 2,000 units in the field.
Kempel said besides the bump from Cisco, SimpliVity also benefitted last quarter from hype around hyper-convergence. That hype is also attracting more competition, though.
“The market is starting to mature. There is much more noise about hyper-convergence,” Kempel said. “Now large vendors are trying to convolute what it means. A lot of analysts are completely confused. They confuse hyper-convergence with convergence.”
Many vendors are pushing converged systems – packaging discrete servers, storage and hypervisors – but they are also getting into hyper-convergence. VMware is among them with its Virtual SAN (VSAN) software, which it makes available to hardware partners through its EVO:RAIL program. Dell, Hewlett-Packard, EMC and Hitachi Data Systems are among large storage vendors who sell EVO:RAIL systems. Dell also sells Nutanix software on its PowerEdge servers through an OEM deal.
Kempel said most of SimpliVity’s deals are still against legacy storage, with about 20 percent coming head-to-head against Nutanix.
He said SimpliVity rarely sees VMware or EVO:RAIL appliances in the field. “The customers we talk to expect hyper-convergence to run tier one applications across sites,” he said. “EVO:RAIL and Nutanix are focused on single sites and VDI.”
Kempel will be talking to a lot more customers personally. Despite the increase in sales, he said he has replaced Mitch Breen as senior vice president of global sales.
“After 19 months with us, he completed his mission and I have assumed the role of global head of sales,” Kempel said. “We are intensifying our go-to-market activities.”
SimpliVity also added Jose Almandoz as SVP of operations and Randi Nichols as VP of human resources to help manage growth that Kempel said amounts to 10 new employees per week. He said he plans to grow from its current 550 employees to around 800 employees.
SimpliVity closed a $175 million funding round in March.
Data protection software vendor SIOS Technologies is branching out into IT analytics.
The vendor this week launched SIOS iQ, an analytics platform developed for virtual machines and their infrastructure that collects data and runs algorithms to identify patterns and possible problems. It can be used to troubleshoot or project the effect of changes to the technology.
The initial release works only with VMware hypervisors, but SIOS COO Jerry Melnick said the platform is designed to work with any virtual environment and he expects it to be expanded in future releases. The application tracks performance, efficiency, reliability and capacity metrics in an infrastructure, and alerts customers if it detects a potential problem or a way to make improvements.
Melnick describes SIOS iQ as “a simple way to get answers to difficult questions in a complex environment. A lot of customers have moved into these [virtual] spaces with good intentions, but they’re dynamic environments and they keep getting bigger and bigger.”
Customers download iQ, install it and it works without needing any configuration, Melnick said.
The software isn’t specific to storage, but does look at the storage as well as hosts, VMs, applications and networks.
“Storage is probably the most interesting space,” Melnick said. “There are more issues in that space than the others.”
SIOS iQ’s host-based caching analytics can help improve storage performance. It analyzes blocks written to disk and identifies the read ratio and load profile to identify the VMs and disk that will benefit from caching. The application uses that information to make configuration recommendations on how much cache to add and what size the cache blocks should be, and predicts the performance impact from implementing the recommendations.
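The kind of analysis described above can be sketched roughly as follows. This is an illustrative example only: the metric names, threshold and sizing rule are invented for the sketch and are not SIOS iQ’s actual algorithm.

```python
# Sketch of read-ratio-based cache sizing. All names and thresholds
# are hypothetical; a real product's analysis is more sophisticated.

def recommend_cache(io_samples, min_read_ratio=0.7):
    """io_samples: dicts with 'vm', 'reads', 'writes', 'working_set_gb'."""
    recommendations = []
    for s in io_samples:
        total = s["reads"] + s["writes"]
        if total == 0:
            continue  # idle VM, nothing to recommend
        read_ratio = s["reads"] / total
        # Read-heavy VMs with a bounded working set benefit most from caching.
        if read_ratio >= min_read_ratio:
            recommendations.append({
                "vm": s["vm"],
                "read_ratio": round(read_ratio, 2),
                "cache_gb": s["working_set_gb"],  # size cache to the hot set
            })
    return recommendations

samples = [
    {"vm": "sql01", "reads": 9000, "writes": 1000, "working_set_gb": 32},
    {"vm": "log01", "reads": 500, "writes": 9500, "working_set_gb": 8},
]
print(recommend_cache(samples))  # only the read-heavy sql01 qualifies
```

The write-heavy VM is skipped because a read cache would see few hits there, which is the general intuition behind load-profile analysis.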
SIOS iQ also identifies under-used VMs and unnecessary snapshots that can be eliminated to prevent snapshot sprawl.
Other features include performance root cause analysis and advanced analytics for Microsoft SQL Server.
Unlike newer array management software, iQ is not cloud-based, but SIOS plans to automatically deliver product upgrades every four to six weeks in what Melnick calls a SaaS (software as a service) delivery style. SIOS iQ is sold as an annual subscription, with a list price of $150 per host per month.
Hewlett-Packard has added two software applications, one new and one upgraded, to help manage unstructured data.
The new HP Storage Optimizer solution combines file analytics and policy-based storage tiering, while HP ControlPoint helps organizations prioritize which data is migrated to on-premise storage, the cloud, Hadoop or a virtual repository. The idea is to examine the contents for governance and risk assessment.
Storage Optimizer uses file analysis technology from the HP ControlPoint portfolio and works with HP Data Protector technology to handle file analysis and storage management of unstructured data across platforms, including Hadoop, SharePoint, Microsoft Exchange and HP StoreAll unstructured data storage platform. The technology analyzes metadata to determine which information should be offloaded from tier one storage to tier two as a way to manage costs. The goal is to reduce the storage footprint and improve management of data that falls under compliance mandates. Storage Optimizer uses data deduplication across repositories to reduce redundant data.
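Metadata-driven tiering of this sort boils down to classifying files by attributes such as last access time. The sketch below is a generic illustration, not Storage Optimizer’s actual policy engine; the 90-day threshold is an invented example.

```python
import time

# Hypothetical policy: files untouched for 90 days become candidates to
# move from tier one to tier two. Generic illustration only.
STALE_SECONDS = 90 * 24 * 3600

def tier_two_candidates(files, now=None):
    """files: dicts with 'path' and 'last_access' (epoch seconds)."""
    now = time.time() if now is None else now
    return [f["path"] for f in files if now - f["last_access"] > STALE_SECONDS]

files = [
    {"path": "/contracts/2013/q1.pdf", "last_access": 0},         # long stale
    {"path": "/reports/today.xlsx", "last_access": time.time()},  # fresh
]
print(tier_two_candidates(files))  # only the stale contract is flagged
```

A real engine would also weigh content classification (e.g. compliance-relevant documents), not just access times.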
HP ControlPoint, which launched two years ago, has been updated to work better with Storage Optimizer and is being positioned for new use cases in file analysis and data migration. It is integrated with the HP Helion cloud via a built-in connector, and migrates only the most relevant data to the cloud.
“ControlPoint organizes data into categories and groups and based on that makes decisions on that content,” said David Gould, HP’s global director of information governance. “For instance, it is able to recognize data that is a contract so all the contract-based storage is designed through policy to go to certain storage. It allows you to identify content and take action on that content.”
A major problem that IT professionals have dealt with over time is the creation of islands of storage. A common cause of islands is when organizations purchase and deploy new systems with their own storage for a specific purpose.
Storage islands create problems in these areas for administrators:
• Data protection. This requirement is usually assigned to a single group to ensure its completion, manage recovery of information, and make sure business practices are followed. When storage spreads to islands, these tasks become more complex.
• Security. Islands of storage increase the effort required to address security for data-at-rest.
• Inflexible capacity. Islands prevent capacity from being applied to where there is immediate demand.
• Performance. Meeting changing performance demands on storage can be difficult and expensive. Each island of storage must be addressed as an individual case every time there is a performance challenge.
• Cost. The overall costs for managing storage islands can be significant and greater than expected when the islands were created.
The IT answer for islands of storage has been a consolidation to centralized storage, either through storage virtualization or large systems with advanced capabilities for performance, protection and security. Performance is addressed by the larger systems with the ability to manage quality of service and introduce solid-state storage. The economics for consolidation have been proven over time compared to isolated storage.
There are new ideas to “make storage easy” that have become popular but are creating even more islands of storage. Hyper-converged systems, converged systems and virtual SANs, as implemented by many organizations, create islands. It is difficult, if not impossible, to consolidate these systems. They deliver faster deployment and greater simplicity, but they complicate data management. They are often deployed in a manner that leaves IT professionals unable to fulfill the requirements of business demands.
The challenges represent opportunities, however, for companies to create new solutions to solve or at least lessen the problems. It is unlikely that the requirements around protection, security, capacity demands, performance, and overall management cost will go away or be redefined out of existence. These will require new solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
One petabyte-scale storage customer traveled to this week’s Red Hat Summit in Boston with dreams of Ceph and Gluster merging into a single product. He hoped Red Hat would take the best pieces of each and “slam them together.” He suggested the vendor could dub the new creation Red Hat Storage, Red Hat Software-Defined Storage or Red Hat Scalable Storage, or “come up with a fun new name.”
“I was waiting for it, and it just didn’t happen,” said Nicholas Gerasimatos, director of engineering at Fair Isaac Corp. (FICO).
And it isn’t likely to happen soon, if ever, according to the heads of product management for Red Hat’s commercially supported versions of open source Ceph and Gluster software.
“They’re very different, and the way we see Ceph and Gluster is that they’re targeted at different parts of the market,” said Neil Levine, the Red Hat director of product management who laid out Ceph’s long-term roadmap during a conference session this week. “Ceph is Fortune 500, ‘I’m building a huge Amazon-style cloud.’ Gluster is not mid-market, but it’s certainly for customers that have a problem ‘that I need to fix and I don’t have months to set this up; I’ve got like a week.’ ”
Levine said the reason the conversation crops up about combining Ceph and Gluster is “mainly because customers want a file system, which Gluster provides, but then they like the distributed smarts underneath Ceph.”
“It’s something that I don’t think we’re likely to do,” Levine said. “Customers can do it, but it’s not a supported configuration that we’re going to recommend or push. I think if you want our file system, you should use Gluster, and trying to put Ceph underneath it, you’re just giving yourself an operational headache and potentially expense if you’re going to buy those products from us as well.”
Separate communities develop the open source Ceph and GlusterFS projects. Ceph’s community is working on a file system, but CephFS is generally regarded as not ready for enterprise prime time yet. In the meantime, Ceph sees use for block and object storage. Gluster offers file and object capabilities.
“Instead of trying to combine the two products, we will come up with a control plane that makes these two products look consistent,” said Sayan Saha, head of product management for Red Hat Gluster Storage. “Our eventual goal is actually to get rid of the whole concept of Ceph and Gluster.”
Saha said the new control plane could provision, manage, monitor and tune Gluster and Ceph in the same way, “where all you care about is data services for your workloads as opposed to caring about where it is coming from.” Red Hat demonstrated the unified storage management technology at its booth in the conference’s exhibit hall. The company wrote the controller software in the last six or seven months, according to Saha.
“You want virtual block storage. You want file storage. Or, you want object storage. You will be able to come to that controller and request that, and it will be served out to you,” Saha said. “If you choose that you want to do block storage, it will give you Ceph, and then you go to the Ceph provisioning. If you say file, it will say Gluster.”
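The routing Saha describes can be pictured as a small dispatch table. The backend names and API shape below are invented for illustration; this is not Red Hat’s actual controller code.

```python
# Minimal sketch of a control plane that routes a storage request to the
# backend that serves it, per the block -> Ceph / file -> Gluster split.
# Function and field names are hypothetical.

BACKENDS = {
    "block": "ceph",    # virtual block storage -> Ceph (e.g. RBD)
    "file": "gluster",  # file storage -> Gluster
    "object": "ceph",   # object storage could plausibly route to either
}

def provision(storage_type, size_gb):
    backend = BACKENDS.get(storage_type)
    if backend is None:
        raise ValueError(f"unknown storage type: {storage_type}")
    # A real controller would now call the backend's provisioning API.
    return {"backend": backend, "type": storage_type, "size_gb": size_gb}

print(provision("file", 100))  # routed to Gluster
print(provision("block", 50))  # routed to Ceph
```

The point of such a layer is exactly what Saha states: the requester cares about the data service, not which product serves it.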
Red Hat currently recommends Ceph or Gluster based on the workloads the customers intend to run, but there can be overlap between the two products. For instance, Gerasimatos said when Red Hat visited FICO, one group told them to use Gluster, and the other group said Ceph. FICO engineers ultimately chose Ceph and decided to run the free, open source version of Gluster on top of Red Hat Ceph in cases where they needed a file system.
Craig Hadix, a data center architect for a global systems integrator, said Red Hat’s two-product storage strategy can be confusing. He said he would be surprised if Gluster is still around in a year. He thinks it would be smart to “take the feature set that Gluster provides and integrate it into one storage product that has Ceph and Gluster features.”
But, combining the source code from two distinct software applications can be a bear of a project. Just ask NetApp. The company spent years trying to merge the scale-out NAS software from its Spinnaker Networks acquisition with its Data OnTap operating system.
“NetApp does aggregation for eight years,” Saha said. “There’s no product.”
PernixData executives this week disclosed three coming additions to its core FVP software, which clusters server flash and RAM to accelerate I/O and reduce latency.
PernixData claims around 400 customers use FVP to serve read and write I/O requests inside VMware hosts.
Founder and CTO Satyam Vaghani previewed the new products for storage bloggers this week at a Tech Field Day event, and vice president of marketing Jeff Aaron filled in details in a subsequent interview.
FVP Freedom, PernixData Architect and PernixData Cloud are expected to officially launch around the time of VMworld in August, along with FVP 3.0. The additions will enhance FVP, although Freedom and Architect can be used independently.
Freedom is a free version of PernixData’s acceleration software that pools RAM but not flash resources. Freedom will be available on an unlimited number of hosts and VMs, and in clusters up to 126 GB of memory. Support will be limited to the PernixData community.
“We took read acceleration and said ‘We’re going to make that free,’” Aaron said. “We know once people feel it and touch it, they’ll want more [and upgrade to a standard FVP license].”
PernixData Cloud is an analytics program that provides insights into customers’ environments.
It collects metadata from all customers using FVP – and in the future Freedom – and shares high-level results with other users to give them an idea of how their environment stacks up.
“We collect data and feed it back to you,” Vaghani said. “Our vision is to share the entire planet’s metadata with each other. You want to know if you’re running 4,000 hosts and 40,000 virtual machines, what other people who are running 4,000 hosts and 40,000 virtual machines are doing. Before this, there was no good data to share and no good way to share it.”
Or, as Aaron put it, “it’s like we’re crowd-sourcing” to provide information that analysts deliver in reports.
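The comparison Vaghani and Aaron describe amounts to ranking one environment against anonymized peer metadata. A toy version, with invented metrics and made-up peer data:

```python
# Toy illustration of crowd-sourced environment comparison: where does
# your host count sit among peers? Peer numbers are made up.

def percentile_rank(value, peer_values):
    """Percentage of peers whose value is below yours."""
    below = sum(1 for v in peer_values if v < value)
    return 100 * below / len(peer_values)

peer_host_counts = [200, 800, 1500, 4000, 4200, 9000]
print(percentile_rank(4000, peer_host_counts))  # half of peers run fewer hosts
```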
The goal of Architect is to monitor virtual servers and storage devices, and help optimize applications running on them. It does predictive analysis to detect when problems could occur, but can also be set to take prescriptive action to fix problems.
“If a VM goes bad, you get a red blinking light that tells you what went wrong,” Aaron said. “There are tools for monitoring servers and tools for monitoring storage, but nobody ties them together like this. It suggests remediation and can be automated to do the remediation.”
Aaron said while Architect can run without FVP, it has greater insight into a user’s environment and can fix more problems when used with FVP.
The next version of FVP will include support for VMware vSphere 6 and VVOLs, along with a new user interface and connectivity with Architect. FVP’s VVOL support should be interesting, considering Vaghani helped write the original VVOLs specs in his previous job as VMware’s principal engineer and Storage CTO.
Other items on the PernixData roadmap include support for Microsoft Hyper-V and KVM hypervisors (FVP only supports VMware today) and containers. But those additions are not expected in the next version of FVP.
New NetApp CEO George Kurian today said he has a free hand to make any changes he believes are necessary at the company while giving no indication that any sweeping changes are coming.
Kurian, who replaced Tom Georgens as CEO on June 1, discussed his vision for the company during a webcast hosted by UBS IT hardware analyst Steve Milunovich. Customers, partners and investors looking for the company to go in a new direction will likely be disappointed by Kurian’s insistence that he is looking to execute on the plan put in place before his promotion to CEO.
Although new chairman Mike Nevens said there is a CEO search underway, Kurian is not listed as an interim CEO and he said he is free to make the moves he wants.
“I am operating as CEO of the company and I do not feel constrained in any manner as far as making changes to necessitate success,” he said.
NetApp revenue has declined for the past two years and the company forecasts little or no growth this year. But if Kurian plans any changes, he kept them to himself today while taking questions from Milunovich and others on the call. He said NetApp will continue to embrace hybrid clouds, Data OnTap storage software and three all-flash array platforms.
When asked why the CEO change was made and what he will do different than Georgens, Kurian said he would “translate the strategy we have into more vigorous execution.”
Kurian played down suggestions from several questioners that NetApp is too focused on its core Data OnTap software and that it needs to diversify its products. He said the vendor has moved into hybrid cloud storage, cloud backup, all-flash arrays, object storage and hyper-convergence (through a partnership with VMware) in recent years. He added that he doesn’t think NetApp needs to add more products through acquisitions.
He said competition was nothing new, whether from smaller companies such as Nutanix, Pure Storage and Nimble Storage or from large rivals such as EMC and Hewlett-Packard. “We’ve had multiple competitors before. We always worry about competition and how to differentiate ourselves,” he said.
Kurian also proclaimed NetApp the leader in software-defined storage, but said that does not mean it will be releasing software-only versions of its main storage platforms. “The value of software-defined storage is as a consistent way to manage data across a diverse set of hardware,” he said. “You can have the same data management landscape across extreme performance and extreme capacity configurations, remote offices and the hybrid cloud.”
Kurian was grilled on NetApp’s all-flash strategy, which includes all-SSD versions of its FAS enterprise and EF Series high performance computing arrays, and a built-from-the-ground up FlashRay platform that is not yet generally available. Kurian defended that strategy, although NetApp all-flash sales lag those of EMC’s XtremIO, Pure’s FlashArray and IBM’s Flash System.
“If you believe flash is transformative, it will be used in a broad range of use cases,” he said. “One of those is where people want mature data services, enterprise-grade resiliency and performance. OnTap allows customers to have no-compromise use cases. There will also always be a customer that wants extreme performance and the fastest thing in the world. That’s the EF Series. For customers who want a few features but the full capability of a large data center system, that’s where we target FlashRay.”
Kurian was also asked about NetApp’s EVO:RAIL hyper-converged product that it will sell in partnership with VMware. He said hyper-convergence is “a carbon copy of all other parts of the virtualization landscape” and customers will want to seamlessly move workloads and have consistent data protection.
Kurian said he has worked with new Cisco CEO Chuck Robbins during his time at Cisco and during his four years at NetApp, and expects the FlexPod reference architecture partnership between the two vendors to remain strong.
As for what is probably NetApp’s most immediate product concern — customers struggling to upgrade to clustered Data OnTap from the vendor’s standard OnTap — Kurian said the early adopters of the clustered product have been new customers or those who only have one workload to migrate. NetApp’s IT team required months to complete its upgrade to clustered OnTap, but Kurian said that was because the storage migration was done along with other projects, including Microsoft Windows and SQL Server upgrades.