Data protection software vendor SIOS Technologies is branching out into IT analytics.
The vendor this week launched SIOS iQ, an analytics platform developed for virtual machines and their infrastructure that collects data and runs algorithms to identify patterns and possible problems. It can be used to troubleshoot or project the effect of changes to the technology.
The initial release works only with VMware hypervisors, but SIOS COO Jerry Melnick said the platform is designed to work with any virtual environment and he expects it to be expanded in future releases. The application tracks performance, efficiency, reliability and capacity metrics in an infrastructure, and alerts customers if it detects a potential problem or a way to make improvements.
Melnick describes SIOS iQ as “a simple way to get answers to difficult questions in a complex environment. A lot of customers have moved into these [virtual] spaces with good intentions, but they’re dynamic environments and they keep getting bigger and bigger.”
Customers download iQ, install it and it works without needing any configuration, Melnick said.
The software isn’t specific to storage, but does look at the storage as well as hosts, VMs, applications and networks.
“Storage is probably the most interesting space,” Melnick said. “There are more issues in that space than the others.”
SIOS iQ’s host-based caching analytics can help improve storage performance. It analyzes blocks written to disk and identifies the read ratio and load profile to identify the VMs and disk that will benefit from caching. The application uses that information to make configuration recommendations on how much cache to add and what size the cache blocks should be, and predicts the performance impact from implementing the recommendations.
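The caching analysis described above can be sketched in simplified form. This is an illustrative heuristic, not SIOS's actual algorithm; the field names, thresholds and sizing rule are all assumptions for the example.

```python
# Hypothetical sketch of a caching-benefit heuristic like the one SIOS iQ
# applies: a VM disk is a caching candidate when reads dominate its I/O mix,
# and the cache is sized to cover the hot working set. All thresholds and
# field names here are illustrative assumptions, not SIOS's implementation.

def recommend_cache(disk_stats, max_cache_gb=64, min_read_ratio=0.7):
    """Return (is_candidate, recommended_cache_gb) for one VM disk.

    disk_stats: dict with 'reads' and 'writes' (I/O counts) and
    'working_set_gb' (estimated hot-data size in GB).
    """
    total_io = disk_stats["reads"] + disk_stats["writes"]
    if total_io == 0:
        return False, 0.0
    read_ratio = disk_stats["reads"] / total_io
    if read_ratio < min_read_ratio:
        return False, 0.0  # write-heavy disks gain little from a read cache
    # Size the cache to cover the hot working set, capped by available cache.
    recommended = min(disk_stats["working_set_gb"], max_cache_gb)
    return True, recommended

# Example: a read-heavy disk (90 percent reads) with a 20 GB hot working set
ok, size = recommend_cache({"reads": 9000, "writes": 1000, "working_set_gb": 20})
```

In this sketch a disk with a 90 percent read ratio and a 20 GB working set would be flagged as a candidate with a 20 GB cache recommendation.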
SIOS iQ also identifies under-used VMs and unnecessary snapshots that can be eliminated to prevent snapshot sprawl.
Other features include performance root cause analysis and advanced analytics for Microsoft SQL Server.
Unlike newer array management software, iQ is not cloud-based, but SIOS plans to automatically deliver product upgrades every four to six weeks in what Melnick calls a SaaS (software as a service) delivery style. SIOS iQ is sold as an annual subscription, with a list price of $150 per host per month.
Hewlett-Packard has added two software applications — one new and one upgraded — to help manage unstructured data.
The new HP Storage Optimizer solution combines file analytics and policy-based storage tiering, while HP ControlPoint helps organizations prioritize which data is migrated to on-premises storage, the cloud, Hadoop or a virtual repository. The idea is to examine the contents for governance and risk assessment.
Storage Optimizer uses file analysis technology from the HP ControlPoint portfolio and works with HP Data Protector technology to handle file analysis and storage management of unstructured data across platforms, including Hadoop, SharePoint, Microsoft Exchange and HP StoreAll unstructured data storage platform. The technology analyzes metadata to determine which information should be offloaded from tier one storage to tier two as a way to manage costs. The goal is to reduce the storage footprint and improve management of data that falls under compliance mandates. Storage Optimizer uses data deduplication across repositories to reduce redundant data.
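The metadata-driven tiering described above follows a familiar pattern. The sketch below is illustrative only, not HP's actual policy engine; the field names and the 90-day idle cutoff are assumptions for the example.

```python
from datetime import datetime, timedelta

# Illustrative sketch (not HP's policy engine) of metadata-driven tiering:
# files untouched for longer than a cutoff are moved off tier-one storage.
# Field names and the 90-day default are assumptions for the example.

def tiering_decisions(files, days_idle=90, now=None):
    """files: iterable of dicts with 'path' and 'last_access' (datetime).
    Returns a {path: 'tier1' or 'tier2'} placement map."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days_idle)
    return {f["path"]: ("tier2" if f["last_access"] < cutoff else "tier1")
            for f in files}

# Example run with a fixed reference date so the result is deterministic
plan = tiering_decisions(
    [
        {"path": "/share/q1-report.docx", "last_access": datetime(2015, 1, 5)},
        {"path": "/share/active.xlsx", "last_access": datetime(2015, 5, 20)},
    ],
    now=datetime(2015, 6, 1),
)
```

Here the report untouched since January lands on tier two, while the recently used spreadsheet stays on tier one.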
HP ControlPoint, which launched two years ago, has been updated to work better with Storage Optimizer and is being positioned for new use cases in file analysis and data migration. It is integrated with the HP Helion cloud via a built-in connector, and migrates only the most relevant data to the cloud rather than moving everything.
“ControlPoint organizes data into categories and groups and based on that makes decisions on that content,” said David Gould, HP’s global director of information governance. “For instance, it is able to recognize data that is a contract so all the contract-based storage is designed through policy to go to certain storage. It allows you to identify content and take action on that content.”
A major problem that IT professionals have dealt with over time is the creation of islands of storage. A common cause of islands is when organizations purchase and deploy new systems with their own storage for a specific purpose.
Storage islands create problems in these areas for administrators:
• Data protection. This requirement is usually assigned to a single group to ensure its completion, manage recovery of information, and make sure business practices are followed. When storage spreads across islands, these tasks become more complex.
• Security. Islands of storage increase the effort required to address security for data-at-rest.
• Inflexible capacity. Islands prevent capacity from being applied to where there is immediate demand.
• Performance. Meeting changing performance demands on storage can be difficult and expensive. Each island of storage must be addressed individually every time a performance challenge arises.
• Cost. The overall costs for managing storage islands can be significant and greater than expected when the islands were created.
The IT answer for islands of storage has been a consolidation to centralized storage, either through storage virtualization or large systems with advanced capabilities for performance, protection and security. Performance is addressed by the larger systems with the ability to manage quality of service and introduce solid-state storage. The economics for consolidation have been proven over time compared to isolated storage.
There are new ideas to “make storage easy” that have become popular but are creating even more islands of storage. Hyper-converged systems, converged systems and virtual SANs, as implemented by many organizations, create islands. It is difficult, if not impossible, to consolidate these systems. They deliver faster deployment and greater simplicity, but they complicate data management. They are often deployed in a manner that leaves IT professionals unable to fulfill the requirements of the business.
The challenges represent opportunities, however, for companies to create new solutions to solve or at least lessen the problems. It is unlikely that the requirements around protection, security, capacity demands, performance, and overall management cost will go away or be redefined out of existence. These will require new solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
One petabyte-scale storage customer traveled to this week’s Red Hat Summit in Boston with dreams of Ceph and Gluster merging into a single product. He hoped Red Hat would take the best pieces of each and “slam them together.” He suggested the vendor could dub the new creation Red Hat Storage, Red Hat Software-Defined Storage or Red Hat Scalable Storage, or “come up with a fun new name.”
“I was waiting for it, and it just didn’t happen,” said Nicholas Gerasimatos, director of engineering at Fair Isaac Corp. (FICO).
“They’re very different, and the way we see Ceph and Gluster is that they’re targeted at different parts of the market,” said Neil Levine, the Red Hat director of product management who laid out Ceph’s long-term roadmap during a conference session this week. “Ceph is Fortune 500, ‘I’m building a huge Amazon-style cloud.’ Gluster is not mid-market, but it’s certainly for customers that have a problem ‘that I need to fix and I don’t have months to set this up; I’ve got like a week.’ ”
Levine said the reason the conversation crops up about combining Ceph and Gluster is “mainly because customers want a file system, which Gluster provides, but then they like the distributed smarts underneath Ceph.”
“It’s something that I don’t think we’re likely to do,” Levine said. “Customers can do it, but it’s not a supported configuration that we’re going to recommend or push. I think if you want our file system, you should use Gluster, and trying to put Ceph underneath it, you’re just giving yourself an operational headache and potentially expense if you’re going to buy those products from us as well.”
Separate communities develop the open source Ceph and GlusterFS projects. Ceph’s community is working on a file system, but CephFS is generally regarded as not ready for enterprise prime time yet. In the meantime, Ceph sees use for block and object storage. Gluster offers file and object capabilities.
“Instead of trying to combine the two products, we will come up with a control plane that makes these two products look consistent,” said Sayan Saha, head of product management for Red Hat Gluster Storage. “Our eventual goal is actually to get rid of the whole concept of Ceph and Gluster.”
Saha said the new control plane could provision, manage, monitor and tune Gluster and Ceph in the same way, “where all you care about is data services for your workloads as opposed to caring about where it is coming from.” Red Hat demonstrated the unified storage management technology at its booth in the conference’s exhibit hall. The company wrote the controller software in the last six or seven months, according to Saha.
“You want virtual block storage. You want file storage. Or, you want object storage. You will be able to come to that controller and request that, and it will be served out to you,” Saha said. “If you choose that you want to do block storage, it will give you Ceph, and then you go to the Ceph provisioning. If you say file, it will say Gluster.”
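The routing behavior Saha describes amounts to a simple dispatch from storage type to product. The class and mapping below are assumptions for illustration, not Red Hat's actual controller API.

```python
# Hypothetical sketch of the routing logic Saha describes: a single control
# plane maps the requested storage type to the product that serves it.
# The class name and mapping are illustrative, not Red Hat's API.

class StorageController:
    BACKENDS = {
        "block": "Ceph",    # virtual block requests route to Ceph
        "object": "Ceph",   # both products offer object; Ceph assumed here
        "file": "Gluster",  # file requests route to Gluster
    }

    def provision(self, kind):
        """Return the backend that will serve a request of the given kind."""
        backend = self.BACKENDS.get(kind)
        if backend is None:
            raise ValueError(f"unknown storage type: {kind}")
        return backend

controller = StorageController()
block_backend = controller.provision("block")  # Ceph serves block requests
file_backend = controller.provision("file")    # Gluster serves file requests
```

The point of such a layer is exactly what Saha states: the requester cares only about the data service, not which product delivers it.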
Red Hat currently recommends Ceph or Gluster based on the workloads the customers intend to run, but there can be overlap between the two products. For instance, Gerasimatos said that when Red Hat visited FICO, one Red Hat group recommended Gluster and another recommended Ceph. FICO engineers ultimately chose Ceph and decided to run the free, open source version of Gluster on top of Red Hat Ceph in cases where they needed a file system.
Craig Hadix, a data center architect for a global systems integrator, said Red Hat’s two-product storage strategy can be confusing. He said he would be surprised if Gluster is still around in a year. He thinks it would be smart to “take the feature set that Gluster provides and integrate it into one storage product that has Ceph and Gluster features.”
But, combining the source code from two distinct software applications can be a bear of a project. Just ask NetApp. The company spent years trying to merge the scale-out NAS software from its Spinnaker Networks acquisition with its Data OnTap operating system.
“NetApp does aggregation for eight years,” Saha said. “There’s no product.”
PernixData executives this week disclosed three upcoming additions to its core FVP software, which clusters server flash and RAM to accelerate I/O and reduce latency.
PernixData claims around 400 customers use FVP to serve read and write I/O requests inside VMware hosts.
Founder and CTO Satyam Vaghani previewed the new products for storage bloggers this week at a Tech Field Day event, and vice president of marketing Jeff Aaron filled in details in a subsequent interview.
FVP Freedom, PernixData Architect and PernixData Cloud are expected to officially launch around the time of VMworld in August, along with FVP 3.0. The additions will enhance FVP, although Freedom and Architect can be used independently.
Freedom is a free version of PernixData’s acceleration software that pools RAM but not flash resources. Freedom will be available on an unlimited number of hosts and VMs, and in clusters with up to 128 GB of memory. Support will be limited to the PernixData community.
“We took read acceleration and said, ‘We’re going to make that free,’” Aaron said. “We know once people feel it and touch it, they’ll want more [and upgrade to a standard FVP license].”
PernixData Cloud is an analytics program that provides insights into customers’ environments.
It collects metadata from all customers using FVP – and in the future Freedom – and shares high-level results with other users to give them an idea of how their environment stacks up.
“We collect data and feed it back to you,” Vaghani said. “Our vision is to share the entire planet’s metadata with each other. You want to know, if you’re running 4,000 hosts and 40,000 virtual machines, what other people who are running 4,000 hosts and 40,000 virtual machines are doing. Before this, there was no good data to share and no good way to share it.”
Or, as Aaron put it, “it’s like we’re crowd-sourcing” to provide information that analysts deliver in reports.
The goal of Architect is to monitor virtual servers and storage devices, and help optimize applications running on them. It does predictive analysis to detect when problems could occur, but can also be set to take prescriptive action to fix problems.
“If a VM goes bad, you get a red blinking light that tells you what went wrong,” Aaron said. “There are tools for monitoring servers and tools for monitoring storage, but nobody ties them together like this. It suggests remediation and can be automated to do the remediation.”
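The monitor-then-remediate pattern Aaron describes can be sketched briefly. This is an illustrative example, not PernixData's implementation; the metric name, threshold and remediation step are all assumed.

```python
# Illustrative sketch (not PernixData's code) of the monitor-then-remediate
# pattern: flag a metric that crosses a threshold, and optionally invoke an
# automated fix. The metric, threshold and remediation action are assumed.

def remediate(vm):
    """Placeholder automated fix, e.g. rebalancing acceleration resources."""
    print(f"rebalancing acceleration resources for {vm}")

def check_vm(metrics, auto_remediate=False, latency_ms_limit=50):
    """metrics: dict with 'vm' and 'latency_ms'. Returns a status string."""
    if metrics["latency_ms"] <= latency_ms_limit:
        return "ok"
    if auto_remediate:
        remediate(metrics["vm"])
        return "remediated"
    return "alert"  # the "red blinking light" case

status_alert = check_vm({"vm": "db01", "latency_ms": 80})
status_fixed = check_vm({"vm": "db01", "latency_ms": 80}, auto_remediate=True)
```

With automation off, the high-latency VM only raises an alert; with automation on, the same condition triggers the remediation step.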
Aaron said that while Architect can run without FVP, it has greater insight into a user’s environment and can fix more problems when used with FVP.
The next version of FVP will include support for VMware vSphere 6 and VVOLs, along with a new user interface and connectivity with Architect. FVP’s VVOL support should be interesting, considering Vaghani helped write the original VVOLs specs in his previous job as VMware’s principal engineer and Storage CTO.
Other items on the PernixData roadmap include support for Microsoft Hyper-V and KVM hypervisors (FVP only supports VMware today) and containers. But those additions are not expected in the next version of FVP.
New NetApp CEO George Kurian today said he has a free hand to make any changes he believes are necessary at the company while giving no indication that any sweeping changes are coming.
Kurian, who replaced Tom Georgens as CEO on June 1, discussed his vision for the company during a webcast hosted by UBS IT hardware analyst Steve Milunovich. Customers, partners and investors looking for the company to go in a new direction will likely be disappointed by Kurian’s insistence that he is looking to execute on the plan put in place before his promotion to CEO.
Although new chairman Mike Nevens said there is a CEO search underway, Kurian is not listed as an interim CEO and he said he is free to make the moves he wants.
“I am operating as CEO of the company and I do not feel constrained in any manner as far as making changes to necessitate success,” he said.
NetApp revenue has declined for the past two years and the company forecasts little or no growth this year. But if Kurian plans any changes, he kept them to himself today while taking questions from Milunovich and others on the call. He said NetApp will continue to embrace hybrid clouds, Data OnTap storage software and three all-flash array platforms.
When asked why the CEO change was made and what he will do differently than Georgens, Kurian said he would “translate the strategy we have into more vigorous execution.”
Kurian played down suggestions from several questioners that NetApp is too focused on its core Data OnTap software, and that it needed to diversify its products. He said the vendor has moved into hybrid cloud storage, cloud backup, all-flash arrays, object storage and hyper-convergence (through a partnership with VMware) in recent years. He added that he doesn’t think NetApp needs to add more products through acquisitions.
He said it was nothing new to have smaller companies such as Nutanix, Pure Storage and Nimble Storage competing with NetApp, along with large rivals such as EMC and Hewlett-Packard. “We’ve had multiple competitors before. We always worry about competition and how to differentiate ourselves,” he said.
Kurian also proclaimed NetApp the leader in software-defined storage, but said that does not mean it will be releasing software-only versions of its main storage platforms. “The value of software-defined storage is as a consistent way to manage data across a diverse set of hardware,” he said. “You can have the same data management landscape across extreme performance and extreme capacity configurations, remote offices and the hybrid cloud.”
Kurian was grilled on NetApp’s all-flash strategy, which includes all-SSD versions of its FAS enterprise and EF Series high performance computing arrays, and a built-from-the-ground up FlashRay platform that is not yet generally available. Kurian defended that strategy, although NetApp all-flash sales lag those of EMC’s XtremIO, Pure’s FlashArray and IBM’s Flash System.
“If you believe flash is transformative, it will be used in a broad range of use cases,” he said. “One of those is where people want mature data services, enterprise-grade resiliency and performance. OnTap allows customers to have no-compromise use cases. There will also always be a customer that wants extreme performance and the fastest thing in the world. That’s the EF Series. For customers who want a few features but the full capability of a large data center system, that’s where we target FlashRay.”
Kurian was also asked about NetApp’s EVO:RAIL hyper-converged product that it will sell in partnership with VMware. He said hyper-convergence is “a carbon copy of all other parts of the virtualization landscape” and customers will want to seamlessly move workloads and have consistent data protection.
Kurian said he has worked with new Cisco CEO Chuck Robbins during his time at Cisco and during his four years at NetApp, and expects the FlexPod reference architecture partnership between the two vendors to remain strong.
As for what is probably NetApp’s most immediate product concern — customers struggling to upgrade to clustered Data OnTap from the vendor’s standard OnTap — Kurian said the early adopters of the clustered product have been new customers or those who only have one workload to migrate. NetApp’s IT team required months to complete its upgrade to clustered OnTap, but Kurian said that was because the storage migration was done along with other projects, including Microsoft Windows and SQL Server upgrades.
EMC remains the leader in backup appliance sales but its grip on the market is loosening.
While backup appliance revenue increased seven percent year-over-year in the first quarter of this year, EMC revenue fueled mainly by Data Domain dropped more than five percent, according to the latest IDC tracker numbers. Second-place Symantec shaved more than 10 percentage points off EMC’s market share lead following 46 percent growth in the quarter.
EMC still has most of the market share, but its share slipped from 59.2 percent in the first quarter of 2014 to 52.4 percent this year. Symantec jumped from 13.5 percent share last year to 18.5 percent this year after strong growth by its NetBackup appliances. EMC revenue came in at $377 million compared to Symantec’s $133 million as EMC fell below 60 percent share for the first time in a year.
The next five vendors behind Symantec also increased revenue from the previous year. IBM increased seven percent to $37.5 million while its share remained at 5.2 percent. Hewlett-Packard followed at $32.5 million after growing three percent, but its market share slipped from 4.7 percent to 4.5 percent. IDC lists IBM and HP in a statistical tie because their share was within one percent of each other.
Barracuda and Quantum each had a bigger percentage increase than Symantec. Barracuda grew 65 percent year-over-year to $25.2 million, and its market share increased from 2.3 percent to 3.5 percent. Quantum grew 58 percent to $24 million with share increasing from 2.3 percent to 3.5 percent. Dell, in a statistical tie for fifth with Barracuda and Quantum, increased 28 percent to $18.3 million and its share rose from 2.1 percent to 2.5 percent.
All other vendors combined for 10 percent of the market after slipping one percent in revenue from last year. The total market hit $719 million for the quarter and 647 PB shipped for an increase of 32 percent over 2014.
VMware is giving its hardware partners more options for building EVO:RAIL hyper-converged appliances.
VMware’s new configuration options can support approximately 1,600 generic virtual machines (VMs) and 2,400 virtual desktops in an eight-appliance (32-node) cluster, up from 800 generic VMs and 2,000 virtual desktops in the previously supported configuration.
EVO:RAIL hardware options now include dual six-, eight-, 10- or 12-core Intel Haswell or Ivy Bridge CPUs per node, 128 GB to 512 GB of memory per node and two capacity options. The first capacity option is one 400 GB solid-state drive (SSD) with three 1.2 TB hard disk drives per node for 1.6 TB of raw SSD capacity and 14.4 TB of total hard disk capacity per appliance. The second option is one 800 GB SSD and five 1.2 TB hard disk drives per node for 3.2 TB of raw SSD and 24 TB of hard disk capacity per appliance.
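The per-appliance figures follow from EVO:RAIL's four-node appliance design (eight appliances equal 32 nodes, as noted above). A quick check of the arithmetic, assuming four nodes per appliance and the 1.2 TB drives that yield the quoted 24 TB total:

```python
# Sanity-check of the quoted EVO:RAIL capacity math, assuming the standard
# four nodes per appliance. Sizes are per node: SSD in GB, HDDs in TB.

NODES_PER_APPLIANCE = 4

def appliance_capacity(ssd_gb, hdd_count, hdd_tb):
    """Return (raw SSD TB, total HDD TB) per appliance."""
    ssd_tb = ssd_gb * NODES_PER_APPLIANCE / 1000
    hdd_total_tb = hdd_count * hdd_tb * NODES_PER_APPLIANCE
    return round(ssd_tb, 1), round(hdd_total_tb, 1)

option1 = appliance_capacity(400, 3, 1.2)  # -> (1.6, 14.4)
option2 = appliance_capacity(800, 5, 1.2)  # -> (3.2, 24.0)
```

Both quoted options check out: 400 GB of SSD and three 1.2 TB drives per node give 1.6 TB of SSD and 14.4 TB of disk per appliance, while 800 GB of SSD and five 1.2 TB drives per node give 3.2 TB and 24 TB.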
The original EVO:RAIL configuration set last August was for dual six-core Haswell or Ivy Bridge CPUs, 192 GB of memory and the 14.4 TB capacity option.
Mornay van der Walt, vice president of VMware’s EVO:RAIL group, said VMware planned to make these additional options available later this year but moved them up at its partners’ request. Some partners have already started using their own configurations. For example, EMC’s VSPEX Blue EVO:RAIL product launched in February with 128 GB and 192 GB memory options.
“As customers are finding their own swim lanes, we’re getting demand for more configuration options,” van der Walt said. “This was on our roadmap, but given the demand we said, ‘Let’s roll this forward.’”
VMware has yet to make VSAN 6 – the latest version of VSAN – available to EVO:RAIL partners. EVO:RAIL appliances still run VSAN 1.5, with VSAN 6 expected to show up on them in the second half of 2015.
Van der Walt said VMware has sold more than 1,200 VSAN licenses since it launched in 2014.
Disk storage revenue grew 6.8 percent during the first quarter of 2015, with server-based and hyperscale storage picking up the slack for SAN and NAS sales according to IDC’s latest storage tracker numbers.
External storage – SAN and NAS – dropped 0.6 percent from the first quarter of 2014, slipping from $5.64 billion to $5.61 billion. That compares to overall disk sales, which jumped from $8.21 billion in the first quarter of 2014 to $8.77 billion in the first quarter this year. Total capacity shipments increased 41.1 percent year over year to 28.3 exabytes during the quarter.
“Following a busy end-of-year spending environment, the enterprise storage market fell back into what has become a familiar market pattern,” IDC storage research director Eric Sheppard said in a statement. “Spending on traditional external arrays fell during the quarter while demand for server-based storage and hyperscale infrastructure was up strongly during the quarter.”
From a vendor perspective, Hewlett-Packard (HP), Dell and Hitachi Data Systems (HDS) increased revenue year-over-year while sales from EMC, NetApp and IBM decreased. Original design manufacturers (ODMs) selling directly to hyperscale data center customers accounted for 12.6 percent of global spending, and the “others” category made up 30.5 percent of disk storage sales.
In the overall disk market, EMC held on to the No. 1 vendor spot despite a 6.7 percent decline to $1.51 billion for the quarter. EMC’s market share dropped from 20 percent to 17.4 percent. No. 2 HP increased 19.3 percent to $1.28 billion and increased market share from 13.1 percent to 14.6 percent. Dell moved past NetApp into third, growing six percent to $897 million for 10.2 percent share. NetApp revenue fell 10.5 percent to $765 million and it slipped from 10.4 percent market share to 8.7 percent. Fifth-place IBM’s revenue declined 29.3 percent to $525 million and went from nine percent share to six percent.
EMC and NetApp remained the top external disk vendors – all of their revenues are in that category – with EMC at 27.3 percent share and NetApp at 13.6 percent. HP ($512 million) and HDS ($507 million) are in a virtual tie for third with HP holding 9.1 percent share and HDS nine percent. IBM placed fifth with $424 million, falling 12.8 percent and dropping from 8.8 percent share to 7.7 percent. Dell was sixth with $395.1 million, an increase of 2.1 percent over 2014 for seven percent share.
Pentaho software will be part of the Hitachi Scale-Out Platform (HSP) hyper-converged system launched at HDS Connect last month. HDS will also use Pentaho in a Hitachi Unified Compute Platform (UCP) product for SAP HANA and Hadoop to analyze big data.
Pentaho’s standalone and embedded software for analytics, data integration and visualization is a big part of the HDS social innovation strategy the company highlighted during its Connect conference. Hitachi is trying to become a leader in the Internet of Things market, including storage and other infrastructure products.
That means analytics will play a big role in nearly all of its products going forward, said Sarah Gardner, HDS CTO of social innovation.
“Internet of Things solutions are not designed to operate in the background,” she said. “We need to wire them into other parts of the environment. The days of people wanting standalone analytics are gone.”
HDS disclosed its intention to acquire Pentaho in February. Pentaho will operate independently as Pentaho, an HDS company, according to HDS.