Caringo has expanded its Swarm object storage to Microsoft Azure.
The company recently said its object storage software, which already supports Amazon S3, is now available on the Azure cloud, so customers can move applications seamlessly from Amazon S3 to Azure by using the Swarm software.
This capability gives customers another storage tier option, allowing them to tier to the Azure cloud without changing mount points or work flows. Files can be consolidated from all filers into a scalable object storage tier that is accessible via a web-based portal for search while also giving the ability to deploy disaster recovery sites globally. Data can be accessed and managed universally through cloud and file protocols and RESTful APIs.
“We’ve talked to a number of customers that don’t want to be connected just to one vendor,” said Tony Barbagallo, Caringo’s vice president of product. “This is an expansion of our platform. We have customers who use our storage on-premise but also want to replicate to the cloud and customers want multiple storage targets.
“They want a hybrid solution so they can distribute cloud storage or replicate to cloud storage,” Barbagallo said.
Caringo’s object storage manages objects in a flat address space, making it easy to adjust to petabyte scale configurations. Each object is assigned a unique identifier, which allows a server to retrieve it without needing to know the physical location of the data. These characteristics make it a good fit for cloud storage.
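The identifier-based lookup described above can be sketched in a few lines of Python. This is a toy illustration of the flat-namespace idea, not Swarm's implementation; the class name and the use of random UUIDs are assumptions:

```python
import uuid

class FlatObjectStore:
    """Toy model of a flat-namespace object store: no directory
    hierarchy, just unique identifiers mapped to object data."""

    def __init__(self):
        self._objects = {}  # identifier -> bytes; physical location stays hidden

    def put(self, data: bytes) -> str:
        # The store, not the client, assigns the unique identifier.
        object_id = uuid.uuid4().hex
        self._objects[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        # Retrieval needs only the identifier, never a physical path.
        return self._objects[object_id]

store = FlatObjectStore()
oid = store.put(b"hello")
assert store.get(oid) == b"hello"
```

Because clients hold only identifiers, the store is free to move data between nodes or disks without breaking any reference, which is what makes the model easy to grow to petabyte scale.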
This latest capability lets companies migrate Amazon S3-based applications to the Azure cloud through a RESTful interface.
The new Microsoft Azure template can be deployed as a 16 TB, 32 TB or 64 TB Swarm cluster with an SSL Amazon S3 interface. The three sizes require 18, 26 and 42 available processor cores, respectively, and each also needs a Jump-Box virtual machine that adds one more processor core to the requirement.
Once on Swarm, files can be combined and protected in a searchable pool for continued use and complex analysis.
“Azure can offer our operating system as a server running on Azure hardware,” Barbagallo said. “For S3 we have a proxy server. If an application makes an S3 request, it is transferred to our protocol and translated. When the information is sent back, the translation goes the other way.”
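Barbagallo's description amounts to a translate-in/translate-out proxy. The sketch below shows that pattern in Python; the operation names, field names and status codes are invented for illustration and do not reflect Swarm's actual native protocol:

```python
# Hypothetical sketch of a protocol-translation proxy: an S3-style
# request is mapped to a native-protocol call, and the native
# response is mapped back into S3-style terms.

def s3_to_native(request: dict) -> dict:
    """Translate an S3-style request into a (made-up) native request."""
    op_map = {"PutObject": "WRITE", "GetObject": "READ", "DeleteObject": "DELETE"}
    return {
        "op": op_map[request["Action"]],
        "name": f'{request["Bucket"]}/{request["Key"]}',
        "body": request.get("Body"),
    }

def native_to_s3(response: dict) -> dict:
    """Translate a (made-up) native response back into S3-style terms."""
    status_map = {"OK": 200, "MISSING": 404}
    return {"StatusCode": status_map[response["result"]],
            "Body": response.get("data")}

native = s3_to_native({"Action": "GetObject", "Bucket": "b", "Key": "k"})
assert native == {"op": "READ", "name": "b/k", "body": None}
```

The point of the pattern is that the application keeps speaking S3 unchanged; only the proxy knows both dialects.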
In September 2015, Caringo came out with software that lets customers move data back and forth between file-based primary storage and Caringo Swarm object storage software. FileFly for Caringo Swarm is a Windows-based application that plugs directly into the Windows NTFS file system.
FileFly uses policy-based automation to identify and migrate aged data from primary NetApp file servers and arrays running the Windows Storage Server operating system to Caringo Swarm on the back end. No changes are required to applications or end-user workflows.
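The policy-driven identification of aged data that FileFly performs can be approximated with a simple scan: walk the file tree, compare each file's modification time against a policy threshold, and hand anything older to a migration step. This is a hypothetical sketch, not FileFly's actual policy engine; the threshold and the `migrate_to_object_tier` call are invented:

```python
import os
import time

AGE_THRESHOLD_DAYS = 180  # example policy: files untouched for ~6 months

def find_aged_files(root: str, max_age_days: int = AGE_THRESHOLD_DAYS):
    """Yield paths whose modification time exceeds the policy threshold."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

# for path in find_aged_files("/primary/share"):
#     migrate_to_object_tier(path)   # hypothetical back-end call
```

In a real product the migration step would also leave a stub or link behind so existing workflows keep resolving the original path, which is how "no changes to applications" is achieved.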
Barbagallo said Caringo has no immediate plans to support the Google Cloud platform.
“Right now, we have no current plans to expand beyond this,” he said.
Nearly five years into sales of its copy data virtualization appliances, Actifio is finding customers are looking for a simpler way to manage data across those appliances.
Today the vendor launched Actifio Global Manager (AGM) and a new Actifio Report Manager to give customers one HTML5 screen to manage their installed appliances.
AGM lets customers set service level agreement-based management from one interface. Actifio previously had a report manager, but the new one is re-designed with a new engine to create, manage and view reports on application, protection and recovery job performance, and SLA compliance.
Until now, Actifio customers used a desktop user interface to manage one appliance at a time. Chandra Reddy, Actifio’s vice president of product marketing, said the average Actifio enterprise customer uses Actifio to protect 100 TB of data, and some protect petabytes. To protect 100 TB, an organization would need two appliances on site and two more at a remote site for DR. Those four appliances would be managed separately without AGM.
Service providers might use 30 or more appliances, Reddy said.
“Now a customer can go to a single dashboard and get an aggregated view,” he said.
Customers can use AGM to migrate applications between Actifio appliances and to load balance applications across appliances. AGM also allows customers to manage processes such as snapshots, deduplication, live cloning and replication.
“Customers want a single centralized pane of glass to manage, monitor and troubleshoot thousands of protected virtual machines,” Reddy said. “That was the genesis of AGM.”
AGM will require a separate license. For new customers, AGM is free for the first two Actifio appliances and existing customers will not be charged for their first 10 appliances.
Veeam Software has started off the year by making Veeam Availability Suite 9 generally available to customers. The software has more than 250 features, according to Veeam.
While the data protection application is only going GA now, Veeam pre-announced features throughout the past eight months.
“The software has been a year in the making,” said Doug Hazelman, vice president of product strategy at Veeam. “The focus has not only been on adding scalability but enterprise features that the mid-market and enterprise segments are looking for. So version 9 focuses on them.”
One of the main enhancements is integration with EMC VNX and VNXe hybrid storage, which Veeam announced in May 2015. Enterprises will be able to use Veeam Backup from Storage Snapshots to create backups from EMC VNX or VNXe storage snapshots in two minutes or less via Veeam Explorer for Storage Snapshots without the need for intermediate steps.
The Veeam software also adds more granularity and intelligence to the VNX storage snapshots, and Veeam Explorer can recover individual files, restore individual application items from Microsoft Exchange or SharePoint, or recover a single virtual machine. Veeam also added new primary storage integrations with Hewlett Packard Enterprise (HPE), NetApp and EMC.
Last September HPE announced integration of Veeam Backup and Replication with HPE StoreOnce Catalyst. The Veeam software also works with NetApp FAS and FlexArray V-Series so customers can create image-based backups as often as needed with little impact on production environments. Veeam also added enhancements to Veeam Explorer for Oracle and a disaster recovery service powered by Veeam Cloud Connect Replication.
Hazelman said the software had cloud backup in version 8, but the new version includes the ability for service providers to replicate to the cloud so they can provide disaster recovery as a service (DRaaS).
The latest Veeam software also has an unlimited scale-out backup repository.
Avere Systems rolled out new high-end and midrange FXT Edge physical appliances that offer more performance and storage than the systems they replace.
The new 1U appliances are denser than the previous 2U 3850 and 4850 generation, while holding more CPU cores, solid-state drive (SSD) capacity and NVRAM and, in the case of the 5600, more DRAM.
The FXT 5600 holds 9.6 TB of SSD capacity per node and scales up to 480 TB per cluster, double the SSD capacity of the previous FXT model. The FXT 5400 holds 4.8 TB per node and scales up to 240 TB per cluster.
Jeff Tabor, Avere’s senior director of product management and marketing, said the new systems can be clustered up to 50 nodes, with performance scaling across all the nodes. The 5600 can achieve 3.7 GB per second while the 5400 can achieve 2.7 GB per second. Avere is positioning the 5600 as both a capacity and performance system, while the 5400 is more of a capacity play.
“It’s just less expensive for those who don’t need quite as much,” Tabor said. “Our mission is to optimize on-premise infrastructure and enable the ability to move to the cloud.”
Both the 5600 and 5400 include 16 CPU cores and 4 GB of NVRAM. They have four 10 GbE ports and four 1 GbE ports. The 5600 has 384 GB of DRAM and the 5400 has 256 GB. Both FXT 5000 models are 50 percent smaller than the prior FXT systems.
Avere’s FXT Edge filers are scale-out NAS devices built for the hybrid cloud and are used to boost performance for NAS devices. The systems support both NFS and SMB protocols so businesses can store data and run applications on premises or in the cloud with minimal latency.
The systems support Avere’s FlashCloud for Amazon S3, Google, IBM Cleversafe, HGST Amplidata and SwiftStack object storage. Avere layers a file system on object storage to accelerate performance.
In October 2015, Avere Systems expanded its cloud storage strategy with CloudFusion virtual NAS software designed specifically for Amazon Web Services (AWS). The CloudFusion virtual NAS filer is a 64-bit file system that Avere claims can scale to 1 exabyte in the public cloud.
It follows Avere’s 2014 launch of its Virtual FXT Edge Filer, a software-only version of its FXT Edge Filer that works as network-attached storage (NAS) in the Amazon Elastic Compute Cloud (EC2). CloudFusion is based on the same vFXT Edge software technology designed for big data processing and storage in the cloud.
CloudFusion uses three tiers of AWS, storing data on EC2, Elastic Block Storage (EBS) or Simple Storage Service (S3), depending on usage patterns. It supports Network File System and Server Message Block protocols, and comes in two versions — one that supports four virtual CPUs and includes 30.5 GB of DRAM, and a larger version that supports eight virtual CPUs and 61 GB of DRAM. Both support snapshots, compression and data tiering in RAM, solid-state drives and SATA disk drives.
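A tiering decision like the one CloudFusion makes across EC2, EBS and S3 can be expressed as a simple policy function. The access-frequency thresholds below are invented for illustration; Avere has not published its actual rules:

```python
def choose_aws_tier(accesses_per_day: float) -> str:
    """Pick an AWS storage tier by access frequency.
    Thresholds are illustrative assumptions, not Avere's policy."""
    if accesses_per_day >= 100:
        return "EC2 instance storage"   # hot working set: fastest, priciest
    elif accesses_per_day >= 1:
        return "EBS"                    # warm data: attached block volumes
    else:
        return "S3"                     # cold data: cheapest object tier

assert choose_aws_tier(500) == "EC2 instance storage"
assert choose_aws_tier(5) == "EBS"
assert choose_aws_tier(0.01) == "S3"
```

The economics come from the fact that each step down the ladder trades latency for a much lower per-gigabyte price, so demoting rarely touched data dominates the cost savings.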
Flash memory vendor Diablo Technologies named a new CEO and secured $19 million in Series C financing as the company tries to rebuild momentum for its memory technology business after fighting off legal challenges in 2015.
Chairman and CEO Mark Stibitz takes the reins from Diablo founder Riccardo Badalone, who moves into the chief product officer role. Stibitz had been an independent member of Diablo’s board of directors since February 2012. His business management and product development experience spanned start-up and public companies including Anobit, Elliptic Technologies, PMC-Sierra, Agere Systems and Lucent/AT&T-Microelectronics.
Stibitz said he started to work with Badalone on the executive transition in October. “Riccardo is the creative genius behind the innovation in the product, and he wants more time to see that through with the customers,” Stibitz said.
The $19 million Series C capital infusion raised total investments to $77.8 million since Diablo’s 2003 founding. The Ottawa-based startup commenced with $9.8 million and banked $36 million in Series A funding in November 2012 and $13 million in Series B funding in April 2015. Leading the Series C financing is ICV, a new investor, joined by Battery Ventures, BDC Capital, Celtic House, and Hasso Plattner Ventures.
Stibitz said Diablo needed money after battling legal claims from Netlist Inc., a memory module manufacturer that was its former development partner. Netlist’s accusations against Diablo included patent infringement, trade secret misappropriation, breach of contract, and incorrect inventorship.
Diablo claimed victory in March 2015 after a U.S. District Court jury found no breach of contract or misappropriation of trade secrets. The company’s law firm, McDermott Will & Emery LLP, claimed another win in December after the Patent Trial and Appeals Board decided in Diablo’s favor in three inter partes review proceedings on the intellectual property claims.
“The judge did not allow us to conduct business until we went through the legal process. We had high legal fees, and until we got through that, the company was basically put on hold,” Stibitz said. “Part of the amazing piece of the Diablo story is working through that legal process, coming out successful against the claims, clearing our good name and then [maintaining] the incredible support of the investors along the way.”
Stibitz said the Series C funding will enable Diablo to devote its full attention to its Memory1 all-flash DDR4 module and customer deployments. He said the company wants to expand its sales force, customer support team at headquarters and in the field, and research and development arm.
Target customers for Memory1 are hyperscale data center operators and major server OEMs. Stibitz said customers use the flash-based Memory1 module to increase the amount of memory in their server systems at a lower cost than DRAM.
Unlike Memory1’s use of flash as system memory, Diablo’s first product uses flash as high-performance block storage. Stibitz claimed the Memory Channel Storage product was gaining momentum until a judge ordered the company to stop selling it last January. The judge later lifted the restrictions, but Stibitz said, “The pause was tough for us.” Diablo had sold its all-flash Memory Channel Storage through SanDisk and Lenovo under the name ULLtraDIMM.
Stibitz said Diablo faced a choice of updating its DDR3-based Memory Channel Storage product to DDR4 or focusing on the new Memory1 technology. The team chose the latter and pushed off the Memory Channel Storage refresh to this year. Diablo expects to sample the DDR4-based Memory Channel Storage product for top customers in the middle of the year and go into production by year’s end, according to Stibitz.
“We’re rebuilding the momentum the company had previously gained, but we’re doing it around flash as memory,” Stibitz said. “We felt that flash as memory was very innovative, and we wanted to get that out first in the restart process.”
Scality just got a bigger endorsement — and a cash infusion — from Hewlett Packard Enterprise.
The object storage software company said HPE has made an equity investment in the startup and the two companies have forged a tighter partnership in engineering, go-to-market and sales. HPE has had a reseller relationship with the San Francisco-based Scality since October 2014, when the two signed a formal agreement that paired the Scality Ring software with HPE servers.
Now Ring software is available with the HPE storage portfolio.
“The difference between now and before is we didn’t have an equity stake in Scality,” said Patrick Osborne, senior director of product management and marketing, HP Storage Division at HPE. “And the Ring software was only sold with the server line. Now it’s been expanded to the storage channel.”
Scality announced it raised $45 million in funding last August to expand its North American sales force, continue international expansion and build out its reseller channel. The company, which is targeting an IPO in 2017, has raised a total of $80 million since its founding in 2009.
Scality also scored its second major server reseller deal last year when Dell added the object storage vendor to the Blue Thunder program that combines software-defined storage with Dell servers.
Osborne would not disclose how much Hewlett-Packard Ventures has invested in Scality but other reports put the figure at $10 million. Erwan Menard, president and CEO of Scality, said the HPE investment is part of a new D round of funding that brings total funding to $92 million.
Osborne said the Scality partnership will focus on sales that require large capacity, in contrast to the HPE 3PAR SAN array product line, which focuses on latency- and performance-sensitive configurations.
“This is for applications that require capacity optimization,” he said. “Workloads that have unstructured data and rich media create large amounts of static data and need dense servers such as Apollo servers. This creates a great solution for capacity-oriented deployments.”
Menard said the two companies also now have a joint engineering partnership that will focus on leveraging hardware upgrades and enhancements for faster time-to-market releases.
“We are an application that runs on standalone servers so we will be able to leverage hardware innovations very quickly,” Menard said. “People say that no one cares about hardware but customers do care about hardware, especially petabyte-scale customers. Hardware is very important and we will get access to hardware innovations a lot quicker.”
Flash and hybrid storage array vendor Tegile Systems has hired a CFO with experience bringing companies public, and CEO Rohit Kshetrapal said he is aiming Tegile in that direction.
“We don’t see an IPO [initial public offering] as a tomorrow-morning-thing or a day-after-tomorrow-thing,” Kshetrapal said. “We want to put the fundamentals in place.”
Tegile probably won’t go public before 2017, but the new CFO Mike Morgan was hired to start the process. Morgan has already led two IPOs as a CFO and most recently was CFO at cloud storage controller vendor Panzura. He has been a CFO at technology companies since 1991.
Morgan said Tegile “has taken a smart, measured approach to how they grow the business. All the other players in the space have a particular niche – Tintri in virtualization, Pure in all-flash, Nimble on the lower-end side. The market here for Tegile is bigger than for the others because of the breadth of our product line.”
Tegile added all-flash arrays in 2014 to complement the hybrid arrays it began selling in 2012.
Tegile has raised $117.5 million in four funding rounds since 2010. Its investors include venture capital firms August Capital, Meritech Capital, Capricorn Investment Group, Pine River Capital and Cross Creek Advisors, and strategic investors Western Digital and SanDisk.
Kshetrapal said Tegile has more than 1,200 customers and about 35% of them now are using all-flash arrays. He said units shipped increased 350% year-over-year in 2015.
Kshetrapal didn’t disclose revenue, but a source with knowledge of the company said its 2015 revenue was between $30 million and $35 million, depending on the final fourth-quarter results.
Following several years of large funding rounds in storage, venture capitalists are expected to pull back in 2016. That would leave IPOs as the best way for mature private companies to raise cash over the next few years. But storage vendors have had mixed results with IPOs in recent years, so it won’t be easy to pull off.
Morgan and Kshetrapal said they want their bottom line to be in better shape than Pure Storage and Nutanix. All-flash vendor Pure, which went public last year, and hyper-converged vendor Nutanix, which has filed to go public in 2016, have had impressive revenue growth during their histories but still experience large losses every quarter.
“You can’t spend three times your revenue to buy your business,” Morgan said. “Everybody is taking a more sober look at how to grow the business.”
Kshetrapal added: “We want to show both growth and an effective cost structure.”
According to the annual Piper Jaffray CIO survey published this week, 10% of CIOs who say they plan to deploy all-flash arrays this year named Tegile as their preferred vendor. That placed Tegile tied for fourth with Tintri behind EMC, Pure Storage and IBM.
Nimbus CEO and founder Tom Isakovich sent me an e-mail today saying he is still in business and has been working on a new all-flash product.
“Nimbus Data has been very quietly at work on its most ground-breaking all-flash technology yet and soon will be unveiling a battery of new systems and software,” Isakovich wrote. “The all-flash wars have only begun. Less than five percent of storage systems revenue is currently derived from all-flash systems. This is still the very early days of this industry.”
It was the first I heard from Nimbus since the summer of 2014 when the vendor abruptly cancelled a briefing for a new array. The “latest news” on the front page of the Nimbus web site is a press release dated June, 2014. Most industry analysts I talk to regularly thought Nimbus had closed its doors.
Gartner listed Nimbus as a niche player in the all-flash array Magic Quadrant published in June 2015, but the report said “Many customers who approach Nimbus Data and request information, offers, quotations and participation in RFIs and RFPs do not receive an answer.”
Nimbus has always been a lean operation, as Isakovich has not taken any venture funding. Still, he had always kept industry analysts and media informed of product news until suddenly going quiet.
“I understand that being quiet is out-of-the-norm for us, but we will return to our vocal selves soon,” Isakovich said in his e-mail.
Isakovich added that the vendor is still selling its Gemini all-flash arrays, which were last upgraded in May 2014.
Hyper-converged pioneer SimpliVity has opened a research and development office in Seattle near Microsoft, a sign that it is close to adding Hyper-V support.
SimpliVity’s OmniCube appliances combine storage, compute and virtualization. SimpliVity has supported VMware hypervisors since its start in 2013 and added KVM support in 2015 but still lacks support for Hyper-V.
SimpliVity chief marketing officer Marianne Budnik said OmniCubes will support Hyper-V in early 2016. She said the vendor will have around 30 developers in Seattle “working day in and day out to advance all things Microsoft.”
SimpliVity’s goal is for its OmniStack data virtualization platform to support any hypervisor, x86 server, cloud provider and management tool.
Budnik said SimpliVity would not follow rival Nutanix in developing its own native hypervisor, though. Nutanix last June launched its Acropolis hypervisor. That alleviates the need for an outside hypervisor, and also can help Nutanix customers move workloads across different vendors’ hypervisors. Nutanix also supports VMware, Hyper-V and KVM hypervisors.
“We haven’t heard customers ask for yet another hypervisor,” Budnik said when asked if SimpliVity would develop a hypervisor. “We’re focused on simplifying and lowering cost below the hypervisor level. We look forward to working more closely with VMware, Hyper-V and KVM.”
Microsoft has also accepted SimpliVity into the Microsoft Enterprise Cloud Alliance program. That will help SimpliVity integrate faster with the Azure cloud as well as future Hyper-V releases.
“We will develop alongside Microsoft instead of waiting for them to make a major release and then follow later,” Budnik said.
Budnik said more than 80% of SimpliVity customers run Microsoft workloads such as SQL, Exchange or SharePoint on OmniCube appliances.
When the word “archive” is used in conversations about storing data, it brings up preconceptions depending on the individuals and their roles in IT. The most common thought is that an archive is where backup data goes, and the term is associated with backup software and tape. The other thought is that an archive is for data that is not needed anymore and is the place where data goes to die.
These preconceptions are unfortunate and wrong. They foster resistance to implementing or using an archive, and often lead to dismissal of the concept. They also lead to relegating an archive to usage by those who manage the backup process.
Certainly, this limits flexibility for IT usage and leads business owners to believe that an archive means a different type of access or an unacceptable delay in getting access to their information. This attitude does not allow for using an archive as another tier of storage.
These concepts of archiving mistakenly assign a fixed value to the data stored. But the value of data changes: it does not just diminish with time, but may increase in importance. Ultimately, these preconceptions lead to treating the archive as a location for abandoned data.
The term “archive” is both a verb and noun. The noun part – dealing with location – needs to be redefined. Preconceptions are difficult to counter and this has led to use of a new term – “content repository.” That term connotes different types of usage, but mainly it is used to describe another tier of storage with a different cost structure that still provides online access expected by business owners. The content repository can serve as an archive for backup data, as a secondary storage location, and as what is described by some as an “online archive.”
It is difficult to change the preconceptions of what an archive is. The term needs to be redefined to reflect the economic value an online archive can provide. The easiest path may be to start a new discussion about a content repository and explain the usage in each case.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).