Formation Data Systems CEO Mark Lewis has strong opinions on the direction that storage needs to take.
He sees the adoption of on-demand, “as-a-service” cloud models as the future, in contrast to the traditional networked storage model that “so many players out there in startup storage land” continue to follow.
Lewis founded Formation Data Systems in September 2012 after failed attempts to create a “ubiquitous data virtualization layer” at EMC with Invista and at Compaq/Hewlett-Packard with VersaStor. Formation raised $24.2 million in Series A funding in December 2013 from Pelion Ventures, Third Point Ventures, Dell Ventures and Mayfield.
The FormationOne Dynamic Storage Platform is data and storage virtualization software that runs on commodity x86 server hardware, whether bare metal or virtual machines (VMs), at multiple service levels, from archive to tier 1. The objective was to create a “consistent data layer” to enable capabilities such as snapshots, replication and deduplication across blocks, files and objects.
Lewis contrasts Formation’s approach to the model followed by EMC, which he said must write management code separately for siloed platforms such as Data Domain, VNX, Isilon, Symmetrix and XtremIO.
In the following interview excerpts, Lewis addressed some of the hottest technologies:
What is your strategy on hyperconvergence?
Lewis: My belief is that, from a market framework, the storage market in aggregate is going to go through two disruptions. At the entry level, we see hyperconverged, and I would characterize that as Nutanix, SimpliVity, et al., which has been going on for four or five years now. It will do very, very well at the entry to mid-tier and what I call single-application, VDI frameworks because it’s very economical. It will replace a lot of low-end SANs, iSCSI, low-end NFS clusters, stuff like that because at that end, why do you even need storage and servers separated?
We believe at the high end that hyperconverged is not that interesting. When you’re going to need an elastic system that may operate against hundreds of applications, many, many use cases, the idea of converging the ratios of servers, network and storage and having to have it all in one box actually is economically suboptimal. So we believe that with larger scale systems, you really do want to consolidate as you have around networking, compute and storage in elastic deliverable pools because you might start out with a small amount of storage and a large amount of compute and then have to grow the storage or change the networking. And when you have hundreds and hundreds of potentially scale-out applications, those ratios aren’t the same. We believe that the new unified platform storage – we call it dynamic storage – becomes the disruptor for the mid- to high-end market vs. legacy large SANs and whatnot.
Which vendors or technologies have you gone up against with pilot customers?
Lewis: We’ve gotten most of our deal flow through people who have tried Ceph and been unable to be successful there or found that it was far too much work . . . Other than that, we have some people that were presently on Amazon or [Amazon Web Services] AWS, and for scaling and other flexibility reasons want to build some or all of their own data centers. These would be startup software-as-a-service companies.
Then again, it’s less competition and more selection of alternatives. Some will say, ‘Well, I’m just not ready to do anything different.’ And so the alternative is to do nothing. We’ll see how it shakes out.
How do you differ from Ceph and vendors that claim to be software-defined, with the ability to run on any server hardware?
Lewis: By any definition that I’ve seen of the word, we are software defined. I believe that’s kind of like saying we’re defined as being a car or something. It’s accurate but not descriptive or helpful. It’s been so overused. I see people rebranding their old arrays saying, ‘We’re software, and we run an Intel processor in there,’ even though it’s unique, and ‘We’re going to be software-defined.’
We’re different in both technology and customer enlightenment and focus. We are trying to build something that will ultimately get categorized as modern enterprise storage – not technology, not open source.
Ceph started its life as open source software. Really cool stuff. Really technical. But really not very usable within enterprise storage . . . We looked at Ceph as the potential framework for Formation, but it didn’t have the enterprise-type technology we felt was needed. We are trying to appeal to people that need enterprise storage features and still would like to have it done within a private cloud. You have to be able to snapshot, to have quality of service guarantees, multi-tenancy, policy-based management, things like that.
Copy data management (CDM) is a relatively new term for many in information technology. Taken literally, its meaning seems self-evident. In practice, however, it is a broad topic that vendors are addressing with new products and new terminology.
Making copies of data for IT applications is a fundamental task, but the how and why of making them have evolved, and vendors have introduced new products to manage and automate CDM.
The “why” of making copies starts with the basic function of data protection. Protection is from a disaster (which also includes an orchestrated recovery process) or from corruption or deletion due to application, user, or hardware error. The copy can also be used to create a point-in-time record of information for business or governance reasons.
Another reason for making a copy is to use that data for more than just the primary application. This could be for test/development, analytics, or simply because the application owner or administrator feels safer having another copy. Especially in the case of test/development and analytics, another copy insulates the primary application from problems. Besides corruption and deletion, these problems can include potential performance impacts to the primary application.
Making data copies comes at a cost. The different types of copy mechanisms (the “how” of making copies) include making full copies of data or making snapshot copies where only changed data is represented along with the snapshot tables/indexes. The copies can be local, remote or both. Full copies will take time to create and require additional storage capacity. Snapshot copies can grow in capacity over time. All copies not only eventually consume storage space for usage but also consume time and space in backup processes. Copies of data must also be managed, especially snapshots which tend to proliferate.
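The difference between a full copy and a snapshot described above can be sketched in a few lines. This is an illustrative copy-on-write model, not any vendor's implementation: the snapshot costs almost nothing at creation, then grows only as blocks are overwritten.

```python
# Illustrative copy-on-write snapshot: a snapshot starts empty and
# retains only the pre-change contents of blocks that are overwritten,
# which is why snapshot capacity grows over time while a full copy
# pays its entire storage cost up front.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block_id -> current data
        self.snapshots = []          # each snapshot: block_id -> original data

    def snapshot(self):
        self.snapshots.append({})    # empty at creation: near-zero space
        return len(self.snapshots) - 1

    def write(self, block_id, data):
        # Preserve the pre-write contents for any snapshot that has not
        # already captured this block.
        for snap in self.snapshots:
            if block_id not in snap:
                snap[block_id] = self.blocks.get(block_id)
        self.blocks[block_id] = data

    def snapshot_size(self, snap_id):
        return len(self.snapshots[snap_id])   # grows only with changed blocks

vol = Volume({0: "a", 1: "b", 2: "c"})
s = vol.snapshot()
print(vol.snapshot_size(s))   # 0 blocks consumed at creation
vol.write(1, "B")
print(vol.snapshot_size(s))   # 1 block: only the changed data is retained
```

A full copy of the same three-block volume would consume three blocks immediately; the snapshot consumes one block only after the first overwrite, which is the capacity trade-off the paragraph above describes.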
This sets the stage for copy data management with the goal to orchestrate and automate the management of copies of data and to minimize the impacts on capacity utilization and copy actions. There have been two approaches to address CDM: software to manage copies/processes and a combination of software and hardware to create a “golden” copy to leverage for other needs. The details and merits of each require a more involved evaluation. Managing copies has the potential to improve IT operational processes (including disaster recovery) and minimize costs.
There are a number of considerations, however. CDM crosses responsibility areas from an IT perspective.
- The first area to consider is the backup administrator. The administrator often uses deduplication software or hardware to reduce the size of copies, and no longer sees the proliferation of copies as the problem it once was. Why multiple copies are being created does not concern backup administrators, and they are unlikely to champion changes.
- A storage administrator will manage the storage system and that usually includes managing the snapshot, copy and replication functions. A storage administrator is concerned with the amount of space consumed and will utilize snapshots as a means to reduce space requirements without challenging the application owner/administrator on the need for copies.
- Application owner/administrators sometimes make complete copies of data (databases for example) rather than snapshots to fit their usage. Usually, they will not inform the storage administrator about usage as long as there is enough capacity available. Integration with applications for automation enhances the value of CDM.
Snapshot management with tools outside of storage system element managers is a relatively new task for storage administrators. A usable tool is critical for effective adoption and for gaining the administrator’s confidence. The tool manages the lifecycle of a snapshot copy, although the administrator may not think of it in those terms.
Consolidating administration of copies – complete or snapshots, local or remote, including cloud – to a single tool has potentially high value. The more difficult part is making changes in the operational and personnel responsibilities. Those who gain from consolidation of these functions may also influence budgeting for the solution.
CDM represents a new tool and embracing a new tool is sometimes difficult. It does not help that there have been inconsistent descriptions from vendors in their effort to market their solution as unique.
Looking forward, CDM could become part of the element manager for storage systems as an integrated function that works for that specific system. But that approach is probably too limiting to achieve overall value. The process needs to be applied across IT, inclusive of copies at remote or cloud locations. The best way is likely to integrate CDM with overall orchestration software, though that will take a long time given the change required of IT. Meanwhile, we will continue to see individual products that provide value for copy data management.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
EMC says its XtremIO all-flash array has cracked the $1 billion cumulative bookings mark in 588 days – or roughly six quarters – while remaining on track to do $1 billion of business in calendar 2015.
Perhaps because EMC’s traditional storage systems aren’t exactly going like gangbusters these days, it celebrated its billion-dollar baby with a blog post and provided updated 2014 market share numbers from Gartner. The Gartner numbers also stick a pin in Pure Storage’s planned IPO announcement. Pure was demoted from second to third in market share after its S-1 revenue figures came in below what it had led Gartner to believe they were.
EMC said it took VMware five years and Isilon scale-out NAS more than 11 years to hit $1 billion in sales. XtremIO is coming off a $300 million quarter, which included more than 40 orders of $1 million or more; 40 percent of the customers were repeat buyers.
Gartner puts EMC’s 2014 revenue at $443.6 million, giving it 34 percent of the $1.29 billion all-flash array market. IBM was second with $233.3 million and 18 percent share with Pure third at $149.4 million and 11.5 percent. Pure was listed at second with $276.3 million when Gartner released its numbers earlier this year, but Gartner edited its charts to reflect the reported revenue from Pure’s IPO filing.
EMC hails XtremIO’s scale-out performance, inline always-on data services, copy data management and application integration as reasons for its success.
Because you don’t hear copy data management cited often as an all-flash selling point, we asked senior director of XtremIO marketing Andy Fenselau to explain. He said XtremIO’s copy data management comes from its use of in-memory snapshots and metadata.
“For us, a copy is not a traditional back-end storage copy, it’s a simple in-memory copy,” he said. “It’s instant, takes no additional space until new blocks are written, and only unique compressed blocks are written. And it’s full performance. Customers present them to application teams for self-service.”
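The mechanism Fenselau describes can be sketched as a pointer-table copy over a deduplicated block store. This is a hypothetical illustration of the general technique, not XtremIO's actual internals: copying a volume duplicates only metadata, and physical space is consumed only when new unique compressed blocks are later written.

```python
import hashlib
import zlib

# Illustrative metadata-based copy over a content-addressed block store:
# a volume is just a list of fingerprints pointing into a shared store,
# so a "copy" is an instant duplication of that pointer table.

store = {}                      # fingerprint -> compressed block (shared)

def put_block(data: bytes) -> str:
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:         # only unique blocks consume space
        store[fp] = zlib.compress(data)
    return fp

class Volume:
    def __init__(self, pointers=None):
        self.pointers = list(pointers or [])   # metadata only

    def copy(self):
        return Volume(self.pointers)           # instant, no data movement

    def write(self, idx, data: bytes):
        self.pointers[idx] = put_block(data)

prod = Volume()
for chunk in (b"alpha", b"beta", b"gamma"):
    prod.pointers.append(put_block(chunk))

dev = prod.copy()               # zero new blocks consumed by the copy
blocks_before = len(store)
dev.write(0, b"alpha-modified") # one new unique block on divergence
print(len(store) - blocks_before)   # 1
```

The copy is full-performance in this model because both volumes read through the same store; only divergent writes add data, which matches the "no additional space until new blocks are written" behavior quoted above.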
Fenselau said about 51 percent of XtremIO’s revenue comes from customers using it for databases, business applications and analytics. Server virtualization, private cloud and VDI are also common use cases.
“We were expecting a lot of enterprise adoption, but we’re also seeing a wonderful amount of midmarket adoption,” he said.
Object storage vendor Scality has raised $45 million in investment that will be used to expand its North American sales force, continue international expansion and build out its reseller program. The company, which is targeting an IPO in 2017, has raised a total of $80 million since its founding in 2009.
“We started a satellite office in Japan and we will continue to invest there. We started a satellite office in Singapore and we will expand there as well as the existing market in North America and Europe,” said Leo Leung, Scality’s vice president of marketing.
This latest funding round includes a new investor and partner, BroadBand Tower Inc., which is expected to expand Scality’s presence in Japan.
“They believe in the technology and the company,” Leung said. “They were one of the first companies in Japan to push some early trends such as virtualization. Now they are pushing software defined storage.”
Other investors include Menlo Ventures, Idinvest, the Digital Ambition Fund, Iris Capital, Omnes Capital and Galileo Partners. Also, 65 percent of Scality employees participated in the latest Series D funding round. The company has 160 employees worldwide.
Leung said Scality is planning an 80 percent channel-based and 20 percent direct sales force model.
“We have larger system vendors (but) just because we are in the price books doesn’t mean they are actually selling it,” he said. “And we are going to make more investments on the tech side. We do substantial research and most of it is research in technology. There are some new things coming down the road. There are still some very hard problems out there, especially when it comes to multi-geographies and security and interoperability.”
Scality’s Ring software uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. Ring’s core features include replication and erasure coding for data protection, auto-tiering, and geographic redundancies inside a cluster.
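Erasure coding, mentioned above as one of Ring's data-protection features, can be illustrated with the simplest possible scheme: single-parity XOR. Production systems such as Ring use more general codes (for example Reed-Solomon) that survive multiple simultaneous losses, but the rebuild-from-parity idea is the same.

```python
# Minimal single-parity erasure coding (RAID-4 style XOR). One parity
# fragment over k data fragments lets any single lost fragment be
# rebuilt from the survivors.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(fragments):
    """Compute one parity fragment over k equal-size data fragments."""
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_bytes(parity, frag)
    return parity

def rebuild(surviving, parity):
    """Recover a single lost data fragment from survivors plus parity."""
    missing = parity
    for frag in surviving:
        missing = xor_bytes(missing, frag)
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]     # k = 3 data fragments
parity = encode(data)                  # m = 1 parity fragment

# Lose fragment 1; rebuild it from the other fragments plus parity.
recovered = rebuild([data[0], data[2]], parity)
print(recovered)   # b'BBBB'
```

The capacity appeal over replication is the overhead: this layout stores 4 fragments for 3 fragments of data (33 percent overhead), where even two-way replication costs 100 percent.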
NetApp, Brocade and Hewlett-Packard last week all reported storage revenues that were better than expected, impressing Wall Street analysts and investors. Yet in each case their revenue declined from last year. The better-than-expected numbers were achieved because of lowered expectations due to sagging storage sales.
NetApp beat its previous forecast in its first quarter following a CEO change, but its revenue continued to shrink from the previous year as it transitions to its clustered Data OnTap operating system.
NetApp revenue of $1.33 billion decreased 10 percent from last year, although it came in above the mid-point of the vendor’s previous forecast. Product revenue (outside of maintenance and service fees) of $664 million was down 25 percent from last year and 27 percent from last quarter, and below expectations.
“We did what we said we would, but we’re clear that we have a lot more work to do,” new CEO George Kurian said on the earnings conference call with analysts.
Kurian, who replaced Tom Georgens as CEO in June, added, “We have a heightened sense of urgency in working with our customers to enable their move to the modern architectures delivered by our portfolio.”
He said NetApp is “aggressively pivoting” towards a product portfolio consisting mainly of software-defined storage, flash, converged infrastructure and hybrid cloud.
One of NetApp’s major challenges is to upgrade customers from the Data OnTap operating system to clustered Data OnTap. Kurian said shipments of clustered systems grew by around 115 percent last quarter, and clustered OnTap was deployed in 65 percent of the FAS arrays shipped, compared to 25 percent a year ago. But those clustered deployments are predominantly with new customers, and clustered OnTap still accounts for only 15 percent of NetApp’s total installed base. That is up from 11 percent in the previous quarter, but there is still a long way to go.
“The percentage of our installed base that has migrated to clustered OnTap has been small,” Kurian said.
Brocade’s storage sales usually reflect those of the large storage vendors who sell Brocade switches and large directors as part of their SANs. Brocade’s storage revenue of $309 million decreased five percent from last year, which was a little better than expected after EMC, NetApp, HP, Hitachi Data Systems and IBM all reported flat or declining storage product revenue.
Brocade CEO Lloyd Carney said his company took a “prudent view of the storage business” with its forecast for last quarter but said he sees the Fibre Channel market stabilizing and “will remain durable for many years.” He also said IP storage and flash are pushing sales of network storage switching.
Carney said as long as data grows, companies will have to add storage.
“I worry about this space when people stop buying more storage,” he said. “When the overall storage market stops growing, then I start to worry. As long as overall storage market continues to grow … year over year, there’s going to be the need for more either Fibre Channel storage or IP-based storage.”
HP’s storage revenue of $784 million was down two percent from last year, which is better than HP has done in recent years. CEO Meg Whitman called it a “strong quarter” for storage.
As usual, HP’s best storage performer was its 3PAR StoreServ platform, which grew in double digits. HP converged storage (3PAR, StoreOnce, and StoreAll) revenue of $393 million grew eight percent, and now makes up most of the vendor’s storage sales.
“We’ve turned the corner in storage,” Whitman said. “3PAR is fulfilling the promise that we’ve all known 3PAR has had for many, many years. So listen, we’re feeling good about this business. We’ve got good momentum, and I think you’re going to see continued strength here over the next few quarters.”
NetApp, HP and Brocade all said they are receiving a boost from flash in storage arrays.
NetApp said revenue from all-flash arrays increased 140 percent from last year and HP reported 400 percent growth in 3PAR all-flash storage, although those all-flash products were in early stages last year. Brocade VP of storage networking Jack Rondini said around 70 percent to 80 percent of flash systems use Fibre Channel and will help keep Fibre Channel relevant.
“The attachment of flash continues to be one of the most disruptive factors in the data center,” Rondini said.
Scality scored its second major server reseller deal this week when Dell added the object storage vendor to the Blue Thunder program that combines software-defined storage with Dell servers.
Hewlett-Packard has been reselling Scality Ring software since late 2014.
“This is a formality of the transactions we’ve seen in the field,” said Erwan Menard, chief operations officer at Scality. “We have had a number of customers that have built a high-performance NAS that runs on Dell hardware.”
Scality’s Ring software uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. Ring’s core features include replication and erasure coding for data protection, auto-tiering and geographic redundancies inside a cluster. Reference hardware configurations for Scality include using Ring with Dell PowerEdge R730xd rack servers or a combination of the Dell PowerEdge R630 rack server with Dell Storage MD3060e.
Menard said Scality configurations typically are large-scale deployments in the petabyte range.
“Never under 200 terabytes, for sure,” he said. “It’s definitely large scale, very much with an emphasis around archiving. The Blue Thunder SDS platform promises customers a single point of accountability. The Ring software can run on any PowerEdge server and we will offer the best solution for the petabyte deployments that Scality is suited for.”
Having the reference architectures allows customers to deploy Scality Ring on new hardware as soon as it is available.
“We can offer sample configurations based on use cases,” said Travis Vigil, executive director of product management at Dell storage. “You can think of it as a recipe for customers to easily move Scality deployments into their environments.”
NEC Corp. of America has enhanced its HydraStor scale-out backup and archive storage platform with Universal Dedupe Transfer and added an Open Storage Technology (OST) Accelerator for Veritas NetBackup. The new features in HydraStor 4.4 are aimed at cloud service providers.
The company’s Universal Dedupe Transfer capability supports Linux, while Windows and Solaris support are on the road map. Universal Dedupe Transfer pre-processes the data stream on the media server and allows customers to do high-speed full backups from a remote location.
It’s built on NEC’s Universal Express I/O, which was introduced in the previous software upgrade, and includes in-flight compression and encryption. The capabilities can be deployed together to maximize bandwidth and get higher performance.
“This opens the door to extend remote backups rather than putting separate systems in each location,” said Gideon Senderov, NEC’s director of product management and technical marketing. “Just place dedupe transfer software on the source side. Before, if someone wanted remote backup, they would have a full system in remote locations and replicate data. This eliminates the need for that. It makes it possible to do full backups over the wire. Each hybrid node gets up to 40 terabytes an hour.”
NEC’s Universal Dedupe Transfer also does not require application-specific integration.
NEC also introduced an OST Accelerator for NetBackup for automated and speedier backups. This capability offloads the synthetic full backup process from the media server to HydraStor nodes. It also automates the synthesis of the subsequent full backup as each new incremental backup is received. Users can eliminate weekly full backups from the job schedule and maintain an up-to-date backup image with only incremental backups.
“The full backups and incrementals are all read into the media server so it doesn’t have to collect from all the clients,” Senderov said. “So the media server creates new files and writes them to the storage system. With incrementals, it automates the synthetic changes into full backups at the storage array.
“The incrementals are done to the array and the synthetics constantly create a new full. We optimized metadata pointers and eliminated a lot of reads and writes so there is a high GBs performance improvement.”
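The synthetic-full process Senderov describes can be sketched as a merge of the previous full image with each incoming incremental, so an up-to-date full always exists without re-reading every client. The names and data model here are illustrative, not NEC's API.

```python
# Illustrative synthetic full backup: merge the last full image with an
# incremental (changed files plus deletions, marked None) to produce a
# new full image at the storage target, with no weekly full from clients.

def synthesize_full(previous_full: dict, incremental: dict) -> dict:
    """Merge an incremental into the last full image to make a new full."""
    full = dict(previous_full)
    for path, content in incremental.items():
        if content is None:
            full.pop(path, None)    # file deleted since last backup
        else:
            full[path] = content    # file added or changed
    return full

full = {"/etc/hosts": "v1", "/var/app.db": "v1"}   # initial full backup
mon  = {"/var/app.db": "v2"}                       # Monday's incremental
tue  = {"/etc/hosts": None, "/home/a.txt": "v1"}   # Tuesday's incremental

for inc in (mon, tue):
    full = synthesize_full(full, inc)              # always-current full
```

Offloading this merge to the storage nodes, as described above, avoids moving the full image through the media server; with deduplicated storage the merge can reduce further to metadata-pointer updates rather than data copies.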
NEC also upgraded its HydraStor monitoring management tool, adding a command line interface that aggregates deduplication and compression statistics per backup job.
The executives who have been working to turn Veritas Technologies into a standalone company say their plans have not changed with the $8 billion sale to The Carlyle Group.
Symantec disclosed last October that it would spin off its storage business into a separate company, and said in January that the new company would be called Veritas. The plan called for Veritas to become a separate public company in January 2016.
However, Symantec shopped Veritas to interested suitors and found Carlyle’s price was right to buy Veritas and run it as a privately held company.
The Carlyle acquisition is expected to close around the end of this year, around the same time the spinout was planned. Matt Cain, Veritas EVP and chief product officer, said little else will change from the company profile and strategy he laid out for SearchDataBackup.com in June. He added this week that Veritas is continuing with plans to roll out backup and data management products that it disclosed last month, including NetBackup 7.7.
Cain said more products will be launched before the Carlyle acquisition closes.
“There’s no change to our strategy,” Cain said. “Employee count, location, leadership team, product roadmap will be the same. We may be accelerating the pace at which we execute, either through acquisitions, or other inorganic growth.”
Cain will remain in his position after the acquisition, and Brett Shirk will stay on as VP of worldwide sales for Veritas.
Veritas general manager John Gannon, who led the transition period in anticipation of a spinout, will join the Veritas board.
The Carlyle Group said Bill Coleman will be CEO and Bill Krause will become chairman when the deal closes.
Symantec CEO Michael Brown said during a conference call Tuesday that Symantec “considered other options for our Veritas business and ultimately determined that a sale of Veritas to Carlyle is in the best interest of Symantec’s shareholders because it delivers both an attractive and certain value.”
Gannon would not disclose those other options.
“We just got married,” he said. “We can’t talk about who else we dated.”
He did say Carlyle executives “strongly believe in our strategy, our market position and product portfolio and want to be part of it.
“This is a different outcome than the spinout we originally announced but we believe it is tremendous value to Symantec and Veritas because the outcome is certain.”
Symantec actually received $5.5 billion less for Veritas than it paid for the storage software vendor in its $13.5 billion 2005 acquisition.
Count Toshiba among the group of drive makers demonstrating new enterprise PCI Express (PCIe) and SAS SSDs at this week’s Flash Memory Summit in Santa Clara, California.
Toshiba unveiled three families of PCIe SSDs that support the non-volatile memory express (NVMe) protocol – one for notebooks and PCs, another for thin notebooks and tablets, and a third for servers and enterprise storage appliances. The company touted low power consumption with its new enterprise PX04P series, which is due for release in the fourth quarter.
Cameron Brett, director of SSD product marketing for Toshiba’s storage products business unit, claimed the enterprise PX04P drive can deliver more than 650,000 IOPS at 18 watts of power for certain workloads. He said the enterprise NVMe PCIe SSDs are geared for data center, hyperscale and cloud users trying to eke out as much performance as possible while keeping down their power costs.
The PX04P series is Toshiba’s first enterprise NVMe PCIe drive. The product is available in two form factors: a 2.5-inch SSD with an SFF-8639 connector and a half-height, half-length (HHHL) add-in card. The drives support up to four lanes of PCIe 3.0 and use Toshiba’s QSBC error-correction technology. Some versions support self-encryption, according to Brett.
Brett said the PCIe SSDs are hot swappable if the system supports it. He said a “surprise removal, where everything is up and running and you just yank the drive” must be done with a Toshiba device driver.
PX04P customers have a variety of endurance and capacity options. The base model offers capacities of 800 GB, 1.6 TB and 3.2 TB. But, Brett said users could increase the capacity by altering the endurance level. For instance, with a 3.2 TB drive, the user could change the endurance from 10 drive writes per day (DWPD) to one DWPD to boost the capacity to 4 TB, according to Brett.
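The endurance/capacity trade-off can be made concrete with the standard definition of DWPD (drive writes per day): total bytes writable over the warranty (TBW) divided by user capacity times warranty days. This is a back-of-envelope sketch assuming the five-year warranty cited elsewhere in Toshiba's materials, not a Toshiba-published formula.

```python
# Back-of-envelope endurance arithmetic using the standard relationship
#   DWPD = TBW / (user capacity in TB * warranty days)
# Assumes a 5-year warranty period (stated for the related PX04S line).

WARRANTY_DAYS = 5 * 365

def tbw(dwpd: float, capacity_tb: float) -> float:
    """Total terabytes writable over the warranty implied by a DWPD rating."""
    return dwpd * capacity_tb * WARRANTY_DAYS

high_endurance = tbw(10, 3.2)   # 3.2 TB configured at 10 DWPD
read_intensive = tbw(1, 4.0)    # same drive reconfigured: 4.0 TB at 1 DWPD

print(f"{high_endurance:,.0f} TB vs {read_intensive:,.0f} TB written over warranty")
```

Note that total writable bytes drop roughly eightfold while usable capacity rises only 25 percent: exposing more of the raw flash as user capacity shrinks the overprovisioned spare area, which raises write amplification, so rated endurance falls much faster than capacity grows.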
Brett said the PX04P NVMe PCIe drives are based on the same controller chip as the PX04S Series of enterprise 12 Gbps SAS SSDs that Toshiba announced last week. He said the same Japan-based development team worked on the SAS and enterprise PCIe NVMe SSDs. Toshiba listed the following endurance choices:
High Endurance (PX04SHB): Supports 25 DWPD with a 100% random workload. (Toshiba noted: “One full drive write per day means the drive can be written and rewritten to full capacity once a day every day for five years, the stated product warranty period.”)
–Capacity options: 200 GB to 1.6 TB
–Target workloads: Write-intensive virtualized data centers, big data analytics and high-performance computing.
Mid-Endurance (PX04SMB): Supports 10 DWPD.
–Capacity options: Up to 3.2 TB.
–Target workloads: Online transaction processing (OLTP) and e-commerce.
Value-Endurance (PX04SVB): Supports 3 DWPD.
–Capacity options: Up to 3.84 TB
–Target workloads: Read-intensive applications such as media streaming, data warehousing and web serving.
Read-Intensive (PX04SRB): Supports 1 DWPD.
–Capacity options: Up to 3.84 TB.
–Target workloads: Enterprise and Web-based applications such as video on demand and data warehousing.
“With the SAS drives, we’re going to be offering all the different endurance points as separate models, where with the PCIe, we’re going to offer one base model and then you change the overprovisioning to the capacity and the endurance you need,” said Brett.
Toshiba isn’t the only vendor offering customers a choice of varying endurance and capacity levels. For instance, Seagate this week unveiled NVMe PCIe SSDs with models of differing capacities that are either endurance-optimized/mixed-workload or capacity-optimized/read-intensive. Last week, Seagate teamed with Micron on the launch of new 12 Gbps SAS SSDs that offer four endurance options at various capacity points.
Pure Storage today filed for an initial public offering (IPO) in hopes of becoming the second all-flash storage array vendor to go public following Violin Memory.
In its filing with the U.S. Securities and Exchange Commission, Pure claimed more than 1,100 customers since launching its FlashArray platform in 2012 as one of the early all-flash startups on the market. Pure was second in market share behind EMC in all-flash arrays in 2014, according to analyst firms IDC and Gartner.
Pure’s filing put its 2014 revenue at $174 million, more than quadrupling its 2013 revenue. Pure reported $74 million in revenue in the first quarter of this year, tripling its revenue from the first quarter of 2014.
However, Pure continues to sustain heavy losses. It lost $183 million in 2014 following losses of $79 million the previous year and $23 million in 2012. In the first quarter of 2015, Pure lost $49 million. Pure has 1,100 employees.
Despite its rapid revenue growth, Pure remains far behind EMC, which forecasts $1 billion in sales of its XtremIO all-flash array for 2015. While Pure has been by far the leader among the group of startups that entered the market around the same time, it also faces increased competition from all the major storage vendors who have added all-flash systems.
The SEC filing said Pure hopes to generate at least $300 million from its IPO, but will probably seek more. No date for the IPO is set, but it is unlikely to happen before late 2015 or early 2016.
Pure has raised $531 million in eight funding rounds, and its valuation was placed at more than $3 billion during its last round.
Pure executives hope they can go public more successfully than Violin, which is still struggling with losses and its $2.33 share price is less than one-third of its disappointing $9 IPO price from 2013.