Amazon Web Services (AWS) rolled out a new type of storage for infrequently accessed data within the S3 tier that costs 1.25 cents per GB per month to store, plus 1 cent per GB to retrieve.
The cloud has become a repository for rarely accessed unstructured data. Amazon already has its Glacier service for this type of storage. However, it has now introduced a new pricing tier for its high-throughput Amazon S3 Standard class.
“The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput of S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999 percent … of durability. Standard – IA has an availability SLA of 99 percent,” according to the Amazon blog post.
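To see when Standard-IA pays off, compare its monthly cost per GB against regular S3 Standard. A rough sketch, assuming a Standard storage rate of about 3 cents per GB per month (the article states only the Standard-IA rates):

```python
# Rough cost comparison: S3 Standard vs. the new Standard-IA tier.
# The Standard-IA rates come from the article; the 3-cents-per-GB
# Standard rate is an assumption for illustration.

STANDARD_PER_GB = 0.03        # assumed S3 Standard storage, $/GB-month
IA_STORE_PER_GB = 0.0125      # Standard-IA storage, $/GB-month
IA_RETRIEVAL_PER_GB = 0.01    # Standard-IA retrieval, $/GB

def standard_monthly(gb_stored):
    return STANDARD_PER_GB * gb_stored

def ia_monthly(gb_stored, gb_retrieved):
    return IA_STORE_PER_GB * gb_stored + IA_RETRIEVAL_PER_GB * gb_retrieved

# Under these assumptions, Standard-IA is cheaper whenever monthly
# retrieval stays below 1.75x the stored data set:
# 0.0125*s + 0.01*r < 0.03*s  =>  r < 1.75*s
```

The arithmetic underlines why the tier targets infrequently accessed data: a workload that reads its full data set a couple of times a month would erase the savings.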
Earlier this month, Amazon also reduced the price for data stored in Amazon Glacier from $0.01 a GB per month to $0.007 a GB per month.
“This price is for the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions,” Amazon stated in its blog.
The new tier still allows customers to define data lifecycle policies to move data between Amazon S3 classes, such as storing new data in the standard S3 storage class, moving it to Standard-IA a set time after it has been uploaded, and then moving it to the Amazon Glacier service once the data is 60 days old.
“The new Standard-IA class is simply one of several attributes associated with each S3 object,” according to the AWS blog. “Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to the Standard-IA, you can start using Standard-IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce S3 costs immediately, without having to make any changes to your application or affecting its performance.”
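A lifecycle policy of the kind described can be expressed as an ordinary S3 lifecycle configuration. A minimal sketch follows; the 30-day Standard-IA threshold and the rule name are illustrative assumptions, since the article specifies only the 60-day move to Glacier:

```python
# Illustrative S3 lifecycle configuration: objects transition to
# Standard-IA after 30 days (an assumed threshold) and to Glacier
# after 60 days. This dict mirrors the JSON structure the S3
# lifecycle API accepts.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-rule",       # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},     # apply to the whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
```

With boto3, a configuration like this could be passed to the S3 client's `put_bucket_lifecycle_configuration` call; as the AWS blog notes, objects keep their bucket and URLs, so no application code changes.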
IDC Tuesday corrected the purpose-built backup appliance (PBBA) market tracker numbers it issued last week, giving market leader EMC more than $55 million in additional revenue for the second quarter.
The initial report showed steep declines for the market overall and EMC specifically. EMC apparently made a persuasive case that IDC under-reported its true backup appliance revenue, which consists mostly of Data Domain disk libraries. The new numbers show a less bleak picture for appliance sales, although they still declined slightly in the quarter.
The revised numbers give EMC $469.9 million compared to $414 million in the original report. The new total represents a 5.8 percent year-over-year drop for EMC and a 60.1 percent market share. The original numbers represented a 16.9 percent year-over-year drop and 57.1 percent share for EMC.
The revised numbers put total worldwide revenue at $781.1 million for last quarter, a one percent drop from last year instead of the eight percent decline from last week’s report. IDC includes revenue from appliances that require separate backup software along with integrated appliances that bundle software with storage.
Even a modest fall indicates a reversal of recent trends. The PBBA market grew 6.9 percent year-over-year in the first quarter of 2015 and increased 4 percent for the full year in 2014 over 2013.
No. 2 Symantec’s revenue fell 3.7 percent to $104.5 million last quarter, according to IDC. Barracuda Networks made the biggest revenue jump, growing 67.6 percent to $26.8 million while remaining in fifth place with 3.4 percent share. That followed a 64.9 percent year-over-year jump in the first quarter for Barracuda, which has aggressively rolled out backup appliances that support replication between appliances or to the Barracuda Cloud.
No. 3 IBM grew 0.8 percent to $54 million and No. 4 Hewlett-Packard increased 8.8 percent to $36.7 million. All other vendors combined grew 13.4 percent to $89.6 million, for an 11.5 percent market share.
In the press release detailing the revenue report, IDC attributed the revenue drop to “market evolution.”
“Focus continues to shift away from hardware-centric, on-premise PBBA systems to hybrid/gateway systems,” said Liz Conner, IDC research manager for storage systems, in the press release. “The results are greater emphasis on backup and deduplication software, the ability to tier or push data to the cloud, and the increasing commoditization of hardware, all of which require market participants to adjust product portfolios accordingly.”
SanDisk is putting its investments in private storage companies to good use. Two of the companies it has invested in – Nexenta and Tegile Systems – have signed on as OEM partners for SanDisk’s InfiniFlash all-flash storage platform.
Nexenta is a software vendor that is porting its ZFS-based NexentaStor application onto the InfiniFlash platform, which consists of proprietary NAND cards.
Tegile is expanding its all-flash platform with its IntelliFlash HD product, combining its software and controller with the SanDisk InfiniFlash array. Tegile launched its home-built all-flash arrays in June 2014, and also sells hybrid flash systems combining hard disk drives and solid-state drives.
Tegile VP of marketing Rob Commins said because the IntelliFlash system scales far higher than Tegile’s other all-flash arrays, there won’t be much overlap among customers. Tegile’s all-flash minimum capacities range from 12 TB to 48 TB in an array, while the IntelliFlash system starts at 127 TB and scales to more than 10 PB of usable capacity in a 42U rack.
Commins said the average price of Tegile’s all-flash platform is around $100,000 while the IntelliFlash system will average around $250,000 to $300,000.
“We said that’s a nice logical extension of capacity optimized media,” Commins said of the IntelliFlash platform. “We can pull out our disk drives and use IntelliFlash HD as cheap and deep capacity.
“Our premise is there will always be performance optimized media and capacity optimized media. We’ll eventually go to PCIe and NVDIMM to keep going cheaper and deeper on the capacity layer.”
Tegile’s software stack will enable its IntelliFlash system to support block and file storage. Tegile supports Fibre Channel, iSCSI, NFS and SMB protocols.
Tegile expects IntelliFlash to cost around $1.50 per GB of raw capacity, and as little as 50 cents per usable GB after dedupe and compression when it is released in early 2016.
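The quoted prices imply roughly 3:1 data reduction: $1.50 per raw GB becomes about 50 cents per usable GB. The relationship is simple enough to sketch:

```python
# Effective cost per usable GB after dedupe and compression.
# A 3:1 reduction ratio turns $1.50/GB raw into $0.50/GB usable,
# matching the figures Tegile quotes.
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    return raw_cost_per_gb / reduction_ratio

intelliflash_usable = effective_cost_per_gb(1.50, 3.0)
```

Actual reduction ratios vary by workload, so the 50-cent figure should be read as a best case for highly reducible data.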
Commins said the IntelliFlash system should be a good fit for big data analytics and oil/gas exploration companies. “It’s a real nice screamer, but at super high capacity,” he said.
Hard disk drives (HDDs) have reached 8 TB and 10 TB capacities, and flash storage may be all the rage, but tape keeps rolling along.
Hewlett-Packard (HP), IBM and Quantum – the Linear Tape-Open (LTO) Program Technology Provider Companies (TPCs) – announced this week that the seventh generation specifications of the LTO Ultrium format are available for licensing by storage mechanism and media manufacturers.
The new LTO-7 specification lists the maximum compressed capacity at 15 TB per tape cartridge, more than double the 6.25 TB compressed capacity of the prior LTO-6 generation. The specification assumes a compression ratio of 2.5 to 1.
The compressed data transfer rate soars from 400 megabytes per second (MBps) with LTO-6 to 750 MBps with the new LTO-7 technology. That means users potentially could transfer more than 2.7 TB per hour per drive with LTO-7, up from 1.4 TB per hour per drive with LTO-6.
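The per-hour figures follow directly from the per-second rates (using decimal units, where 1 TB = 1,000,000 MB), a quick sanity check:

```python
# Convert a sustained transfer rate in MB/s to TB moved per hour.
# 750 MB/s * 3600 s = 2,700,000 MB = 2.7 TB/hour (LTO-7, compressed);
# 400 MB/s works out to about 1.44 TB/hour (LTO-6).
def tb_per_hour(mb_per_second):
    return mb_per_second * 3600 / 1_000_000

lto7_hourly = tb_per_hour(750)
lto6_hourly = tb_per_hour(400)
```

These are best-case compressed rates; real-world throughput depends on how compressible the data is and on keeping the drive streaming.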
Paving the way for the higher capacity and data transfer rates were technology enhancements such as stronger magnetic properties and a doubling of the read/write heads in advanced servo format to allow the drive to write more data to the same amount of tape within the cartridge.
The new LTO-7 generation carries forward features of prior generations, including partitioning to enhance file control and space management with the Linear Tape File System (LTFS), hardware-based encryption, and write-once, read-many (WORM) functionality.
An LTO-7 Ultrium drive can read data from LTO-7, LTO-6 and LTO-5 cartridges and write data to an LTO-7 or LTO-6 cartridge.
Vendors who have already announced product support for LTO-7 include Quantum and Spectra Logic. Quantum expects LTO-7 technology to be available in its Scalar i6000 and Scalar i500 libraries in December, with other platforms to follow, and the company currently offers an LTO-7 pre-purchase program for interested customers.
The LTO-7 specification’s 15 TB compressed capacity and 750 MBps data transfer rate are slightly lower than the figures the LTO Program projected last year with the release of its extended roadmap. The September 2014 roadmap indicated the LTO-7 generation would provide a compressed capacity of 16 TB per tape cartridge and a compressed data transfer rate of 788 MBps.
The newly updated LTO Ultrium roadmap lists the following maximum compressed capacities and data transfer rates for future generations:
LTO-8: Up to 32 TB and 1,180 MBps
LTO-9: Up to 62.5 TB and 1,770 MBps
LTO-10: Up to 120 TB and 2,750 MBps
The LTO Program notes that the roadmap “is subject to change without notice and represents goals and objectives only.”
The LTO Program plans to provide further insight into the LTO roadmap and technology at the Storage Decisions conference on November 3-4 in New York, at the SC15 supercomputing conference running November 15-20 in Austin, Texas, and at the Government Video Expo on December 1-3 in Washington, D.C.
Market research firm Dell’Oro Group’s mid-year snapshot showed that total storage systems revenue is on track to grow 1% in 2015, driven largely by sales to hyperscale service providers of direct-attached storage (DAS) devices for servers.
The Redwood City, California-based company said total storage systems revenue approached $10 billion in the second quarter – a 1% increase compared to the same time frame in 2014. Revenue for internal storage rose 3%, while sales in the larger external storage segment stayed flat in the quarter, as high-end systems continued to experience a year-to-year decline, according to the recently released Dell’Oro report.
EMC maintained the top spot for overall storage revenue through the first half of the year, and Hewlett-Packard (HP) was No. 2. IBM dropped from third place at the end of 2014 to fifth place in the aftermath of the sale of its x86 server line. Dell and NetApp were third and fourth respectively.
Rapidly growing Huawei snuck ahead of Hitachi into fifth place in total storage systems revenue for the second quarter, but Dell’Oro said Huawei often has a strong second quarter after a seasonally weak first quarter.
Dell’Oro’s numbers varied a bit from those released by IDC earlier this month. IDC put total disk storage sales at $8.8 billion for the second quarter for a 2.1 percent increase over the second quarter of 2014. IDC said external storage sales declined 3.9 percent. In vendor market share, IDC had IBM in fourth place ahead of NetApp. IDC agreed with Dell’Oro that hyperscale storage is growing rapidly, putting it at a 26 percent increase over the second quarter of 2014.
Flash continued to factor into a higher percentage of total capacity for both internal and external storage systems. Dell’Oro estimated that flash drives represented 8% to 10% of the total capacity of hybrid arrays, and nearly 75% of midrange and high-end external storage systems included some flash. Dell’Oro expects the percentage to approach 100 within a few years.
Shipments of Fibre Channel (FC) and Ethernet ports for networked external storage systems remained even at about 50% each, and Dell’Oro expects the breakdown to stay the same for at least the next year.
For FC, the big trend was 16 Gbps taking share from 8 Gbps, as 69% of the switch ports and more than 20% of the adapter ports shipped at the higher data transfer rate in the second quarter. But Dell’Oro said total SAN revenue, including FC switches and adapters, dropped 5% from the first to second quarters to $550 million (the lowest level since Q2 of 2009), and the 1.9 million in port shipments represented a 7% decrease.
Dell’Oro attributed the SAN revenue decline to the resurgence of DAS as well as new storage alternatives, such as scale-out architectures, software-defined storage, hyperconverged infrastructure and cloud storage. Ethernet-based storage has also grown, although it still trails block-based storage in revenue, Dell’Oro said.
With Ethernet storage networking, 40 Gbps made inroads on 10 Gbps, but Dell’Oro expects the 40 Gbps Ethernet pattern to be short-lived as options such as 25 Gbps, 50 Gbps and 100 Gbps emerge in future years.
Despite all the talk about disaster recovery testing, most organizations still don’t do it enough. And recovery point objectives (RPOs) are still way too high to facilitate adequate DR, according to a survey conducted by cloud vendor CloudVelox.
CloudVelox, which offers automated disaster recovery in the cloud, interviewed 343 IT executives responsible for DR in their organizations from nine vertical markets. The surveyed organizations ranged from less than 100 employees to more than 1,000.
The survey found 58 percent of the respondents ran DR tests once a year or less. Another 33 percent tested their DR infrequently or never, while 26 percent tested it quarterly and 16 percent did it monthly.
These results should not be surprising because other recent surveys have had similar results, including one conducted by our parent company TechTarget.
So why aren’t people testing more often? Fifty-six percent of the CloudVelox respondents said their DR testing was infrequent because they didn’t have adequate internal resources. Another 34 percent found the process complex, while 19 percent did not find it to be a priority and 12 percent said it costs too much.
Respondents also said their traditional DR solutions don’t offer adequate RPOs. One-third said their RPO was more than 12 hours, while 46 percent said it was between two hours and 12 hours and only 21 percent claimed it was two hours or less.
“The fact that RTO and RPO in this day and age is still in the two-to-12-hour range shows that disaster recovery is broken,” said Vasu Subbiah, CloudVelox’s vice president of products. “And IT does not have the resources. The average IT spend for disaster recovery is between five to seven percent. If they test less frequently, then mistakes are compounded when they try to recover in the future.”
CloudVelox, formerly called CloudVelocity, offers cloud-based disaster recovery, cloud data migration, and testing and development in the cloud. The July 2015 survey covered verticals including oil and gas, basic materials, industrial, consumer goods and services, healthcare, telecommunications, utilities and finance.
The survey also found variations by vertical. For instance, the oil and gas industry had the highest average RPO, with 70 percent stating it took 12 hours or more, and the lowest test frequency, with 80 percent of those surveyed saying they test once a year or less. Across all the industries included in the survey, 30 percent stated they had an RPO of 12 or more hours.
In healthcare, 69 percent tested once a year or less. Consumer services and healthcare were most willing to embrace cloud-based DR if they could automate network and security controls to the cloud. Sixty-five percent of respondents in consumer services and 64 percent of healthcare would do cloud DR if they had the option of automation.
One in four of the respondents said they experienced failures or delays more than half of the time when they tested their secondary data center. Fifty-three percent said network connectivity was the most common cause of failure when testing their disaster recovery environment. Another 37 percent cited wrong configuration and 33 percent cited missing patches.
Network and security concerns often are singled out as barriers to cloud adoption. CloudVelox’s survey found that 55 percent of respondents would use cloud DR if they could automate their on-premises network and security controls in the cloud, while the other 45 percent would not consider the cloud even with that automation available.
External storage sales are shrinking.
Total worldwide enterprise storage systems factory revenue grew to $8.8 billion during the second quarter of 2015, according to IDC. However, sales are tilting more toward hyperscale data centers and server-based storage. External storage capacity — SAN and NAS — still represents the largest portion of the market, but sales dropped 3.9 percent compared to the second quarter of 2014.
Total disk revenue grew 2.1 percent, and capacity shipments were up 37 percent year over year to 30.3 exabytes during the quarter.
EMC was still the largest storage systems supplier with 29.9 percent of worldwide external storage revenue, while IBM, NetApp and HP were in a statistical tie for second with revenue shares of 11.1 percent, 10.9 percent and 10.5 percent, respectively. Dell and Hitachi were in a statistical tie for fifth, with Dell earning 6.6 percent and Hitachi 6.5 percent of worldwide external storage revenue during Q2.
Most of the top vendors declined in year-over-year revenue, with NetApp, IBM and Dell suffering the largest declines. NetApp dropped 19.6 percent, finishing at $615 million in Q2 this year compared to $765 million in Q2 2014. IBM revenue fell 11 percent, coming in at $631 million compared to $712 million in Q2 2014. Dell slipped 9.9 percent, falling to $313 million compared to $414 million in Q2 2014.
EMC’s revenue declined 4 percent to $1.7 billion compared to $1.764 billion a year ago, and Hitachi slipped 1.9 percent to $366 million. HP was the only one of the top six vendors to increase year over year, and it barely did, rising 0.2 percent to $597 million. The rest of the industry increased 9.3 percent year over year and grabbed 24.6 percent market share. IDC put overall external storage revenue at $5.7 billion for the quarter.
Although all of its revenue comes from external storage, EMC also led the total worldwide enterprise storage systems market, accounting for 19.2 percent of all revenue in 2Q15. HP held the No. 2 position with 16.2 percent of spending during the quarter and had the highest growth, at 8 percent. Dell accounted for 10.1 percent of global spending. Storage systems sold by original design manufacturers (ODMs) directly to hyperscale data center customers accounted for 11.5 percent of global spending during the quarter, and server-based storage grew 10 percent to $2.1 billion.
“Revenue growth was strongest within the group of original design manufacturers that sell directly to hyperscale data centers,” IDC storage research director Eric Sheppard said in the press release. “This portion of the market was up 25.8 percent year over year to $1 billion.”
Formation Data Systems CEO Mark Lewis has strong opinions on the direction that storage needs to take.
He sees the adoption of on-demand, “as-a-service” cloud models as the future, in contrast to the traditional networked storage model that “so many players out there in startup storage land” continue to follow.
Lewis founded Formation Data Systems in September 2012 after failed attempts to create a “ubiquitous data virtualization layer” at EMC with Invista and at Compaq/Hewlett-Packard with VersaStor. Formation raised $24.2 million in Series A funding in December 2013 from Pelion Ventures, Third Point Ventures, Dell Ventures and Mayfield.
The FormationOne Dynamic Storage Platform is data and storage virtualization software that runs on commodity x86 server hardware, whether bare metal or virtual machines (VMs), at multiple service levels, from archive to tier 1. The objective was to create a “consistent data layer” to enable capabilities such as snapshots, replication and deduplication across blocks, files and objects.
Lewis contrasts Formation’s approach to the model followed by EMC, which he said must write management code separately for siloed platforms such as Data Domain, VNX, Isilon, Symmetrix and XtremIO.
In the following interview excerpts, Lewis addressed some of the hottest technologies:
What is your strategy on hyperconvergence?
Lewis: My belief is that, from a market framework, the storage market in aggregate is going to go through two disruptions. At the entry level, we see hyperconverged, and I would characterize that as Nutanix, SimpliVity, et al, which has been going on for four or five years now. We’ll do very, very well at the entry to mid-tier and what I call single application, VDI frameworks because it’s very economical. It will replace a lot of low-end SANs, iSCSI, low-end NFS clusters, stuff like that because at that end, why do you need even storage and servers separated?
We believe at the high end that hyperconverged is not that interesting. When you’re going to need an elastic system that may operate against hundreds of applications, many, many use cases, the idea of converging the ratios of servers, network and storage and having it all in one box actually is economically suboptimal. So we believe that with larger scale systems, you really do want to consolidate as you have around networking, compute and storage in elastic deliverable pools because you might start out with a small amount of storage and a large amount of compute and then have to grow the storage or change the networking. And when you have hundreds and hundreds of potentially scale-out applications, those ratios aren’t the same. We believe that the new unified platform storage – we call it dynamic storage – becomes the disruptor for the mid- to high-end market vs. legacy large SANs and what not.
Which vendors or technologies have you gone up against with pilot customers?
Lewis: We’ve gotten most of our deal flow through people who have tried Ceph and been unable to be successful there or found that it was far too much work . . . Other than that, we have some people that were presently on Amazon or [Amazon Web Services] AWS, and for scaling and other flexibility reasons want to build some or all of their own data centers. These would be startup software-as-a-service companies.
Then again, it’s less competition and more selection of alternatives. Some will say, ‘Well, I’m just not ready to do anything different.’ And so the alternative is to do nothing. We’ll see how it shakes up.
How do you differ from Ceph and vendors that claim to be software-defined, with the ability to run on any server hardware?
Lewis: By any definition that I’ve seen of the word, we are software defined. I believe that’s kind of like saying we’re defined as being a car or something. It’s accurate but not descriptive or helpful. It’s been so overused. I see people rebranding their old arrays saying, ‘We’re software, and we run an Intel processor in there,’ even though it’s unique, and ‘We’re going to be software-defined.’
We’re different in both technology and customer enlightenment and focus. We are trying to build something that will ultimately get categorized as modern enterprise storage – not technology, not open source.
Ceph started its life as open source software. Really cool stuff. Really technical. But really not very usable within enterprise storage . . . We looked at Ceph as the potential framework for Formation, but it didn’t have the enterprise-type technology we felt was needed. We are trying to appeal to people that need enterprise storage features and still would like to have it done within a private cloud. You have to be able to snapshot, to have quality of service guarantees, multi-tenancy, policy-based management, things like that.
Copy data management (CDM) is a relatively new term for many in information technology. Taken literally, its meaning seems self-evident. However, it is really a topical area that vendors address with new products and terminology.
Making copies of data for IT applications is a fundamental task. The how and why have been evolutionary processes. New developments have come from vendors to deliver solutions to manage and automate CDM.
The “why” of making copies starts with the basic function of data protection. Protection is from a disaster (which also includes an orchestrated recovery process) or from corruption or deletion due to application, user, or hardware error. The copy can also be used to create a point-in-time record of information for business or governance reasons.
Another reason for making a copy is to use that data for more than just the primary application. This could be for test/development, analytics, or just because the application owner or administrator just feels safer having another copy. Especially in the case of test/development and analytics, another copy insulates the primary application from problems. Besides corruption and deletion, these problems can include potential performance impacts to the primary application.
Making data copies comes at a cost. The different types of copy mechanisms (the “how” of making copies) include making full copies of data or making snapshot copies where only changed data is represented along with the snapshot tables/indexes. The copies can be local, remote or both. Full copies will take time to create and require additional storage capacity. Snapshot copies can grow in capacity over time. All copies not only eventually consume storage space for usage but also consume time and space in backup processes. Copies of data must also be managed, especially snapshots which tend to proliferate.
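The space tradeoff described above can be put into a simple model: n full copies consume n times the source capacity, while n snapshots consume only the changed blocks plus metadata. The 5 percent change rate and 2 percent metadata overhead below are illustrative assumptions, not figures from the text:

```python
# Simple space model for full copies vs. snapshots.
# Full copies multiply the source capacity; snapshots store only
# changed data plus snapshot tables/indexes (metadata).

def full_copy_space(source_gb, n_copies):
    return source_gb * n_copies

def snapshot_space(source_gb, n_snapshots,
                   change_rate=0.05,     # assumed fraction changed per interval
                   metadata_rate=0.02):  # assumed metadata overhead per snapshot
    changed = source_gb * change_rate * n_snapshots
    metadata = source_gb * metadata_rate * n_snapshots
    return changed + metadata
```

The model also shows why snapshots "grow in capacity over time" and tend to proliferate: each retained snapshot keeps adding changed blocks, so without lifecycle management the savings erode.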
This sets the stage for copy data management, with the goal of orchestrating and automating the management of copies of data and minimizing the impacts on capacity utilization and copy actions. There have been two approaches to address CDM: software to manage copies/processes and a combination of software and hardware to create a “golden” copy to leverage for other needs. The details and merits of each require a more involved evaluation. Managing copies has the potential to improve IT operational processes (including disaster recovery) and minimize costs.
There are a number of considerations, however. CDM crosses responsibility areas from an IT perspective.
- The first area to consider is the backup administrator. The administrator often uses deduplication software or hardware to reduce the size of copies, and no longer sees the proliferation of copies as the problem it once was. Why there are multiple copies being created does not concern backup administrators, and they do not need to be the champion of making changes.
- A storage administrator will manage the storage system and that usually includes managing the snapshot, copy and replication functions. A storage administrator is concerned with the amount of space consumed and will utilize snapshots as a means to reduce space requirements without challenging the application owner/administrator on the need for copies.
- Application owner/administrators sometimes make complete copies of data (databases for example) rather than snapshots to fit their usage. Usually, they will not inform the storage administrator about usage as long as there is enough capacity available. Integration with applications for automation enhances the value of CDM.
Snapshot management with tools outside of storage system element managers is a relatively new task for storage administrators. A useful tool is critically important for effective adoption and to gain confidence for the administrator. The tool manages the lifecycle of a snapshot copy, but the administrator would not think of it in that way.
Consolidating administration of copies – complete or snapshots, local or remote, including cloud – to a single tool has potentially high value. The more difficult part is making changes in the operational and personnel responsibilities. Those who gain from consolidation of these functions may also influence budgeting for the solution.
CDM represents a new tool and embracing a new tool is sometimes difficult. It does not help that there have been inconsistent descriptions from vendors in their effort to market their solution as unique.
Looking forward, CDM could become part of the element manager for storage systems as an integrated function that works for that specific system. This method is probably too limiting in achieving overall value. The process needs to be applied across IT, inclusive of copies at remote or cloud locations. The best way is likely to integrate CDM with overall orchestration software. This will take a long time given the change required for IT. Meanwhile, we will continue to see individual products that provide value for copy data management.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
EMC says its XtremIO all-flash array has cracked the $1 billion cumulative bookings mark in 588 days – or roughly six quarters – while remaining on track to do $1 billion of business in calendar 2015.
Perhaps because EMC’s traditional storage systems aren’t exactly going like gangbusters these days, it celebrated its billion-dollar baby with a blog post and provided updated 2014 market share numbers from Gartner. The Gartner numbers also stick a pin in Pure Storage’s planned IPO announcement. Pure was demoted from second to third in market share after its S-1 revenue figures came in below what it had led Gartner to believe they were.
EMC said it took VMware five years, and Isilon scale-out NAS more than 11 years, to hit $1 billion in sales. XtremIO is coming off a $300 million quarter that included more than 40 orders of $1 million or more, with 40 percent of the customers being repeat buyers.
Gartner puts EMC’s 2014 revenue at $443.6 million, giving it 34 percent of the $1.29 billion all-flash array market. IBM was second with $233.3 million and 18 percent share with Pure third at $149.4 million and 11.5 percent. Pure was listed at second with $276.3 million when Gartner released its numbers earlier this year, but Gartner edited its charts to reflect the reported revenue from Pure’s IPO filing.
EMC hails XtremIO’s scale-out performance, inline always-on data services, copy data management and application integration as reasons for its success.
Because you don’t hear copy data management cited often as an all-flash selling point, we asked senior director of XtremIO marketing Andy Fenselau to explain. He said XtremIO’s copy data management comes from its use of in-memory snapshots and metadata.
“For us, a copy is not a traditional back-end storage copy, it’s a simple in-memory copy,” he said. “It’s instant, takes no additional space until new blocks are written, and only unique compressed blocks are written. And it’s full performance. Customers present them to application teams for self-service.”
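Fenselau’s description maps onto a redirect-on-write design, in which a snapshot duplicates only block metadata. The following toy sketch (my illustration, not EMC’s implementation) shows why such a copy is instant and consumes no space until new blocks are written:

```python
# Toy illustration of metadata-based snapshots: a volume is a map from
# logical block address (LBA) to a key in a shared block store. Taking
# a snapshot copies only the map, not the data, so it is instant and
# consumes no extra capacity until new blocks are written.

class BlockStore:
    def __init__(self):
        self.blocks = {}     # block key -> data
        self.next_key = 0

    def put(self, data):
        key = self.next_key  # a real array would hash/dedupe here
        self.next_key += 1
        self.blocks[key] = data
        return key

class Volume:
    def __init__(self, store, block_map=None):
        self.store = store
        self.block_map = dict(block_map or {})  # LBA -> block key

    def write(self, lba, data):
        # Redirect-on-write: new data lands in a new block; the old
        # block stays put for any snapshot that still references it.
        self.block_map[lba] = self.store.put(data)

    def read(self, lba):
        return self.store.blocks[self.block_map[lba]]

    def snapshot(self):
        # The "in-memory copy": duplicate the metadata map only.
        return Volume(self.store, self.block_map)
```

Reading from the snapshot after the source volume is overwritten returns the original data, because the snapshot’s map still points at the untouched old block.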
Fenselau said about 51 percent of XtremIO’s revenue comes from customers using it for databases, business applications and analytics. Server virtualization, private cloud and VDI are also common use cases.
“We were expecting a lot of enterprise adoption, but we’re also seeing a wonderful amount of midmarket adoption,” he said.