Storage Soup

September 22, 2016  6:39 AM

Kaminario plans to add Clarity for K2 flash arrays

Dave Raffo

Kaminario is getting into cloud-based storage analytics with Clarity, which helps manage and monitor its K2 all-flash arrays.

Kaminario Clarity will be available to any K2 customer. Kaminario targets the first quarter of 2017 for delivering Clarity.

Kaminario Clarity features include a quality of service (QoS) capability that lets customers set service levels for specific types of workloads. For instance, the K2 array could prioritize small reads and writes over large ones in a transactional database to match usage patterns.
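A size-aware service-level policy like the one described can be sketched as a priority scheduler that drains small transactional I/Os before large ones. This is a hypothetical illustration only; the class names, threshold and API here are invented, not Kaminario's implementation.

```python
import heapq

# Assumption: operations at or under 16 KB count as "small" transactional I/O.
SMALL_IO_BYTES = 16 * 1024

class IoScheduler:
    """Toy scheduler: small I/Os get priority class 0, large ones class 1."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tiebreaker preserves arrival order within a class

    def submit(self, op_name, size_bytes):
        priority = 0 if size_bytes <= SMALL_IO_BYTES else 1
        heapq.heappush(self._queue, (priority, self._seq, op_name))
        self._seq += 1

    def next_op(self):
        return heapq.heappop(self._queue)[2]

sched = IoScheduler()
sched.submit("large-scan", 1024 * 1024)   # 1 MB analytic read
sched.submit("oltp-write", 4 * 1024)      # 4 KB transactional write
sched.submit("oltp-read", 8 * 1024)       # 8 KB transactional read
print(sched.next_op())  # prints "oltp-write": small ops drain first
```

Even though the large read arrived first, both small transactional operations are serviced ahead of it, which is the usage pattern the article describes for a transactional database.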

Clarity will also include a new portal customers can use to see insights into K2 performance, as well as suggestions to improve performance and capacity.

Josh Epstein, Kaminario VP of global marketing, said a future step will be to automate the service levels for applications. Kaminario intends to add Clarity agents that will integrate into specific applications, such as Oracle and Microsoft SQL databases. The agents will provide more granular metrics for those applications.

“We’re gathering statistics about the K2 and the storage ecosystem – databases, servers, networking – and providing analytics, trends and insights from across our installed base,” he said. “The analytics tell customers how to configure and optimize their storage infrastructure.”

Kaminario Clarity continues the trend of vendors providing tools that collect data from storage arrays, upload it to clouds and provide analytics reports for customers. Other cloud-based analytics tools include Nimble Storage InfoSight, Pure Storage Pure1 Cloud Global Insight, EMC Unity CloudIQ, HPE StoreFront Remote, IBM Spectrum Control Storage Insights and Tintri Analytics. These tools are gaining popularity with newer array models, specifically those incorporating flash.

September 21, 2016  6:50 AM

New SNIA Swordfish spec targets hyperscale, cloud environments

Carol Sliwa

The Storage Networking Industry Association (SNIA) released Swordfish, a new specification that could ease the management of storage equipment and data services in converged, hyper-converged, hyperscale and cloud environments.

The SNIA Storage Management Initiative’s Swordfish 1.0 specification aims to simplify the provisioning, monitoring and management of block, file and object storage.

For instance, the Swordfish application programming interface (API) can associate different classes of service with storage gear of varying performance levels. An IT administrator would need only to specify the class of service to allocate storage to servers and virtual machines (VMs), rather than having to specify details on the most suitable storage array.

So far, the SNIA Swordfish specification offers extensive functionality only for block and file storage. Capabilities include provisioning with class of service, as well as replication and capacity and health metrics. Object storage support is on the Swordfish roadmap.

SNIA’s Swordfish is an extension of the server-focused Redfish API protocol and schema released about a year ago by the Distributed Management Task Force (DMTF). Swordfish uses the same RESTful interface over HTTPS, the JavaScript Object Notation (JSON) format and the Open Data Protocol (OData).
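The RESTful JSON interface described above lends itself to simple, browser-inspectable payloads. Below is an illustrative sketch of what a class-of-service storage allocation request body might look like; the property names are simplified for illustration and are not taken verbatim from the Swordfish 1.0 schema, though the `@odata.id` reference style comes from Redfish/OData conventions.

```python
import json

# Hypothetical Swordfish-style request: allocate a volume by naming a class
# of service ("Gold") instead of specifying array-level details.
request_body = {
    "Name": "db-volume-01",
    "CapacityBytes": 500 * 1024**3,  # 500 GiB
    "ClassOfService": {
        "@odata.id": "/redfish/v1/StorageServices/1/ClassesOfService/Gold"
    },
}

payload = json.dumps(request_body, indent=2)
print(payload)
```

The point of the model is visible even in this toy payload: the administrator states an intent (a named class of service) and leaves the placement decision to the service.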

“One of the reasons SNIA’s so interested in doing Swordfish as an extension of Redfish is that this is an industry play to wind up with a unified approach for server, storage and fabric management,” said Don Deel, chair of SNIA’s SMI governing board and a senior standards technologist at NetApp.

Swordfish can work across a variety of storage network fabrics including Fibre Channel, Ethernet, SAS and PCI Express (PCIe), according to Deel.

Swordfish will eventually replace SNIA’s Storage Management Initiative Specification (SMI-S) and possibly overcome SMI-S limitations. Deel said SMI-S is an “equipment-oriented” standard that exposes what the storage gear can do. By contrast, Swordfish is a “customer-centric interface” that focuses on use cases for “what IT administrators need to do with storage in a data center on a day-to-day basis,” Deel said.

“SMI-S has a ton of functionality but it does not scale well. That is a key for plugging and playing into all of these new models,” said Richelle Ahlvers, chair of SNIA’s SSM Technical Work Group and principal storage management architect at Broadcom.

Ahlvers said the tech industry has been shifting to REST-based interfaces. SNIA partners wanted to see standards updated with a more modern interface that could play in all environments, including the emerging hyperscale and cloud scenarios. They also wanted storage management APIs that are simpler to implement and consume and accessible via a standard browser, she said.

“SMI-S and other standards, even on the server side, have been very complicated. It’s a high learning curve,” Ahlvers said.

SNIA’s Scalable Storage Management (SSM) Technical Work Group formed last October to scope out the Swordfish project and drew up a charter in December. Broadcom, Dell, EMC, Hewlett Packard Enterprise (HPE), Inova, Intel, Microsoft, NetApp, Nimble Storage, and VMware are among vendors that played key roles in developing Swordfish.

The SNIA Swordfish specification is publicly available for implementation. Ahlvers said anyone with a Redfish implementation could tack on Swordfish within a few months, but those starting from scratch would need to do more work. She expects to see products and early implementations start to show up in the middle of next year.

“The key here is really going to be the client drivers,” said Ahlvers, noting the work of Intel, Microsoft and VMware. “Between those three, that’s going to be helping to pull the vendors to add support for Swordfish.”

SNIA Swordfish team members and industry experts are presenting details on the new specification at this week’s Storage Developer Conference in Santa Clara, California.

September 19, 2016  4:45 PM

Nutanix IPO looks like a go

Dave Raffo

Nutanix took the last step before completing its initial public offering today when it set the target price range for its offering.

Nutanix filed an S-1 registration form with the Securities and Exchange Commission detailing plans to sell 14 million shares of Class A stock for between $11 and $13 per share. The hyper-converged market leader seeks to raise $209 million through the IPO. A Nutanix IPO price of $13 would make the company worth $1.8 billion. That falls below its $2 billion valuation at the time of its last funding round in 2015.

Nutanix first filed to go public last December, but the Nutanix IPO was stalled by a slow IPO market. There have been only a handful of tech IPOs in 2016.

One of Nutanix’s founders and its original CTO, Mohit Aron, said Nutanix executives and its investors likely were scared off by the poor IPO market. He said the current IPO market is less forgiving of a company still losing money despite strong revenue growth. Aron holds 10.7 million shares of Nutanix common stock but no longer works for the vendor.

“Investors used to look at growth in past years,” Aron said. “This year, investor sentiment has turned and investors have started looking for profitability. Maybe Nutanix thought it would show a reduction in losses — which they’ve been showing — so investors would be more lenient towards looking at them.”

Aron said he expects Nutanix will do well in the long term. “Eventually, it’s about a technology that is ground-breaking, solves a real problem and customers are adopting it,” he said. “The technology makes sense. I see hyper-convergence getting adopted every day for primary and secondary storage. Markets go through temporary ups and downs. I think companies will do well when they have strong fundamentals.”

Aron calls Nutanix’s hyper-converged technology “my baby,” although he left in 2013 to start secondary data hyper-converged vendor Cohesity.

Nutanix investors will need patience if they want to see profit. In an SEC filing last week, Nutanix declared “we will continue to incur net losses for the foreseeable future.”

The company has lost a total of $442 million during its history, including losses of $84 million, $126 million and $169 million in the last three fiscal years. Nutanix lost $50 million last quarter after losing $49 million the previous quarter.

Those losses came despite impressive revenue growth. Revenue increased 84% year-over-year to $445 million during the last fiscal year, which ended July 31. For the quarter that ended July 31, Nutanix reported $140 million in revenue – a 22% increase over the same quarter last year – and it recorded $255 million of revenue in the first two quarters of this calendar year. Most of Nutanix’s expenses come from sales and marketing — $288 million of its $439 million in expenses last fiscal year and $88 million of its $133 million in expenses last quarter.

The Nutanix IPO filing indicated no plans to decrease that spending. Nutanix claimed: “We intend to grow our base of 3,768 end-customers, which we believe represents a small portion of our potential end-customer base, by increasing our investment in sales and marketing, leveraging our network of channel partners and furthering our international expansion. One area of specific focus will be on expanding our position within the Global 2000, where we currently have approximately 310 end-customers.”

Aron said Nutanix needs to pursue a growth strategy if it is to hold off competitors such as Dell EMC, Hewlett Packard Enterprise and Cisco. That includes research and development as well as sales and marketing. Nutanix is expanding its technology to become a platform of choice for companies looking to build internal enterprise clouds.

“I think we all know no company can just rest on its laurels and milk a technology for a while,” Aron said. “Others catch up eventually. You have to keep innovating.

“I think if they want to become profitable, they can do it next year. If a company wants to, it can put a complete brake on growth, but what’s the point of profitability if you’re not growing? So there’s a healthy balance a company has to juggle.”

September 16, 2016  1:46 PM

Veritas: Dell EMC’s no marriage made in heaven

Sonia Lelii
Data protection, Veritas

LAS VEGAS — In the pre-Symantec days when Veritas was an independent storage software company, its executives frequently bashed their primary competitor EMC. That rhetoric cooled after Symantec bought Veritas, and the chief EMC bashers (including current Dell EMC CMO Jeremy Burton) left Symantec.

With Veritas Technologies on its own again, its executives have resumed public attacks on EMC along with its new owner Dell. Veritas Vision in Las Vegas this week was filled with snarky pokes at both principals in the newly formed Dell EMC.

During a keynote given by Veritas CMO Lynn Lucas at the first Veritas Vision user conference in more than a decade, the company flashed a question for customers to ponder: “What is worse than a lifetime of hardware with EMC? An eternity in Dell.”

Veritas also took out an advertisement in the Wall Street Journal, stating “There is a special place in Dell for hardware.”

Mike Palmer, Veritas’ senior vice president and general manager of solutions for data insight and orchestration, said during his keynote address Tuesday that the historical agenda of companies like EMC and Dell was to sell more hardware in a world that is becoming more focused on software-defined storage. Even when EMC acquired the successful virtualization company VMware, it “kept you in a walled garden of the VMware ecosystem.”

Palmer even compared EMC to convicted Los Angeles drug dealer Rick “Freeway” Ross.

“This is a guy that knew more about product lock-in than anyone. … He bought houses to store his cash,” Palmer said. “Rick Ross went to jail. Today, he sells T-shirts. EMC went to Hell, I mean to Dell.”

Palmer also took a swipe at Dell EMC’s Data Domain deduplication storage hardware, saying that for customers it’s like a 30-year-old kid living in the basement.

“He’s never moving out,” he said, while an image of a bloated man sitting on a couch appeared on the screen.

In focusing on the Dell and EMC merger, Veritas tried to revive the “no hardware agenda” that was its primary message before Symantec acquired it 10 years ago for $13.5 billion. In August 2015, Symantec announced the sale of its Veritas information management business to The Carlyle Group. Veritas and Symantec achieved operational separation last Oct. 1, and the sale closed Jan. 30 when Veritas became a privately held company.

“Veritas is making the assumption that EMC separately or together with Dell cannot get away from their (hardware) past,” said Arun Taneja, founder of the Taneja Group storage consulting firm. “Veritas recognizes that they were considered a has-been company. They have been missing in action (under the Symantec ownership). So they needed to do something that was in-your-face and edgy. It’s their way of saying, ‘I’m back and you are going to pay attention to me.’

“The way to do that was to poke fun at the 800-pound gorilla. It was a gutsy choice.”

Taneja said Dell’s and EMC’s revenues still rely primarily on hardware. VMware, which gave EMC a beachhead in the virtualization software space, is also owned by Dell.

“Now with the merger, (Dell and EMC) have so many issues that this is the time for a pure software company, which is what Veritas has been from the beginning,” Taneja said. “Now, if they can pull it off, they will be back in the game. If they can’t, it will be a sad ending to the company.”

September 13, 2016  7:18 AM

IDrive cloud introduces ‘private’ property

Paul Crocetti

Backup vendor IDrive has added the private cloud to its repertoire, allowing customers to keep backup data on premises as well as in the public cloud.

The IDrive Private Cloud appliance features 6 TB of on-premises backup that customers can access and manage from anywhere. The product also includes 6 TB of IDrive cloud backup space. The IDrive Private Cloud software is the same as the vendor’s public cloud software.

“The best part of the whole thing is it’s the exact same IDrive Client as the public cloud,” IDrive CEO Raghu Kulkarni said. “There’s no learning curve [for current IDrive public cloud users].”

Kulkarni said IDrive users and partners often requested a private cloud option. Businesses also wanted to store data locally and be able to access it through the cloud.

The IDrive software backs up data from multiple computers locally to the IDrive server. Data is encrypted in transit and in storage, with AES 256-bit encryption and an optional private key. The software protects servers and individual files.

Backing up data locally makes for faster restores than restoring from a public cloud. Users can access data from anywhere online, according to the vendor. The product scales to hundreds of terabytes and can handle an unlimited number of users. Its dashboard can manage the backup of hundreds of users, and monitor reporting and data usage.

The client for the IDrive cloud can restore 10 previous versions of backed up files from an account. It also combines backups from multiple devices — PCs, Macs and mobile — into a single account.

The IDrive Private Cloud brings enterprise-level functionality to small businesses and some medium-sized companies, Kulkarni said.

IDrive claims 3 million users of its public cloud. Small businesses make up 30-40% of that figure, Kulkarni said.

Since the software is the same, any improvement IDrive makes to its public cloud option will be available almost immediately for the private product as well.

Pricing for IDrive Private Cloud starts at an introductory rate of $1,000 for the first year for the 6 TB appliance with 6 TB of cloud backup space. Regular pricing is $2,000 per year. More space is available upon request.

September 10, 2016  12:21 PM

Barracuda storage swims with hardware, software upgrades

Garry Kranz

Customers of Barracuda Networks just received some welcome news. In tandem with an upcoming software release, the backup specialist gave a 25% capacity boost to one of its midrange disk-based Barracuda storage appliances and lowered list prices on two high-end models.

Barracuda Backup software version 6.3 is slated for general availability later this year. The new agent adds multi-streaming replication and tools to export data to virtual tape storage in the Amazon Web Services (AWS) cloud.

The software enhancements will be available to customers that purchase Barracuda storage directly from the vendor, as well as to those that buy backup storage products through its Intronis managed service provider (MSP) business.

On the hardware side, new Barracuda Backup 990 appliances have 48 TB of usable storage, up from 36 TB. Model pricing remains the same at $49,999. Customers with an active Instant Replacement subscription will receive the upgraded Barracuda 990 at no charge. Instant Replacement entitles eligible Barracuda customers to upgrade their hardware every four years.

Prices are reduced on Barracuda Backup 995 and Barracuda Backup 1090 devices. Barracuda’s 995 model starts at $67,000 for 72 TB of usable storage, down from $90,000. The Backup 1090 device with 112 TB of usable Barracuda storage starts at $105,000, reduced from $135,000.

Barracuda Backup software version 6.3 is in early availability. The existing software includes a feature called LiveBoot, which allows users to run image-based backups of virtual servers for instant recovery. The addition of multi-streaming is designed to improve backup and restore times by allowing a backup server to process several files simultaneously. Barracuda also added a replication-queuing system to address storage bottlenecks in high-transaction environments.

Barracuda Cloud-to-Cloud Backup is a software-as-a-service offering launched in 2015 for Microsoft Office 365. Barracuda said v6.3 reduces the time needed to complete incremental backups running a hosted online version of Microsoft Windows applications.

Barracuda’s storage cloud allows subscribers to place data in an offsite vault for up to seven years. To support longer retention times, the enhanced Barracuda software adds the AWS Storage Gateway-VTL for long-term retention in Amazon Simple Storage Service or Amazon Glacier.

Barracuda is best known for its network security products, but enterprise storage accounts for a growing part of its business, with offerings tailored for backup and data protection. The Barracuda storage portfolio expanded last September with its $65 million acquisition of Intronis Inc., which designs backup platforms used specifically by MSPs.

September 9, 2016  7:58 AM

Violin Memory’s latest quarter to forget

Dave Raffo
Violin Memory

With time and money running out, Violin Memory next week will take another shot at launching a successful all-flash array. It might be its last shot – Violin doesn’t have enough money to last another year at its current losses.

Violin on Wednesday will launch its next family of Flash Storage Platform (FSP) arrays as it tries to stay relevant in the all-flash market it helped create. Full details of the product won’t be available until launch day, but CEO Kevin DeNuccio teased it Thursday night during Violin’s earnings call. The new FSP is a key piece of Violin’s survival strategy, along with reduced spending and a raising of capital.

DeNuccio said the new FSPs will double the IOPS (input/output operations per second) performance of its current FSP systems and cut latency by five times. Violin will allow customers to run its Concerto operating system in a public cloud, enabling customers to use the cloud for backup, disaster recovery and data that does not require flash performance.

Violin will also add encryption software across all of its arrays in the latest Concerto update.

DeNuccio said the cloud enhancements “will position Violin’s Flash Storage Platform as the best product line for building private, hybrid, and public clouds. While enterprises have been migrating data to the cloud over the last several years, the coordination, management and retrieval challenges of data have been very difficult. The Concerto in the cloud solution will address this customer pain point.”

The new system follows Violin’s 2015 launch of its FSP 7300 and 7700 arrays, which added the data management and protection features missing from its earlier flash systems. Those FSP arrays never caught on with customers. Violin reported revenue of $7.5 million last quarter, down from $9.7 million the previous quarter and $15.3 million in the same quarter last year.

Product revenue of $2 million last quarter came at the cost of $7.5 million that Violin spent on sales and marketing.

FSP sales have been a big disappointment, failing to come close to Violin’s projected revenue growth of 25% to 35%.

To put Violin’s revenue in perspective, all-flash competitor Pure Storage raked in $163 million in revenue last quarter. Nimble Storage sold 133 all-flash arrays in its first full quarter on the market. Among legacy vendors, Dell EMC’s XtremIO all-flash array will hit close to $2 billion in revenue this year and NetApp sold $194 million in all-flash arrays last quarter.

So while the all-flash market is booming, Violin Memory is going bust.

“This quarter’s performance is obviously frustrating and disappointing,” DeNuccio said. “We still have many challenges to return to growth and complete our turnaround.”

Violin lost $20.6 million last quarter, burned through $13 million and has $36 million in cash remaining. Violin executives said they are cutting expenses and looking for outside financing to stay afloat.

“We believe our existing cash balance is insufficient to operate the business for the next 12 months even as we continue to restructure our operations and reduce spending even further,” CFO Cory Sindelar said.

To cut expenses, Violin is outsourcing much of its development work to GlobalLogic. Sindelar estimated the outsourcing will save Violin $5 million a year. Violin is also putting COO Ebrahim Abbasi in charge of sales and marketing to take “a layer of management out of our senior ranks,” according to DeNuccio. DeNuccio said his goal is to reduce quarterly expenses to under $11.5 million by the start of 2017, which means Violin would require $70 million to $80 million in annual revenue to break even.
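The break-even figure can be sanity-checked with back-of-envelope arithmetic. The gross margin below is an assumption on my part (it is not disclosed in the article); roughly 60% is a typical figure for storage systems vendors, and it lands the result inside the stated $70 million to $80 million range.

```python
# Annualize the quarterly expense target, then divide by an assumed gross
# margin to get the revenue needed to cover operating expenses.
quarterly_opex = 11.5e6                      # DeNuccio's sub-$11.5M target
annual_opex = quarterly_opex * 4             # $46M per year
assumed_gross_margin = 0.60                  # assumption, not from the article
breakeven_revenue = annual_opex / assumed_gross_margin
print(round(breakeven_revenue / 1e6, 1))     # prints 76.7, i.e. ~$76.7M
```

A margin anywhere between roughly 58% and 66% reproduces the article's $70 million to $80 million span, so the quoted range is consistent with typical industry economics.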

Violin executives have not said how the changes will affect headcount. Violin Memory has already gone from 318 employees at the start of the year to 235 at the end of last quarter, and now stands at about 200.

“We learned a lot over the last couple of years,” DeNuccio said. “We are learning from our mistakes and making the necessary adjustments.”

September 7, 2016  11:20 PM

Diablo flash DIMM partnership woos server makers

Garry Kranz

Diablo Technologies and American Megatrends Inc. (AMI) have partnered on memory channel flash storage for servers for more than a year. The first fruits of the collaboration appeared this week when AMI started shipping server systems that integrate Diablo’s flash-based Memory1 storage modules.

AMI integrated Memory1 in its proprietary Aptio V unified extensible firmware interface (UEFI) specification. AMI sells the UEFI BIOS to original equipment makers to build next-generation Intel Xeon servers capable of using idle memory capacity as server-side flash storage.

Memory1 is a server-side Diablo flash storage product that slides into a standard server memory slot. The Aptio V UEFI modular architecture is configurable for x86-based and non-x86 servers, as well as Linux and Windows environments. The UEFI spec was developed as an eventual replacement for the Basic Input Output System (BIOS).

Diablo’s dual inline memory module (DIMM) Memory1 technology allows flash storage to be placed closer to the server motherboard. Memory1 incorporates NAND flash and the DDR4 memory specification in a DIMM card.

Each Diablo flash DIMM provides 128 GB of flash in a standard server, without requiring changes to applications, hardware or the operating system. A dual-socket server can accept up to 16 Memory1 devices for 2 TB of persistent memory. Diablo flash DIMMs are expected to be available in 256 GB capacities in 2017.
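The capacity figure above follows directly from the module count and per-module size:

```python
# 16 Memory1 DIMMs at 128 GB each in a dual-socket server.
modules_per_server = 16
gb_per_module = 128
total_tb = modules_per_server * gb_per_module / 1024  # GB -> TB (binary)
print(total_tb)  # prints 2.0
```

The planned 256 GB modules would double that to 4 TB per dual-socket server under the same slot count.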

Diablo executives expressed confidence that the AMI partnership will lead OEMs to build and market server hardware based on Memory1 flash. If so, it would provide a big boost to Diablo, whose Memory1 flash DIMM modules have replaced ULLtraDIMM memory-channel storage as its flagship product.

“Collaborating with AMI allows for fast and seamless integration of Memory1 into OEM servers. It saves us months of development work with each OEM,” said Kevin Wagner, Diablo Technologies’ vice president of marketing.

Inspur Systems is the lone server vendor to announce a branded line of Memory1-based servers. Wagner said additional server OEM deals will be announced “shortly.”

September 6, 2016  10:29 AM

Storage operational analytics tools add value

Randy Kerns

New features and functionality are constantly being added to storage systems. As operations have come to depend on functions such as remote replication and snapshots embedded in the storage systems, the features have become competitive requirements for products.

We’ve recently seen Storage Resource Management (SRM) software functions move into storage systems. These software functions, generally called operational analytics, work as Software as a Service (SaaS) offerings. They collect data from storage arrays, upload it to public cloud services or storage vendor sites, and provide analytics reports to users.

Access and analytics through SaaS makes information more broadly available – access from anywhere is permitted with the correct credentials. Using service providers or vendor sites as the collection point allows multiple storage systems, possibly from different geographic locations, to be monitored and their information aggregated. Vendors also use the information for monitoring system health, performing proactive maintenance and controlling updates to system software. They can accumulate data across their entire product base to detect anomalies and other event commonalities, then research and develop remedies before problems affect customers.

Operational analytics provided by storage systems and accessed using SaaS is a valuable development for managing storage. The tools can be understood and utilized without the expertise required when using more comprehensive tools such as SRM software.

The operational analytics functions most commonly introduced into storage systems with SaaS analysis include:

• Capacity planning – reporting that shows past consumption and a prediction of expected needed additions. Capacity reporting may be on a per-system basis but could also be grouped or aggregated across multiple systems.
• Health status – notifications, log events and drill-downs on the systems for monitoring.
• Performance – historical reporting of performance data, isolation to help identify performance issues, and projections of future performance needs.
• Dashboards – customizable by LUN, file system, group and so on, for use by IT generalists to provide operational information.
• Vendor support – notifications, log analysis and machine status information for support actions.

These programs collect information either by having the storage system send data directly to the service provider or vendor site over the network, or by adding software on a server or in a virtual machine that pulls data from the storage system and sends it to the collection site. Most vendors offer basic operational analytics information and processing as part of the support contract for the storage system. Some vendors have an advanced offering that requires an additional license, usually priced by capacity. Sending the data over external network links is a problem for some operations, but operational analytics does enable improved storage management.
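The "pull" collection model just described can be sketched as a small collector that polls a storage system for metrics and packages them for upload to the vendor site. This is a minimal, hypothetical sketch: the metric names are invented, and the fetch step is stubbed out so the flow is runnable without a real array or network endpoint.

```python
import json
import time

def fetch_array_metrics(array_id):
    # Stub standing in for an API call to the storage system; a real
    # collector would query the array's management interface here.
    return {"capacity_used_pct": 71.4, "read_iops": 52000, "health": "ok"}

def build_upload(array_id, metrics):
    # Package the sample with identity and a timestamp, ready to send
    # to the service provider/vendor collection site.
    return json.dumps({
        "array_id": array_id,
        "collected_at": int(time.time()),
        "metrics": metrics,
    })

record = build_upload("array-001", fetch_array_metrics("array-001"))
print(json.loads(record)["metrics"]["health"])  # prints "ok"
```

In the alternative "push" model, the same record-building logic would run on the array itself, which sends directly to the collection site without intermediate software on a customer server.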

Here are a few of the vendors’ operational analytics offerings, listed to give an idea of what is available; this is not intended to be a complete list. Check with the vendor on availability for products not listed here.

EMC Unity CloudIQ is the operational analytics feature that requires ESRS monitoring to be enabled. The basic level provides reporting and management with an option to report into VMware vRealize. An advanced version adds more analysis and requires local software to collect additional information.
HPE StoreFront Remote for 3PAR, StoreOnce and StoreVirtual is the operational analytics solution included with the systems without additional software required onsite. Reporting into VMware requires additional software.
IBM Spectrum Control Storage Insights requires software to collect and report information for capacity and performance reporting along with a real-time dashboard and future trending. Both file and block storage from IBM are supported. Analytics for optimization are included with recommendations for data movement to different tiers. Software is licensed by capacity.
Nimble InfoSight predicts performance (IOPS), capacity, and bandwidth needs. “What if” modeling is included for effects of potential changes. Problem analysis from sensor data, dashboards, and device management is also included.
Pure Storage Pure1 Cloud Global Insight includes management and operational analytics. Capacity and performance predictions for systems, in addition to monitoring, are basic functions. Management includes controlling upgrades and log analysis by support.
Tintri Analytics provides capacity, performance and throughput analytics and predictions at the VM level. Modeling of “what if” changes is included with the VM view. Isolation to VM granularity allows for an application-level view with the analytics.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm.)

September 1, 2016  2:19 PM

Datrium storage update boosts server flash cache capacity

Carol Sliwa

Datrium this week released a software update that enables a doubling of the per-server capacity of the flash cache used with its DVX storage system for VMware virtual machines.

Datrium’s Hyperdriver 1.1 software enables customers to boost the maximum raw capacity of the flash cache from 8 TB to 16 TB per host. Datrium storage executives claimed the “effective” flash capacity would range from 32 TB to 100 TB after data deduplication and compression.

Data reduction tends to range from 2x to 4x with databases and 5x to 10x with virtual desktop infrastructure (VDI) with the DVX system, according to Craig Nunes, vice president of marketing at Datrium. He claimed that addressing 100 TB of flash at about $100 per TB on the server would provide an “orders of magnitude difference” for users accustomed to traditional storage arrays.
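The arithmetic behind the "effective" capacity claim is simply raw capacity multiplied by the data-reduction ratio. The ratios below are implied by the claimed 32 TB to 100 TB span for a 16 TB cache, and roughly bracket the 2x-4x (database) and 5x-10x (VDI) reduction figures Nunes cites.

```python
# Effective capacity = raw flash capacity * data-reduction ratio.
raw_tb = 16
low_ratio, high_ratio = 2.0, 6.25   # implied by the claimed 32-100 TB span
low_effective = raw_tb * low_ratio
high_effective = raw_tb * high_ratio
print(low_effective, high_effective)  # prints 32.0 100.0
```

Note that the upper bound implies a blended ratio of about 6.25x, below the 10x VDI ceiling, suggesting the 100 TB figure assumes a mix of workloads rather than pure VDI.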

The Datrium storage system consists of Distributed Execution Share Logs (DiESL) Hyperdriver software, which runs on host servers, and NetShelf disk-based appliances for durable storage on the backend. Customers supply the servers, the flash for the server-based cache and the VMware virtualization software. They manage the Datrium storage through VMware vSphere.

“People want shared storage for consolidation, but flash really ought to be in the host,” said Datrium’s CEO and founder Brian Biles, who also founded Data Domain. “If you do it right, then reads never have to leave the host to get on the SAN. It’s cheaper and faster to buy flash that way [for the server], and it’s much lower latency. Then you don’t need as much in the backend repository, so it could also be lower cost.”

Datrium DVX became generally available in January. Datrium added an “Insane Mode” feature in May that enabled users to increase the number of CPU cores applied for I/O on any given host to boost performance.

With the new DVX 1.1, customers need to get higher capacity flash drives and additional RAM to boost the per-server cache raw capacity from 8 TB to 16 TB. The DiESL Hyperdriver software continues to support 32 hosts and eight solid-state drives (SSDs) per host.

“Typically our customers don’t use more than two [SSDs], but SSDs are definitely getting bigger, so we wanted to make sure that we could facilitate migration of the bigger workload,” Biles said.

Biles said Datrium would continue to increase the maximum capacity for the flash cache over time. Datrium DVX supports any type of flash drives on the VMware compatibility list.

“We access the drives through [VMware] ESX, so it just looks like a drive to us, and then we install our file system magic on top of that,” Biles said. “Normally we recommend lower-priced drives because our software’s very friendly to spreading the load and making them last a long time. Then it’s just lower cost. But we can do whatever the customer wants to do.”

Biles said workloads that might need the extra flash cache capacity to keep all data hot include analytics with data warehouses and large file servers shared across multiple hosts.

“What happens with these powerful servers today and all the cores, each core is basically running a VM, and each VM is asking for I/O,” Nunes said. “And when all those VMs ask for all that I/O in a roughly simultaneous way, it creates that delay across the SAN. In effect, flash has gotten too fast for the SAN. Multicore servers have gotten too fast. And it’s driving the need for the relocation of flash from the array right to the server to deal with that.”
