The CEO of solid state drive (SSD) manufacturer STEC said storage vendors are charging customers too much for SSDs.
STEC CEO Manouch Moshayedi said during the vendor’s earnings call Tuesday that the largest storage vendors – STEC’s OEM partners – are marking up SSDs from around $2 to $4 per gigabyte to around $50 to $70 per gigabyte, and that the markup is slowing the adoption of SSDs in enterprise storage.
“Frankly, from where I see it, the pricing that they’re charging is a little unsustainable on the SSD side because it’s 30 times what is out there available [from the SSD manufacturers],” Moshayedi said. “I think that pricing has to come down in order to really get SSDs out into the market through the major data storage system builders.”
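The arithmetic behind those figures can be checked directly. A minimal sketch (the markup multiples are derived here from the per-gigabyte prices cited above, not figures STEC quoted):

```python
# Implied markup from the figures cited: SSD manufacturers sell at
# $2-$4 per GB; storage vendors resell at $50-$70 per GB.
mfg_low, mfg_high = 2, 4     # manufacturer price range, $/GB
oem_low, oem_high = 50, 70   # storage vendor price range, $/GB

# The markup multiple ranges from $50/$4 = 12.5x to $70/$2 = 35x,
# which brackets the "30 times" figure Moshayedi cited.
markup_min = oem_low / mfg_high
markup_max = oem_high / mfg_low
print(f"Markup: {markup_min:.1f}x to {markup_max:.1f}x")
```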
He said SSDs are being more widely adopted by cloud computing service providers and data centers that use one SSD per server, because those buyers get flash at a lower price than they would by buying it inside storage arrays.
“So when you look at it, it’s a big price differential [between what storage vendors pay and what they charge customers],” he said. “Therefore people in data centers that don’t have to buy big data storage systems can use SSDs a lot more than data storage [users] can.”
EMC, IBM, Hitachi Data Systems and Hewlett-Packard all sell STEC SSDs in storage arrays.
STEC has a lot more to worry about than its partners’ pricing these days. The vendor’s second quarter revenue of $40.7 million was less than half of the $82.5 million in revenue from the same quarter last year. Revenue also dropped 19.2% between the first and second quarters this year. STEC lost $50 million, marking its third straight quarter in the red.
Moshayedi blamed STEC’s problems on its transition to next-generation SSDs as well as new PCIe SSD cards and EnhanceIO caching software. Its largest storage vendor partners are still qualifying those products. The near-term outlook isn’t rosy either, with the forecast calling for revenue of $40 million to $42 million and another big loss this quarter.
On top of that, the Securities and Exchange Commission (SEC) has charged Moshayedi with insider trading. The SEC claims Moshayedi and his brother Mark — a STEC founder — failed to disclose information that could have lowered the stock price at the time they were selling shares that brought them $134 million.
Moshayedi called those allegations “unsubstantiated” during the Tuesday call. “I intend to vigorously defend myself against unsubstantiated allegations, and we expect that through an independent evaluation of facts we’ll find the complaint is without merit,” he said.
SolidFire, which sells all-solid state storage arrays to cloud service providers, revealed its first customer today. Calligo, based in the U.K.’s Channel Islands, is running a series of cloud services using SolidFire storage.
Calligo CEO Julian Box said Calligo went live with SolidFire about a month ago. Calligo uses SolidFire storage as the back end for its CloudSafe (Disaster Recovery), CloudDesk (virtual desktop), CloudNet (virtual network), and CloudCore (Infrastructure as a Service) services.
Box cited SolidFire’s performance, automated reporting and monitoring, and the ability to scale capacity and performance independently as reasons why he chose the newcomer’s array.
Calligo has one SF3010 array with 10 300GB SSDs for 3 TB of raw capacity, and Box said SolidFire’s compression and deduplication gives him about 50 TB of effective capacity.
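The effective-capacity claim implies a specific data reduction ratio, which is easy to derive from the numbers above. A minimal sketch (the ~16.7:1 ratio is computed here, not a figure Calligo quoted):

```python
# Figures from the article: 10 x 300 GB SSDs = 3 TB raw capacity,
# and ~50 TB of effective capacity after compression and deduplication.
raw_tb = 10 * 0.3        # 3 TB of raw SSD capacity
effective_tb = 50        # effective capacity claimed

# The implied combined data reduction ratio is ~16.7:1.
reduction_ratio = effective_tb / raw_tb
print(f"Implied data reduction ratio: {reduction_ratio:.1f}:1")
```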
Before choosing SolidFire, Box said he looked at traditional storage arrays from Hewlett-Packard 3PAR and Dell Compellent. He said 3PAR would require hundreds if not thousands of disks to get the throughput he gets from SolidFire.
“We have oodles and oodles of power and throughput,” he said. “I have five nodes for 250,000 IOPS and it takes up 5u.”
He said he expects to add about one storage node a month to keep up with capacity demand. As for performance, he said “We have more IOPS now than we could ever consume.”
Box also likes that he can use SolidFire’s quality of service to guarantee performance on a volume basis.
“It’s broken the link for capacity and performance,” he said of the SolidFire array. “We can control capacity and performance like a dial.”
He said the one thing missing from SolidFire is replication, a feature that could facilitate disaster recovery.
Quantum is upgrading its DXi8500 enterprise data deduplication disk target appliance to support more capacity and end-to-end encryption.
The capacity boost comes from larger drives and the security features were added through a software enhancement, self-encrypting drives and support for the Key Management Interoperability Protocol (KMIP) industry encryption standard. Quantum supports KMIP in its disk and tape libraries.
The DXi8500 now supports 3 TB drives for a maximum of 330 TB of usable pre-deduped capacity in a 19-inch rack. The DXi8500 supported 320 TB before the upgrade, but required two racks to hit that maximum capacity. Quantum also improved performance to 11 TB per hour by making tweaks on the backplane and in software.
With DXi Accent performance enhancement software, the DXi8500 supports AES 256-bit encryption of data in transit and a Secure File Shred feature. The file shredding feature is part of the latest DXi software, version 2.2. Customers can mark files for deletion and then start the secure shred to overwrite those files.
A media server with DXi Accent software will encrypt data sent to the DXi8500 target, which then encrypts data at rest and can encrypt replicated data. Encryption at the target is enabled by self-encrypting hard drives, and encryption of data in transit is enabled by DXi 2.2 software. Quantum’s Scalar tape libraries also support the KMIP encryption standard.
Eric Bassier, Quantum director of product marketing, said having the drives perform encryption can speed the process more than 30% over using software encryption on the target device.
The DXi8500 competes mainly with EMC’s Data Domain DD990, the largest of the Data Domain family. According to Quantum’s earnings report last week, revenue from the DXi platform declined year-over-year last quarter but CEO Jon Gacek said Quantum had more than a 50% win rate from the DXi8500. Gacek said he’s happy with Quantum’s product compared to Data Domain, but EMC’s size is a concern.
“On DXi, I think our product portfolio is good,” he said during the earnings call last week. “The thing that makes me nervous about it is we’re small compared to the reference competitor. We do not mind competing with them at all and when I see them talking to customers about our financial results and our size of our company, it just reaffirms the fact that I think we’ve got them when it comes to product solutions. But they’re a big formidable competitor and we just have to be on our game.”
Major storage vendors constantly acquire technology through acquisitions and partnerships, and these deals often show a lot about where the industry’s technology is headed. They also show that large vendors often can’t afford to take the time to develop valuable technologies on their own.
The latest storage deal was Oracle’s acquisition of Xsigo for software-defined networking (SDN). Or maybe it is Xsigo’s I/O virtualization in a different form that Oracle wants. Either way, the deal is a portent of the coming competition for customers with new technologies, and of the industry’s focus on what are called converged infrastructures or converged systems.
Recent acquisitions bear this out: Dell acquired Quest Software and EMC acquired XtremIO. Partnerships are also key to delivering solutions, as shown by major announcements pairing Dell with RainStor, NetApp with Hortonworks, and Quantum with Amplidata.
And don’t forget the relationships between small vendors and investors — Tintri, Panzura, and Avere recently scored big funding rounds, proving that venture capitalists see value in their technologies. This is happening at a time that used to be the summer doldrums, when people in the storage industry could take vacations and not have the world change while they were gone.
There are massive technology changes underway, and few if any companies can develop or integrate everything on their own any more. Acquisitions and partnerships are the way to quickly fill the arsenal, which is necessary to remain competitive. Storage is one area where a premier vendor cannot afford to fall behind the competition.
Partnerships are being exploited more than ever before. Converged systems with servers, storage, networking, and orchestration software are being offered as solutions to customers to simplify the purchase, deployment, and operation of their environments.
Storage vendors have wisely looked to value-added distributors and system integrators as the delivery vehicle, with reference configurations and partner-led integrated solutions combining the vendor’s storage with selections of servers, server virtualization software, and networking. NetApp FlexPod and EMC VSPEX are good examples here. Server vendors also offer complete solutions such as Hewlett-Packard VirtualSystem and IBM PureSystems. There is even an entire company, VCE, formed specifically to sell integrated products from EMC, VMware, and Cisco.
Large storage vendors today are showing results from strategic investments made in recent years – HP and 3PAR, Dell and Compellent/EqualLogic, IBM and XIV, EMC and Data Domain/Isilon, and many others.
The acquisition of companies probably will not slow down, at least through this year. What storage vendor doesn’t have a need for more flash technology, or perhaps a missing piece of its cloud puzzle? It may take several years to judge whether an acquisition was strategic and ultimately profitable. The crystal ball used by many may be a bit cloudy or tainted by previous experience but opportunity and successes are out there.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Fusion-io this week revealed plans to launch ION Data Accelerator software later this year, giving the vendor the ability to turn a server with a Fusion-io ioMemory flash card into a shared storage appliance.
Fusion-io calls this technology software-defined storage, a play on the software-defined networking (SDN) term commonly used these days. The software turns Fusion-io’s PCIe-based flash cards into competition for EMC’s as-yet-unreleased “Project Thunder” flash-based shared storage appliance. Project Thunder builds on EMC’s VFCache, which places PCIe cards in a single server and competes with Fusion-io’s current products. ION’s ability to turn servers with flash into networked storage systems also makes Fusion-io more competitive with all-flash storage arrays.
According to Fusion-io, ION lets customers move entire mission-critical databases to shared ioMemory for better and more reliable performance.
We won’t know for sure how the software performs until its general availability release in October, but Fusion-io is promising impressive numbers: more than one million IOPS with up to 6 GBps throughput and 56 microsecond latency from one 1U server.
ION also allows administrators to create RAID sets and LUNs and monitor performance. It handles high availability by synchronously replicating data between ION systems and has a Power Cut Safety feature that protects data in a power failure without requiring UPS systems or battery backups.
The ION software supports Fusion-io’s ioMemory flash technology in its ioDrive, ioDrive Duo, ioDrive2 Duo and ioDrive Octal flash cards. ION works with Fusion-io’s ioTurbine and directCache caching software and can be managed through the vendor’s ioSphere GUI or command line interface. ION supports 8 Gbps Fibre Channel, quad data rate (QDR) InfiniBand, and 10-Gigabit Ethernet iSCSI block storage protocols.
Out of the gate, ION software will be available bundled with Hewlett-Packard ProLiant DL370G6 and SuperMicro 1026GT-TRF servers with ioDrive2 Duo multi-level cell (MLC) flash cards. Fusion-io said the software has been tested with other HP ProLiant servers as well as Dell PowerEdge and Cisco UCS servers.
Fusion-io said it has more than 12 early access customers for the ION software, which will have a suggested retail price of $3,900.
Fusion-io also this week said it is working with NetApp on ways to use its flash technology and caching software with NetApp’s Flash Cache and Flash Pool flash tiers.
Rather than build an all-flash array from the ground up or buy a startup that already has done that, Hewlett-Packard (HP) is offering its existing storage arrays with all solid-state drives (SSDs) as a high-performance option.
HP this week started shipping an all-SSD version of its 3PAR P10000 Storage System. This follows HP’s launching of an all-SSD version of its LeftHand iSCSI SAN in February.
The 3PAR all-flash system holds 512 SSDs, which can be 100 GB or 200 GB single-level cell (SLC) drives. That would make the maximum capacity roughly 100 TB of flash with a fully loaded 200 GB SSD system. HP said the pricing for such a system would “depend upon the configuration.”
HP did disclose a starting price of $350,000 for an all-SSD P10000. That includes 16 100 GB drives for 1.5 TB of capacity, two controllers and drive chassis and six 4-port 8 Gbps Fibre Channel adapters.
3PAR has supported hybrid flash systems mixing small amounts of SSDs with hard drives since 2010, before HP acquired the vendor.
HP this week also announced HP Smart Cache for ProLiant Gen8 servers. Smart Cache is server-side caching similar to PCIe cards from Fusion-io and EMC’s VFCache. HP will also have software that places hot data on SSDs. When used with 3PAR storage, Smart Cache will copy data from the 3PAR arrays to the Smart Array cache on a ProLiant Gen8 server for increased performance.
Smart Cache is an alternative to Fusion-io cards, which HP also sells with its servers. Smart Cache is expected later this year.
Like most storage vendors, HP’s flash strategy will likely evolve. Its rival EMC also has all-SSD options for its storage arrays and acquired all-flash array startup XtremIO earlier this year, although it won’t have an XtremIO system in the market until next year. Many storage insiders expect all the top vendors will eventually have all-SSD systems built specifically for flash, and there is no shortage of startups they can acquire to get that technology.
Software-defined networking (SDN), the hottest emerging networking technology, is also spilling into storage. That spillover accelerated today when Oracle acquired startup Xsigo, which allows servers to connect to any storage or network devices.
Oracle did not disclose the price, but it obviously was less than the $1.2 billion VMware paid for SDN startup Nicira Networks last week. Cisco also has a $100 million investment in Insieme to deliver similar technology.
Xsigo didn’t use the term SDN to describe itself, but Oracle did when announcing the deal. Xsigo marketed itself with the simpler term of I/O virtualization. Maybe that’s because using its software required customers to purchase a fabric director switch, and sometimes expansion switches and I/O cards.
But Oracle may choose to deploy the IP differently. Oracle isn’t likely to go into much detail for its plans until after the deal closes, which probably won’t be before October.
The first priority for Oracle when it makes an acquisition is to use the new technology to optimize Oracle products. But in a Q&A document Oracle released today, it said it would continue to support Xsigo customers and the cloud as well as the Oracle stack.
“While we expect to optimize Xsigo’s performance with the Oracle stack, Xsigo’s products will continue to support all heterogeneous environments and benefit any cloud deployment,” was Oracle’s answer to whether Xsigo products will continue to operate with non-Oracle systems.
Like VMware and Cisco, Oracle recognizes that server virtualization is changing the way those servers connect to storage and networks. Oracle will use Xsigo to address those changes.
“Oracle recognizes that achieving revolutionary improvements in both performance and efficiency requires a paradigm shift in the way compute and storage systems are interconnected and how that system interconnection is managed,” Oracle said in its release. “Xsigo simplifies cloud infrastructure and operations by allowing customers to dynamically connect any server to any network and storage, resulting in increased asset utilization and application performance while reducing cost. Because Xsigo consolidates and virtualizes the physical resources utilized to interconnect servers and storage, Xsigo is uniquely positioned to simplify the management of virtualized server and storage connectivity.”
Along with its directors and switches, Xsigo’s technology includes Fabric Accelerator software that connects virtual machines to storage and networks through software links that Xsigo calls Private Virtual Interconnects. It also has a Fabric Manager application to create, monitor and manage connections between servers and storage/networks.
Xsigo, founded in 2004, claims more than 300 customers including eBay and CarFax. Oracle said it expects Xsigo management and employees to join Oracle after the acquisition closes.
As with practically any industry, product names are crucial when selling storage.
Storage products sit at the heart of data centers and protect business information. An individual storage system may stay in use for four or five years, and there is a great likelihood that a successor to that system will be purchased at least once to minimize the risks and operational changes of moving to another platform.
This is where the name becomes important. The identity of the system is associated with that name and the vendor. A recognizable name can be a major factor in sales.
Naming is a complex exercise for vendors when bringing out a new storage system. There may be great value in association with the preceding product; sometimes there is greater value in not having the same name as the predecessor. But the name needs to be memorable so that the customer can immediately associate it with the product. What is the most suitable yet memorable name? Should it be part of the vendor’s overall product naming convention? These are questions vendors must answer.
Sometimes vendors make up names by taking words or syllables that seem to convey information about the product. Other times, they jam together two words with no space but with the second word capitalized to make it recognizable. The latter is an unimaginative way to identify something. It could be better to go with a unique name. These sometimes stick for a long time. An example would be EMC’s Symmetrix.
Probably the worst way to create a memorable name is to string together multiple descriptive words as the product identity. There have been many such cases, and some are laughable. Add the company name in front of a string of descriptive words and most people can’t remember the result. It’s as if someone worked hard at making sure customers won’t remember the product.
Another issue to address when naming products is whether the name should describe what the product does, describe its capabilities, or allude to something in the industry. The last option is tough because it’s difficult to find descriptive words that are not already overused in the industry. And something that sounds clever today might seem out-of-date in the future.
Because the branding and naming exercise usually happens late in the process of delivering a product, developers tend to know their products by their internal code names. Sometimes vendors publicly refer to products by their code names in development, and the media and customers follow suit. Then, the vendor will change the name when delivering the product. EMC’s Project Lightning (called VFCache at release) and Project Thunder (not yet named) are examples of this.
Branding is an inexact science and most vendors need to pay more attention to it. Memorable names help sales, and should be a big focus of a product launch. Of course, sound products also help.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Veeam Software this week continued its “freemium” strategy of offering free features from its virtual machine backup software in hopes of gaining publicity and new users.
The newest freebie is Veeam Explorer for Microsoft Exchange, which lets virtual machine admins search and retrieve items inside Exchange without an agent. The admins can browse Exchange databases from a compressed backup file. Veeam claims the databases will be searchable in less than two minutes. Items can be exported to PST or MSG files.
Veeam Explorer for Exchange is now an “exclusive beta,” which means it is available for what product strategy specialist Rick Vanover calls “our largest fans.” That group consists of large customers as well as frequent Veeam tweeters and bloggers who will spread the word.
The Exchange feature requires the full Veeam Backup & Replication application now, but will be added to the Veeam Backup Free Edition that Veeam launched in June. That free edition does ad hoc and limited backups of VMware and Microsoft Hyper-V but lacks support for deduplication, replication, incremental backups and a backup scheduler.
“The free version was light, but we gave it legs by adding this tool,” Vanover said.
The Exchange feature will also be built into the next version of Backup & Replication, due before the end of the year.
Vanover said the “freemium model helps us reach people, and in some cases is an eye opener for them. We’re banking on a lot of interest for Explorer for Exchange. Exchange is a beast. A lot of people have their own personal ‘big data’ in Exchange. This tool lets them work with it right from their backups.”
Veeam’s competitive situation changed earlier this month with Dell’s $2.4 billion acquisition of Quest Software. Quest owned Veeam’s major VM-only backup rival vRanger. Dell hasn’t said much about its plans for Quest’s backup products, but it can pump more development and distribution resources into vRanger than Quest did.
Vanover said the immediate impact for Veeam is that its close partnership with Dell will end.
“That changed our relationship with Dell,” he said. “We’ll still go the distributor route with them, but in terms of joint promotions, that’s non-existent now.”
Dell’s strategy for dealing with “big data” is to shrink it.
The shrinking tool Dell is using is partner RainStor’s database packaged with the Dell DX Object Storage platform. Dell will sell the combination under the Big Data Retention brand. The RainStor database is also certified to work with Dell’s EqualLogic and Compellent SANs, but the reseller deal is limited to DX for now.
The RainStor database, which can work as a standalone repository or as an analytics platform with Hadoop, has its own patented form of deduplication that Dell claims can provide an average data compression ratio of up to 40 to 1. The RainStor database dedupes data and writes a file to any type of storage, such as the DX Object Storage. Dell already owns deduplication technology it acquired via its Ocarina Networks acquisition. The Ocarina deduplication has been built into the Dell DR4000 disk backup system and is also expected to be integrated into the Dell Fluid File System.
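RainStor’s patented method isn’t public, but the general idea behind deduplication, storing each unique piece of data once and replacing repeats with references, can be shown in a short sketch. The `dedupe` function below is a generic content-addressed illustration under that assumption, not RainStor’s algorithm:

```python
import hashlib

def dedupe(chunks: list[bytes]) -> tuple[dict, list[str]]:
    """Generic content-addressed deduplication sketch: store each
    unique chunk once, keyed by its hash, and keep a list of hash
    references that reconstructs the original stream. (Illustrative
    only; RainStor's patented method is not public.)"""
    store: dict[str, bytes] = {}   # unique chunks, keyed by SHA-256
    refs: list[str] = []           # per-chunk references into the store
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy
        refs.append(digest)
    return store, refs

# Highly repetitive data dedupes well: 100 chunks, only 2 unique.
data = [b"header"] * 50 + [b"record"] * 50
store, refs = dedupe(data)
print(f"{len(data) / len(store):.0f}:1 reduction on this toy stream")
```

Real products chunk variable-length data and add compression on top, which is how vendors reach ratios like the 40:1 figure Dell cites for repetitive structured records.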
“We consider [RainStor’s deduplication] to be a complementary technology to Ocarina,” said Amy Price, a manager at Dell’s Storage Data Management Group.
Big Data Retention customers can add capacity starting as small as 2 TB and scale to petabytes and billions of objects without managing the LUNs and RAID groups that are part of traditional storage.