SanDisk became the PCIe flash market leader with its 2014 acquisition of server-side flash pioneer Fusion-io. Less than a year later, there is a lot less to that market and that is having a negative impact on SanDisk’s bottom line.
SanDisk’s $1.32 billion in revenue last quarter was below its initial forecast of $1.4 billion to $1.45 billion, and down 12 percent from last year and 23 percent from the previous quarter. It cut its full-year guidance for 2015 to a range of $5.4 billion to $5.7 billion from previous guidance of $6.5 billion to $6.8 billion. The new annual forecast would bring SanDisk revenue below its $6.6 billion from 2014. To compensate for the loss of revenue, the flash vendor announced it will reduce its workforce by around five percent.
Two factors hit SanDisk hard last quarter. It sold far fewer SAS solid-state drives (SSDs) than expected, and larger customers started moving off PCIe cards onto 2 TB SATA SSDs. SanDisk CEO Sanjay Mehrotra blamed the SAS product problems on “execution” issues largely related to product qualifications. He attributed the PCIe problem to companies finding SATA drives good enough at a lower price. SanDisk hopes to change that with a new PCIe platform coming this month.
“Our results as well as 2015 revenue estimates for our Fusion-io PCIe solutions are significantly below our original plan,” Mehrotra said on the vendor’s earnings call Wednesday. “We are seeing a substantial portion of the PCIe [market] moving to lower cost solutions using enterprise SATA SSDs.”
“We expect to fully remain the market share leader [in PCIe], and we plan to address that in the 2016 timeframe with solutions using our captive memory as well as when we introduce NVMe solutions,” he said.
In the meantime, there will be pain and cuts at the company.
There’s a lot of data out there, and there’s going to be even more in the future, and businesses need to figure out better ways to use and deliver it because the always-connected “information generation” of digital customers will increasingly expect to have it at their fingertips.
That’s the upshot of a report, entitled “The Information Generation: Transforming the Future, Today,” that EMC released today. The storage giant tapped the Institute for the Future to create a global study to identify and forecast the top “business imperatives” for the next decade. The Institute consulted more than 40 “influential global decision-makers and experts” across multiple industries and passed the baton to Vanson Bourne to survey 3,600 business leaders from 18 countries.
Half or more of the survey’s 3,600 “director and C-Suite” respondents said their customers want access to services faster than ever, 24/7 access and connectivity, and access on an increasing number of multi-channel platforms. Vanson Bourne didn’t survey any of their customers to see what they expect.
The six “attributes for success” the report identified – and only 22% to 31% of the business leaders said they’re addressing “extremely well” – are:
1) Predictively spot new opportunities in markets
2) Demonstrate transparency and trust
3) Innovate in agile ways
4) Deliver unique and personalized experiences
5) Operate in real time
6) Pursue continuous learning
While the report’s findings may not be so revelatory, they will undoubtedly set the stage for the presentations and announcements at EMC World during the first week of May in Las Vegas.
“We’re big into information. So, the more information that’s out there, the more information that needs to be stored and managed and analyzed,” said Jeremy Burton, president of products and marketing at EMC, when asked how the report’s findings tie into the company’s grand plan. “But I think over time, the unit cost of managing information today is several orders of magnitude less than it was 10 years ago. And in the future, it will be several orders of magnitude less again.”
Burton said the architecture of next-generation distributed applications will differ from the traditional architecture of applications such as SAP and Oracle, and the new apps will “much more happily run on commodity infrastructure.” He said the value that EMC can bring increasingly will extend into areas such as building applications and analyzing data, through the company’s Pivotal startup (in which GE is an investor), and securing information, through the company’s RSA acquisition.
EMC’s product direction also includes the upcoming server-based DSSD flash to handle the ultra-fast processing required by the new wave of in-memory databases and analytics applications. Burton said that, after processing, the data will be dumped off to software-defined object stores that can run on cheap commodity hardware.
The company knows it has to compete on price with public cloud behemoths such as Amazon, Google and Microsoft and offer a compelling alternative for the do-it-yourselfers testing the open source waters. Burton claimed EMC has one exabyte-scale customer that paid about 30% less than it would have spent with Amazon Web Services, and he predicted many more exabyte-scale customers in the future.
“You’re going to see a lot of interesting things this year that EMC five years ago never would have done,” said Burton.
In a sign that more companies are moving important applications to the cloud, SIOS Technology Corp. achieved Microsoft Azure certification to provide customers with high-availability clusters in Azure using Microsoft Windows Server Failover Clustering software.
SIOS targets companies that need to protect business-critical Windows environments from data loss in the cloud. It sells clustering software for both SAN and SANless configurations, providing high availability for Windows applications through Microsoft Windows Server Failover Clustering. SIOS DataKeeper Cluster Edition software does real-time, block-level data replication of Windows server environments without the need for additional hardware accelerators or compression devices.
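Conceptually, block-level replication of this kind tracks which blocks have changed on the primary and ships only those to the standby node. As a rough sketch of the idea (not SIOS's actual mechanism, which captures writes in real time rather than rescanning), the following Python compares a primary volume image with its replica block by block and copies only the blocks that differ; the file names and block size here are hypothetical:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size

def replicate_changed_blocks(source_path, replica_path):
    """Toy block-level replication: copy only blocks that differ.

    Real products such as DataKeeper capture writes as they happen
    instead of scanning, but the block-by-block principle is the same.
    """
    with open(source_path, "rb") as src, open(replica_path, "r+b") as dst:
        offset = 0
        while True:
            src_block = src.read(BLOCK_SIZE)
            if not src_block:
                break
            dst.seek(offset)
            dst_block = dst.read(BLOCK_SIZE)
            # Ship a block only when its checksum differs on the replica.
            if hashlib.sha256(src_block).digest() != hashlib.sha256(dst_block).digest():
                dst.seek(offset)
                dst.write(src_block)
            offset += BLOCK_SIZE

replicate_changed_blocks("primary_volume.img", "replica_volume.img")  # hypothetical images
```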
Jerry Melnick, SIOS’ chief operating officer, said the vendor is seeing more customers deploying mission-critical applications in the cloud. These applications include SQL Server, SAP and Oracle.
“A year ago, I would have said it is a trickle but that is changing quickly,” Melnick said. “Customers are quickly getting serious about moving these applications to the cloud. What makes us different is the completeness and flexibility of our solution. It provides protection across fault domains transparently. There is no need for changes to your applications or operational processes.”
Mission-critical applications that depend on SAN-based Windows server failover clusters for protection can be moved to Azure and achieve the more comprehensive high-availability protection they need. Typically, if customers want clustering, the configuration needs more than one instance of the application in the cloud so that data is stored across multiple fault domains.
Melnick said cloud providers offer high availability but not automated failover.
IBM is working on putting 220 terabytes of storage in a palm-sized tape cartridge. The company has developed breakthrough technology that shrinks the size of bits on magnetic tape and offers the ability to store 123 billion bits of uncompressed data on a square inch of magnetic tape.
“Many people are surprised that there is still a vibrant tape industry,” said Mark Lantz, manager for exploratory tape at IBM Research. “This new technology will enable future products. Areal density growth in disks is slowing down, and tape is becoming more attractive.”
Lantz said IBM has developed key technologies that will be used in future tape products. IBM spent about 13 years collaborating with Fujifilm to develop an enhanced write head technology that uses finer barium ferrite particles for higher density. The new particulate Nanocubic BaFe magnetic tape decreases the particle volume necessary for high-density recording.
Other innovations that will be used in new tape products include signal-processing algorithms for the data channel based on noise-predictive detection principles, a high-bandwidth head actuator, and advanced servo control that allows head positioning with an accuracy of better than 5.9 nanometers. That enables a track density of 181,300 tracks per inch, more than a 40-fold increase over an industry-standard LTO-6 tape cartridge, which holds 2.5 TB of uncompressed data in a four-by-four-inch cartridge.
“The new write head technology produces stronger magnetic fields and allows us to take advantage of the new barium ferrite,” Lantz said. “The new signal-processing algorithms mean we can work with much noisier signals and still achieve high reliability in tape systems.”
The 220 TB cartridge could hold roughly 1.37 trillion mobile text messages. IBM demonstrated the technology with a Fujifilm prototype tape at the National Association of Broadcasters Show in Las Vegas this week.
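Those headline figures hang together arithmetically, as a quick back-of-the-envelope check shows. The LTO-6 track count used below (2,176 tracks across half-inch tape) is an assumption taken from the LTO-6 specification, not from IBM's announcement:

```python
# Sanity checks on the announced figures (all values uncompressed).

# Track density: 181,300 tracks per inch vs. LTO-6.
lto6_tracks_per_inch = 2176 / 0.5       # assumed LTO-6 track count on half-inch tape
demo_tracks_per_inch = 181_300
print(f"gain over LTO-6: {demo_tracks_per_inch / lto6_tracks_per_inch:.1f}x")  # ~41.7x

# Linear density implied by 123 billion bits per square inch.
print(f"bits per linear inch: {123e9 / demo_tracks_per_inch:,.0f}")  # ~678,000

# Bytes per message implied by 220 TB holding ~1.37 trillion text messages.
print(f"bytes per message: {220e12 / 1.37e12:.0f}")  # ~161, about one 160-character SMS
```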
“People who say tape is dying are typically people who do not have tape in their storage portfolio,” Lantz said. “The technology is not dying. In fact, use cases are growing, particularly in low-cost archiving storage tiers.”
Quantum made great progress last quarter in its quest to become not-just-a-tape-company.
Quantum said Thursday night that it exceeded its forecast for revenue last quarter, with the upside coming from its disk products. It also turned a $12 million profit, but that was due to its investment in object storage vendor Amplidata.
Quantum said its revenue for the quarter exceeded $145 million, well above its guidance range of $130 million to $135 million and up from $128 million last year. Scale-out storage – mainly StorNext – more than doubled to over $30 million and DXi disk backup revenue increased nearly 30 percent to approximately $25 million.
Although the vendor still derives most of its revenue from tape, no tape products were mentioned in the release. Quantum’s tape sales, like those of most tape vendors, have been on a steady decline.
Quantum’s net income of $12 million included a $13 million payout from Western Digital’s acquisition of Amplidata. Quantum helped fund Amplidata, which is also a strategic partner. Quantum OEMs Amplidata’s Himalaya software in its Lattus appliances.
Quantum’s overall revenue as well as its StorNext and DXi numbers all increased in the first calendar quarter from the fourth quarter last year. That’s unusual in the storage business because the fourth quarter is usually the best quarter for sales.
Scality this week made version 5 of its Ring object storage software generally available, adding support for the Server Message Block (SMB) protocol so Microsoft Windows clients and servers can access the storage, along with a new user interface and a simpler installation.
Scality originally previewed Ring 5.0 last September, emphasizing extra VMware integration through vStorage API for Array Integration (VAAI) and VMware Storage API for Storage Awareness (VASA) support.
Leo Leung, Scality’s vice president of corporate marketing, said the vendor added SMB support because many of its enterprise customers have large Microsoft deployments.
Part of the reason for the easier installation is to allow partners to set up a system with Ring in less than 15 minutes, Leung said. Scality signed a reseller deal with HP last October. The Scality object storage software runs on HP ProLiant servers.
“We have simplified the installation so you don’t have to touch every server or node,” Leung said. “Before, you had to configure and install on each node, doing it with Linux.”
Scality uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. Ring’s core features include replication and erasure coding for data protection, auto-tiering and geographic redundancies inside a cluster.
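Erasure coding generally protects data with less capacity overhead than full replication by storing parity computed across chunks of an object. The Python below is a minimal single-parity illustration of the concept using XOR; Scality's actual coding is more sophisticated and tolerates multiple simultaneous failures:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Split an object into equal-size data chunks and compute one parity chunk.
data_chunks = [b"chunk-one!", b"chunk-two!", b"chunk-3!!!"]
parity = xor_blocks(data_chunks)

# Lose any one chunk: XOR of the survivors plus parity rebuilds it.
lost = 1
survivors = [c for i, c in enumerate(data_chunks) if i != lost]
assert xor_blocks(survivors + [parity]) == data_chunks[lost]
```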
The new user interface is designed for “point-and-click” provisioning of resources, while the Central Installation Platform included with Ring 5.0 uses the Salt open source configuration management framework to initiate commands, even for cloud deployments.
Scott Sinclair, an analyst with Enterprise Strategy Group, said the content delivery space is an area where Scality has a good fit.
“Scality is very good at creating a large massive repository to hold a lot of digital assets,” Sinclair said.
Diablo Technologies claimed a “decisive victory” in the lawsuit brought by Netlist over technology used in Diablo’s Memory Channel Storage architecture. However, Netlist vowed to fight on with its patent suit against Diablo and partner SanDisk.
Netlist charged Diablo with patent infringement, and gained an injunction in January preventing Diablo from shipping chips used in SanDisk ULLtraDIMM storage products.
A federal jury in the U.S. District Court for the Northern District of California ruled Wednesday that there was no breach of contract or misuse of trade secrets. Both sides agree on that. But while Diablo maintains the jury confirmed its sole ownership and inventorship of the ’917 patent, Netlist says the patent issue is not settled. The ’917 patent was issued for a method of programming a load-reduced dual inline memory module (LRDIMM).
Netlist claims the jury ruled in its favor on two trademark counts Wednesday. It also may appeal the parts of the verdict that went in Diablo’s favor.
Even without an appeal, the issue remains unsettled. Briefing is scheduled to be completed by April 3, with an oral hearing before a judge tentatively set for April 10 to determine the effect of the jury verdict on the preliminary injunction granted to Netlist in January.
Diablo is expected to file a motion to remove that injunction because of Wednesday’s verdict.
“We are extremely pleased with the jury’s verdict today,” Diablo CEO Riccardo Badalone said in a statement released by the vendor. “We look forward to getting back to serving our customers and delivering on our exciting Memory Channel Storage roadmap.”
Netlist released a statement indicating it still intends to win the patent case:
“Netlist intends to vigorously pursue its patent suit against SanDisk and Diablo related to the ULLtraDIMM. The verdict in the trade secret case has no effect on the patent case. The company also intends to request that the court correct the verdict as to the breach of contract count, and to pursue all available appeals.”
Backup appliance revenue hit $1 billion worldwide for the first time in the fourth quarter of 2014 as Symantec and Quantum made huge gains and market leader EMC slipped a bit, according to IDC’s quarterly numbers.
The market grew three percent over the fourth quarter of 2013, and the 924 PB of capacity shipped in the quarter was up 40.3 percent year over year. EMC’s revenue of $640 million slipped 1.3 percent from the previous year, and its share fell from 66.6 percent to a still dominant 63.8 percent.
No. 2 Symantec continued to make gains with its NetBackup appliances. Its revenue grew 23.1 percent to $115.7 million in the quarter as its market share went from 9.6 percent to 11.5 percent. No. 5 Quantum revenue grew 33.5 percent from $17.6 million to $23.5 million and its share rose from 1.8 percent to 2.3 percent. No. 3 IBM’s revenue fell 10.4 percent to $67.2 million and No. 4 Hewlett-Packard slipped 5.8 percent to $40 million. All other vendors combined grew revenue 21 percent to $116.6 million.
For the full year, backup appliance revenue increased four percent to $3.259 billion. EMC grew revenue 3.6 percent to $2.03 billion and held 62.3 percent of the market. Symantec gained the most, increasing revenue 16.2 percent to $417.5 million for a 12.8 percent share. IBM dropped 12.3 percent to $205.9 million and 6.3 percent share, HP fell 10.6 percent to $134.5 million and 4.1 percent share, and Quantum increased 4.9 percent to $73.3 million and 2.2 percent share. Others combined for 10.6 percent growth and 12.2 percent market share.
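The reported shares follow directly from the revenue figures; dividing each vendor's full-year revenue by the $3.259 billion total reproduces IDC's percentages:

```python
# Full-year 2014 backup appliance revenue in $M, per the IDC figures above.
total = 3259.0
vendors = {"EMC": 2030.0, "Symantec": 417.5, "IBM": 205.9, "HP": 134.5, "Quantum": 73.3}

for name, revenue in vendors.items():
    print(f"{name}: {revenue / total:.1%}")
# EMC 62.3%, Symantec 12.8%, IBM 6.3%, HP 4.1%, Quantum 2.2%,
# matching the reported shares.
```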
IDC defines a purpose-built backup appliance as a standalone disk-based product such as EMC’s Data Domain or a system tightly integrated with the backup software such as the Symantec NetBackup Appliance.
Red Hat today rebranded its storage software platforms in hopes of clarifying their intended use cases.
Inktank Ceph Enterprise becomes Red Hat Ceph Storage and Red Hat Storage Server becomes Red Hat Gluster Storage. The scale-out open source platforms fall under a Red Hat Storage umbrella. Both storage platforms came to Red Hat through acquisitions. Red Hat picked up file-based Gluster in 2011 and added block and object storage vendor Inktank in May 2014. Since the Inktank acquisition, there have been questions about whether Red Hat would keep the two platforms separate or merge them.
Both will live on, Red Hat Storage director of product marketing Ross Turk said. Red Hat positions Gluster Storage as best suited for enterprise virtualization (VMware), analytics (Hadoop and Splunk), and sync-and-share workloads. Ceph Storage is designed for OpenStack, object storage (Amazon S3-compatible) and other cloud infrastructure workloads. Both can play in the archiving and rich media markets.
“This represents a sea change in how we talk about our products, and we’re starting with the workloads,” Turk said. “We’re trying not to think of Gluster as file and Ceph as object and block. We’re trying instead to think of what the best architectural fit is for each of them.”
Put another way, Gluster is better suited to traditional scale-out file storage needs, while Ceph is aimed more at public cloud storage developers and enterprises with large development teams.
“Ceph was built for the cloud and for building infrastructure,” Turk said. “Red Hat Gluster Storage is a completely different beast. It’s purpose built as a scale-out file store. Gluster was built to consolidate storage resources on a bunch of servers under a single namespace.”
Turk said the two platforms are licensed separately, and Red Hat has customers using both.
Condusiv Technologies this week launched Diskeeper Server software to defrag disks connected to SANs.
Wait, is fragmentation even a thing for SANs? You never hear SAN vendors or even customers talk about it. What are all those RAID schemes and expensive SAN controllers for anyway?
Condusiv insists fragmentation is a problem at the logical disk layer, whether the SAN vendors admit it or not. And that fragmentation is impacting performance of applications on physical servers.
“We’ve expanded our fragmentation technology beyond local storage to include SAN storage,” said Brian Morin, Condusiv’s senior vice president of global marketing.
That means Condusiv is expanding its focus from desktop and laptop PCs to servers. Condusiv has been around for 33 years, but was known as Diskeeper until 2012. It claims to have sold 45 million licenses, but with the PC market shrinking and flash becoming popular, the company is targeting SANs as its new frontier.
It’s not defragging disks in SAN arrays, but preventing files from being broken into pieces and written non-sequentially to hard disk drives or solid-state drives. That way, it prevents fragmentation before it becomes an issue.
Morin said Diskeeper Server can improve the way the Windows file system writes files to disk. “We monitor the system and see how data is being created,” he said. “Whenever a file is created or extended, Windows looks for a fixed size allocation because it doesn’t know how big the file is going to be. That causes extra I/O, or fragmentation. We give it the intelligence to say ‘This file is going to be this big.’ It finds the correct allocation and prevents that fragmentation from occurring in the first place.”
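The preallocation idea can be approximated at the application level by declaring a file's final size before writing, so the file system can reserve one contiguous extent up front. The sketch below illustrates the concept only; Diskeeper applies it transparently inside Windows rather than in application code, and the file name and size here are hypothetical:

```python
import os

def write_preallocated(path, payload):
    """Write a file after reserving its full length up front.

    Declaring the final size first lets the file system pick one
    contiguous extent instead of allocating piecemeal as the file
    grows -- preventing fragmentation rather than fixing it later.
    """
    with open(path, "wb") as f:
        try:
            os.posix_fallocate(f.fileno(), 0, len(payload))  # reserves blocks on Unix
        except AttributeError:
            f.truncate(len(payload))  # no posix_fallocate on Windows; extend as a hint
        f.seek(0)
        f.write(payload)

write_preallocated("big_output.bin", b"\0" * (64 * 1024 * 1024))  # hypothetical 64 MB file
```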
There can be a performance hit from the fragmentation prevention process, but Morin said that is less of a drag than the results of fragmented files.
“If Windows splits a file into 20 pieces, it’s writing 20 different I/O streams,” Morin said. “Windows sees that file as 20 separate pieces and issues I/O operations for every piece of that file. That’s a lot of I/O overhead to the physical server.”
Condusiv claims 73 percent of physical servers – about 2 million of them — are attached to SANs and fragmentation dampens their performance by 25 percent or more.
Diskeeper Server has a list price of $399.95 per server.