Jay Kidd is retiring as NetApp CTO, although not necessarily ending his career.
In a blog post today, Kidd said he is leaving NetApp after 10 years. Kidd wrote that he is leaving corporate life and “will shift my time to pursue more personal interests and delve more deeply into the areas of advising and investing.” That hints he might become involved with startup companies.
NetApp will not replace Kidd as CTO, and it sounds like the vendor will take a CTO by committee approach. Kidd wrote that the job has likely become too big for one person.
“The role of the CTO in a company the size of NetApp is too broad to be done by a single individual, and a ‘CTO Community’ evolves which includes people both in the CTO office as well as in other parts of the organization,” he wrote. “It includes product architects, technology researchers, technical community leaders, market and industry researchers, technical spokespeople and a host of other disciplines.”
He added that he could not “imagine a better place to work” than NetApp and will always be an advocate for the company. Before joining NetApp in 2005 as SVP of emerging products, Kidd spent six years at Fibre Channel switch vendor Brocade as VP of product management and CTO.
Kidd’s departure comes with NetApp in a long sales slump, marked by a string of year-to-year revenue declines. The vendor has laid off more than 1,000 workers since 2013. Financial analyst firm Piper downgraded NetApp’s stock last month, claiming flash storage and cloud storage providers are cutting into its sales.
LAS VEGAS — Hitachi Data Systems opened its HDS Connect conference today with smaller versions of its flagship Virtual Storage Platform (VSP) storage array and two hyper-converged systems. The hyper-converged systems include one based on HDS technology and the other on VMware’s EVO:RAIL.
HDS is extending the VSP line it launched a year ago downward with smaller, dual-controller arrays. The first VSP – the G1000 – can support 16 controllers and scale to 4.5 PB in a 10U rack, continuing HDS’ tradition of high-end enterprise arrays.
The new arrays include the 3U VSP G200 and the 5U VSP G400, G600 and G800 models. They range in capacity from 1 PB on the G200 to 5.7 PB on the G800.
Like the G1000, the new systems can use Hitachi’s proprietary flash module drives (FMDs), which come in 1.6 TB and 3.2 TB capacities. The G200 supports 264 FMDs, the G400 480, the G600 720 and the G800 1,440. The G1000 holds 578 FMDs.
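Using only the figures above (the maximum FMD counts and the larger 3.2 TB module), a quick back-of-the-envelope calculation gives each model’s implied maximum raw flash capacity:

```python
# Implied maximum raw flash capacity per model, assuming every slot
# is populated with the larger 3.2 TB flash module drive (FMD).
fmd_tb = 3.2
max_fmds = {"G200": 264, "G400": 480, "G600": 720, "G800": 1440, "G1000": 578}

for model, count in max_fmds.items():
    # decimal TB -> PB
    print(f"{model}: {count * fmd_tb / 1000:.2f} PB raw flash")
```

These are raw, all-flash maximums before RAID overhead; the quoted 5.7 PB figure for the G800 presumably reflects other drive configurations.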
The new VSP arrays use the same Hitachi Storage Virtualization Operating System (SVOS) as the G1000. SVOS allows other vendors’ storage to be virtualized behind the VSP arrays, handles management features such as non-disruptive data migration and includes a native global active device feature that provides active-active clusters across data centers without requiring a separate storage appliance.
The VSP arrays support Fibre Channel and iSCSI storage natively, and connect to Hitachi NAS file arrays for unified storage.
While the G1000 is aimed at large enterprises, HDS will market the smaller systems as ways for midmarket companies to consolidate virtual server block and file workloads.
“The VSP high end was always unreachable for some customers,” said Bob Madaio, HDS senior director of product marketing. “You have to understand the needs of the application and not assume it’s one-size-fits-all.”
Madaio said while the software is the same across all VSP arrays, the new systems use Intel-based controllers instead of an HDS proprietary architecture.
The G200, G400 and G600 arrays are available today with the G800 expected later this year.
The hyper-converged systems are part of Hitachi’s UCP convergence family. Madaio described the new Hitachi Scale-Out Platform (HSP) as “hyper-convergence for the analytics world.” HDS positions HSP as a scale-out platform for Hadoop environments, allowing customers to analyze data in place so they don’t have to move large data sets to perform analytics functions. It uses Hitachi servers along with KVM hypervisors and HDS storage management software.
HDS also launched the Hitachi UCP 1000 for VMware EVO:RAIL that uses VMware’s Virtual SAN (VSAN) hyper-converged software. HDS said it would be an EVO:RAIL partner last year, but the UCP 1000 is its first product based on the VMware partnership. HDS sees the UCP 1000 and the new UCP 2000 – which combines Hitachi servers with the VSP G200 – as SMB or remote office storage.
We’ll have more on this and other news from HDS Connect over the next several days on SearchVirtualStorage.com.
EMC came up $75 million short of its projected storage revenue last quarter, according to its Wednesday earnings report.
No big surprise there. EMC and other large storage vendors have struggled to increase revenue and hit their forecasts over the past year or so. And, as is always the case when that happens, EMC executives gave reasons for the failure. They pointed to customers waiting for the new Data Domain backup system, internal problems caused by recent layoffs, and geopolitics in Russia and China that dampened sales.
There is another reason, though, that has nothing to do with EMC’s products or sales force, or geopolitics. There is a fundamental shift in the type of storage people are buying that has a negative effect on legacy storage vendors. So while EMC, NetApp, IBM, Hewlett-Packard and Hitachi Data Systems have struggled to grow revenues, smaller vendors such as Nutanix, Nimble Storage, Pure Storage and Tintri have picked up the slack with the types of storage systems that didn’t exist five years ago.
This trend is not lost on EMC, which has moved faster than other large vendors to adjust to the changes. It has acquired or developed all-flash, hyper-converged, cloud, and big data systems that are replacing its legacy arrays in deals. A drill-down on its sales last quarter shows how that transformation is playing out.
EMC’s high-end storage – its flagship VMAX platform – fell seven percent year-over-year to $871 million. That follows a 13 percent year-over-year drop in the previous quarter.
Backup and recovery products – a large growth area for years after EMC’s 2009 acquisition of Data Domain – dropped 11 percent to $1.3 billion last quarter. But other storage platforms — including XtremIO, Isilon scale-out NAS, ScaleIO converged infrastructure, ViPR software-defined storage and Elastic Cloud Storage — combined to grow more than 15 percent to $1.5 billion to help pick up the slack. Overall, EMC’s storage revenue of $3.7 billion for the quarter was down less than 1 percent from last year.
“We were disappointed that we fell a bit short of our revenue plan,” EMC federation CEO Joe Tucci said of the quarter.
We’re seeing the newer technologies replacing the old in many cases. David Goulden, CEO of EMC’s Information Infrastructure (storage) group, said revenue from XtremIO all-flash storage more than doubled from a year ago. Goulden said XtremIO is on track for more than $1 billion in bookings in 2015. About one-third of its revenue comes from replacing workloads that customers previously ran on other EMC storage, with another third from previous EMC customers running new workloads and the other third from customers new to EMC.
Goulden blamed EMC’s drastic backup revenue drop on customers waiting for the new high-end Data Domain disk backup system, which will launch during EMC World next month. That should help but there is a good chance EMC’s heady backup growth is over. EMC already offers customers a way to back up directly to arrays with its ProtectPoint software, and it may bring out a virtual appliance version of Data Domain that will sell for less money than the large disk appliances. These are more examples of how new technologies are changing the way people buy storage.
“There is a huge, inextricably linked business and IT transformation taking place right now,” Tucci said.
The challenge for EMC and other large storage vendors is to keep up with that change. Tucci and Goulden pointed to other new product launches coming at EMC World, including XtremIO enhancements, software-defined storage, converged infrastructure, and data lake solutions involving EMC storage and its Pivotal cloud and analytics division. The vendor will also fill in some of the blanks around its DSSD in-memory flash storage, expected to ship in late 2015. Unlike the Data Domain backup target, none of these are traditional storage products.
Las Vegas – The storage presence at the National Association of Broadcasters (NAB) show last week continued to grow due to the increasing demand to store files in the media and entertainment market. The increase in camera resolution to 4K and beyond expands the size of captured video data. The trend to retain that data on disk rather than on reels of video tape makes the digital inventory even greater.
To meet this demand, storage vendors have created specialized offerings to optimize use in media and entertainment. These are on display at the NAB show, and the Evaluator Group met with many of them there. Here is a summary of those meetings:
Crossroads
The Crossroads StrongBox gateway is integrated by storage vendor partners into their products for moving data to tape using the Amazon S3 protocol. The system can also replicate data to another StrongBox. Additional capabilities include performance tuning to decide what data needs to remain on primary storage.
DataDirect Networks (DDN)
DDN sells high performance storage for post-production and large-capacity, scaling object storage for retention of video data. At NAB, DDN demonstrated its new MediaScaler system with front-end connectivity that includes Fibre Channel and InfiniBand in addition to file and object (S3) connections over Ethernet. The MediaScaler, which can handle multiple data streams, was developed by DDN in collaboration with Red Bull Media. DDN also sells Web Object Scaler (WOS), an object storage system with support for files. WOS is used as a repository for the M&E market.
Dot Hill
Dot Hill has video editing and processing system partners that integrate the vendor’s high performance block storage systems in their offerings. At the NAB conference, Dot Hill announced a new distributor agreement with Quantum, which was already a Dot Hill OEM partner.
EMC
The EMC Isilon system has been used effectively as a large, scalable NAS system in the M&E market. As a market leader in storage, EMC continues to invest in Isilon R&D and has delivered new features and capabilities. EMC will add capabilities for Isilon at EMC World next month, including features that enhance M&E usage.
HGST
HGST offers the Active Archive System as a packaged solution with HGST 8TB helium-filled disks and the Amplidata object software HGST acquired recently. HGST will continue to support previous OEMs of Amplidata – including Quantum — and will deliver new packaged solutions using HGST storage devices. The current Active Archive System has 4.7PB in a single rack and can scale in multi-rack configurations as an object storage system with an optional file gateway device.
Hitachi Data Systems (HDS)
HDS has become a major player in the media and entertainment space with HNAS and Hitachi Content Platform (HCP). HNAS has been used for post-production and transcoding and HCP as the media content repository. In many cases with large media companies, HNAS and HCP are used in concert for a complete system for editing, transcoding, and broadcast.
Imation
Imation focuses on safeguarding data throughout the lifecycle with its Secure Data Movement Architecture. Elements in the architecture include the Assureon archiving system and IronKey encryption lock and key. Additions include administration security and a dual-authentication option. Given the notable security breaches that have leaked videos in the M&E space, Imation has a focused solution to handle secure data movement.
LTO Consortium
The LTO Consortium, which promotes the standard LTO tape format (drives and media), uses NAB to showcase the LTO roadmap and use cases. This year, the discussion was around the upcoming release of LTO-7, scheduled for later this year. The specifications include 6TB per cartridge and 300MB per second bandwidth.
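To put those two figures together, a quick calculation (using the quoted native numbers, and ignoring compression and drive ramp-up) shows how long filling one LTO-7 cartridge at full speed would take:

```python
# Time to fill one LTO-7 cartridge at the quoted native transfer rate.
cartridge_tb = 6          # native capacity per cartridge, TB
throughput_mb_s = 300     # native bandwidth, MB/s

seconds = cartridge_tb * 1_000_000 / throughput_mb_s  # 6,000,000 MB / 300 MB/s
hours = seconds / 3600
print(f"{seconds:,.0f} s = about {hours:.1f} hours per cartridge")
```

Roughly five and a half hours per cartridge at sustained native speed, which is why large archives stream to many drives in parallel.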
NetApp
NetApp’s StorageGrid WebScale has long had a presence in healthcare as a large-capacity object storage system. Now NetApp is also targeting the system for M&E with its file interfaces. Part of that strategy is a partnership with Spectra Logic that involves using Spectra’s Black Pearl archiving appliance as a target for data from a StorageGrid WebScale storage system.
Object Matrix
Object Matrix, based in Cardiff, Wales, has an object storage system that includes the vendor’s software running on commodity hardware. The system’s built-in search and metadata capabilities integrate into existing workflows. Tape can serve as a deep archive with Object Matrix. Notably, Object Matrix works with Avid asset management software for broadcasters.
Promise
Promise’s media and entertainment products include removable media interconnect devices, Fibre Channel to Thunderbolt linkage devices, and enterprise storage systems to store file, object, and block (iSCSI) data. The VSky enterprise systems can scale out and up to large capacity for the M&E market as well as video surveillance. A gateway NAS system is also available. The Vess R2600 Pro system is focused on post-production editing.
Quantum
Quantum has been aggressively building out its StorNext file system, which has been successful in the M&E space. It recently added Q-Cloud Archive, which allows StorNext to serve as the front end for storing information on a public cloud. A primary value for Quantum is that StorNext gives customers a single interface for storage. Quantum this month released the Artico NAS system that acts as a local NAS device and a gateway to cloud storage for large capacity.
Qumulo
Qumulo offers software providing a scale-out file system (Qumulo Scalable File System) that can handle the large number of files required in many market segments. The vendor demonstrated storage of 10 billion files with a minimal hardware configuration at its booth. As with most vendors that offer their software as “software-defined storage,” Qumulo sells its solution with hardware. The value message for Qumulo is storing information about the stored data as part of the system for more effective management and access.
Signiant
Signiant provides software for high performance file movement over long distances. The software can be licensed to run on premises or used as software as a service. The M&E market traditionally moves large amounts of video data for editing and other operations, making it a good opportunity for Signiant.
Spectra Logic
The media and entertainment market historically has used digital magnetic tape for interchange and in large libraries as media repositories. Spectra Logic sells tape libraries in various sizes across the M&E market, but used this year’s NAB conference to introduce a new feature for the Black Pearl gateway device. The Black Pearl system allows data to be written to it using the Amazon S3 protocol and then stored on tape in a library. This year Spectra added a prototype offering of the Black Pearl as an S3 target for NetApp’s StorageGrid WebScale.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
SanDisk became the PCIe flash market leader with its 2014 acquisition of server-side flash pioneer Fusion-io. Less than a year later, there is a lot less to that market and that is having a negative impact on SanDisk’s bottom line.
SanDisk’s $1.32 billion in revenue last quarter was below its initial forecast of $1.4 billion to $1.45 billion, and down 12 percent from last year and 23 percent from the previous quarter. It cut its full-year guidance for 2015 to a range of $5.4 billion to $5.7 billion from previous guidance of $6.5 billion to $6.8 billion. The new annual forecast would bring SanDisk revenue below its $6.6 billion from 2014. To compensate for the loss of revenue, the flash vendor announced it will reduce its workforce by around five percent.
Two factors hit SanDisk hard last quarter. It sold far fewer SAS solid-state drives (SSDs) than expected, and larger customers started moving off PCIe cards onto 2 TB SATA SSDs. SanDisk CEO Sanjay Mehrotra blamed the SAS product problems on “execution” issues largely related to product qualifications. He attributed the PCIe problem to companies finding SATA drives good enough at a lower price. SanDisk hopes to change that with a new PCIe platform coming this month.
“Our results as well as 2015 revenue estimates for our Fusion-io PCIe solutions are significantly below our original plan,” Mehrotra said on the vendor’s earnings call Wednesday. “We are seeing a substantial portion of the PCIe [market] moving to lower cost solutions using enterprise SATA SSDs.”
“We expect to fully remain the market share leader [in PCIe], and we plan to address that in the 2016 timeframe with solutions using our captive memory as well as when we introduce NVMe solutions,” he said.
In the meantime, there will be pain and cuts at the company.
There’s a lot of data out there, and there’s going to be even more in the future. Businesses need to figure out better ways to use and deliver it, because the always-connected “information generation” of digital customers will increasingly expect to have it at their fingertips.
That’s the upshot of a report, entitled “The Information Generation: Transforming the Future, Today,” that EMC released today. The storage giant tapped the Institute for the Future to create a global study to identify and forecast the top “business imperatives” for the next decade. The Institute consulted more than 40 “influential global decision-makers and experts” across multiple industries and passed the baton to Vanson Bourne to survey 3,600 business leaders from 18 countries.
Half or more of the survey’s 3,600 “director and C-Suite” respondents said their customers want access to services faster than ever, 24/7 access and connectivity, and access on an increasing number of multi-channel platforms. Vanson Bourne didn’t survey any of their customers to see what they expect.
The six “attributes for success” the report identified – and only 22% to 31% of the business leaders said they’re addressing “extremely well” – are:
1) Predictively spot new opportunities in markets
2) Demonstrate transparency and trust
3) Innovate in agile ways
4) Deliver unique and personalized experiences
5) Operate in real time
6) Pursue continuous learning
While the report’s findings may not be so revelatory, they will undoubtedly set the stage for the presentations and announcements at EMC World during the first week of May in Las Vegas.
“We’re big into information. So, the more information that’s out there, the more information that needs to be stored and managed and analyzed,” said Jeremy Burton, president of products and marketing at EMC, when asked how the report’s findings tie into the company’s grand plan. “But I think over time, the unit cost of managing information today is several orders of magnitude less than it was 10 years ago. And in the future, it will be several orders of magnitude less again.”
Burton said the architecture of next-generation distributed applications will differ from the traditional architecture of applications such as SAP and Oracle, and the new apps will “much more happily run on commodity infrastructure.” He said the value that EMC can bring increasingly will extend into areas such as building applications and analyzing data, through the company’s Pivotal startup (in which GE is an investor), and securing information, through the company’s RSA acquisition.
EMC’s product direction also includes the upcoming server-based DSSD flash to handle the ultra-fast processing required by the new wave of in-memory databases and analytics applications. Burton said, after processing, the data will dump off to software-defined object stores that can run on cheap commodity hardware.
The company knows it has to compete on price with public cloud behemoths such as Amazon, Google and Microsoft and offer a compelling alternative for the do-it-yourselfers testing the open source waters. Burton claimed EMC has one exabyte-scale customer that paid about 30% less than it would have spent with Amazon Web Services, and he predicted many more exabyte-scale customers in the future.
“You’re going to see a lot of interesting things this year that EMC five years ago never would have done,” said Burton.
In a sign that more companies are moving important applications to the cloud, SIOS Technology Corp. achieved Microsoft Azure certification to provide customers with high-availability clusters in Azure using Microsoft Windows Server Failover Clustering software.
SIOS targets companies that need to protect business-critical Windows environments from data loss in the cloud. It sells clustering software for both SAN and SANless configurations, providing high availability for Windows applications using Microsoft Windows Server Failover Clustering. SIOS DataKeeper Cluster Edition software does real-time, block-level data replication of Windows server environments without the need for additional hardware accelerators or compression devices.
Jerry Melnick, SIOS’ chief operating officer, said the vendor is seeing more customers deploying mission-critical applications in the cloud. These applications include SQL, SAP, and Oracle.
“A year ago, I would have said it is a trickle but that is changing quickly,” Melnick said. “Customers are quickly getting serious in moving these applications in the cloud. What makes us different is the completeness and flexibility of our solution. It provides protection across fault domains transparently. There is no need for changes to your applications or operation process.”
Mission-critical applications that depend on SAN-based Windows server failover clusters for protection can be moved to Azure and achieve the more comprehensive high-availability protection they need. Typically, if customers want clustering, the configuration needs one instance of the application in the cloud to guarantee that data is stored across multiple fault domains.
Melnick said cloud providers offer high availability but not automated failover.
IBM is working on putting 220 terabytes of storage in a palm-sized tape cartridge. The company has developed breakthrough technology that shrinks the size of bits on magnetic tape and offers the ability to store 123 billion bits of uncompressed data on a square inch of magnetic tape.
“Many people are surprised that there is still a vibrant tape industry,” said Mark Lantz, manager for exploratory tape at IBM Research. “This new technology will enable future products. The areal density in disks is slowing down and tape is becoming more attractive.”
Lantz said IBM has developed key technologies that will be used in future tape products. IBM spent about 13 years in collaboration with Fujifilm to develop an enhanced write field head technology that uses finer barium ferrite particles for higher density. The new particulate Nanocubic BaFe magnetic tape decreases the particle volume that is necessary for high-density recording.
Other innovations that will be used in new tape products include signal-processing algorithms for the data channel based on noise-predictive detection principles, a high-bandwidth head actuator, and advanced servo control that allows head positioning to less than 5.9 nanometers. This allows a track density of 181,300 tracks per inch. That is more than a 40 percent increase over an industry standard LTO-6 tape cartridge, which can hold 2.5 TB of uncompressed data on a four-by-four inch cartridge.
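Those two figures fit together: dividing the demonstrated areal density by the track density gives the linear density implied along each track, a simple cross-check on the quoted numbers:

```python
# Linear bit density implied by the demo's areal and track densities.
areal_bits_per_sq_in = 123e9   # 123 billion bits per square inch
tracks_per_inch = 181_300      # demonstrated track density

linear_bits_per_inch = areal_bits_per_sq_in / tracks_per_inch
print(f"about {linear_bits_per_inch:,.0f} bits per inch along each track")
```

That works out to roughly 680,000 bits per linear inch of track, consistent with the overall 123 Gbit/in² figure.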
“The new write head technology produces stronger magnetic fields and allows us to take advantage of the new barium ferrite,” Lantz said. “The new signal-processing algorithms mean we can work with much noisier signals and still achieve high reliability in tape systems.”
The 220 TB of data can store roughly 1.37 trillion mobile text messages. IBM demonstrated a Fujifilm prototype at the National Association of Broadcasters show in Las Vegas this week.
“People who say tape is dying are typically people who do not have tape in their storage portfolio,” Lantz said. “The technology is not dying. In fact, use cases are growing, particularly in low-cost archiving storage tiers.”
Quantum made great progress last quarter in its quest to become not-just-a-tape-company.
Quantum said Thursday night that it exceeded its forecast for revenue last quarter, with the upside coming from its disk products. It also turned a $12 million profit, but that was due to its investment in object storage vendor Amplidata.
Quantum said its revenue for the quarter exceeded $145 million, well above its guidance range of $130 million to $135 million and up from $128 million last year. Scale-out storage – mainly StorNext – more than doubled to over $30 million and DXi disk backup revenue increased nearly 30 percent to approximately $25 million.
Although the vendor still derives most of its revenue from tape, no tape products were mentioned in the release. Quantum’s tape sales, like those of most tape vendors, have been on a steady decline.
Quantum’s net income of $12 million included a $13 million payout from Western Digital’s acquisition of Amplidata. Quantum helped fund Amplidata, which is also a strategic partner. Quantum OEMs Amplidata’s Himalaya software in its Lattus appliances.
Quantum’s overall revenue as well as its StorNext and DXi numbers all increased in the first calendar quarter from the fourth quarter last year. That’s unusual in the storage business because the fourth quarter is usually the best quarter for sales.
Scality this week made version 5 of its Ring object storage software generally available, with support for the Server Message Block (SMB) protocol for access from Microsoft Windows clients and servers, a new user interface and a simpler installation.
Scality originally previewed Ring 5.0 last September, emphasizing extra VMware integration through vStorage API for Array Integration (VAAI) and VMware Storage API for Storage Awareness (VASA) support.
Leo Leung, Scality’s vice president of corporate marketing, said the vendor added SMB support because many of its enterprise customers have large Microsoft deployments.
Part of the reason for the easier installation is to allow partners to set up a system with Ring in less than 15 minutes, Leung said. Scality signed a reseller deal with HP last October. The Scality object storage software runs on HP ProLiant servers.
“We have simplified the installation so you don’t have to touch every server or node,” Leung said. “Before you had to configure and install for each node, doing it with Linux.”
Scality uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. Ring’s core features include replication and erasure coding for data protection, auto-tiering and geographic redundancies inside a cluster.
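As a toy illustration of the erasure-coding idea mentioned above (not Scality’s actual code; Ring uses stronger codes spread across many nodes), a single XOR parity fragment is enough to show how one lost fragment can be rebuilt from the survivors:

```python
# Minimal erasure-coding sketch: split data into k fragments plus one
# XOR parity fragment, so any single lost fragment can be reconstructed.
from functools import reduce

def encode(data: bytes, k: int = 4) -> list[bytes]:
    """Split data into k equal fragments plus one XOR parity fragment."""
    data = data.ljust(-(-len(data) // k) * k, b"\0")  # pad to a multiple of k
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frags)
    return frags + [parity]

def rebuild(frags: list[bytes], lost: int) -> bytes:
    """Reconstruct the fragment at index `lost` by XOR-ing the survivors."""
    survivors = [f for i, f in enumerate(frags) if i != lost]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

frags = encode(b"media object payload")
assert rebuild(frags, 2) == frags[2]  # any one lost fragment is recoverable
```

Production systems trade parity count against capacity overhead (e.g. surviving several simultaneous node failures), which is why erasure coding is attractive for large object stores compared with full replication.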
The new user interface is designed for “point-and-click” provisioning of resources, while the Central Installation Platform included with Ring 5.0 uses the Salt open source configuration management framework to initiate commands, even for cloud deployments.
Scott Sinclair, an analyst with Enterprise Strategy Group, said the content delivery space is an area where Scality has a good fit.
“Scality is very good at creating a large massive repository to hold a lot of digital assets,” Sinclair said.