Storage Soup

October 25, 2012  12:34 PM

Quantum spins up disk sales as tape withers

Dave Raffo

Quantum might have turned the corner with its disk backup and storage software products last quarter, just as its tape sales took a big dip.

Quantum reported $42.4 million in revenue from disk and software last quarter, topping the $40 million it needs to break even on those products for the first time. Disk and software revenue grew 18% year-over-year, and CEO Jon Gacek said it could hit $50 million this quarter.

However, a steep decline in tape sales caused Quantum to lose $4.9 million on its $147.3 million in overall revenue. Gacek blamed the tape sales drop on customers waiting for the transition from the LTO-5 to LTO-6 format. Quantum’s overall revenue fell 3% from last year, mainly because of a $13.6 million drop in OEM tape automation sales.

Quantum’s disk and software category consists of its DXi disk deduplication target appliances, vmPro virtual machine backup and StorNext archiving for large files. Revenue from those products increased 18% year-over-year and 38% from the previous quarter. Gacek said revenue from the DXi8500 enterprise platform increased 30% year-over-year and 129% sequentially, the midrange DXi6700 slipped 6% and the entry-level DXi4000 was up slightly.

Gacek also said the DXi win rate was 55% against the competition, which in almost every case is EMC Data Domain. He said the win rate was even higher for the DXi8500 despite EMC’s attempts to throw its weight around.

“EMC is not trying to compete based on products,” Gacek said. “They’re trying to play the big-company game of saying ‘We’re the market share leader, we’re so much bigger than [Quantum], look at [Quantum’s] market share, they don’t even make money.’ Sometimes that works, but sometimes it backfires with customers looking to make a technology buy.”

Quantum added 120 new DXi customers and 65 StorNext customers in the quarter. It sold the first of what Gacek called a “wide area storage” product combining OEM object-storage technology from Amplidata with StorNext.

“That’s not even generally available yet, but one customer was super excited and took a pre-GA system,” said Gacek, adding the customer was a government agency.

Quantum forecast an uptick to $160 million in revenue this quarter. Gacek said that besides a possible tape rebound, he’s looking for continued increases in disk and software and early sales from Quantum’s fledgling Q-Cloud backup and disaster recovery offerings.

“If we’re going to be a specialist in backup, we have to give the customer something different than the competition,” he said. “EMC doesn’t offer anything like [Q-Cloud], and I don’t think they will. I don’t think the revenue piece is as important as our ability to engage with the customer in a provocative way.”

October 23, 2012  2:30 PM

Seagate updates Savvio, Constellation hard drives

Sonia Lelii

Seagate Technology has refreshed three of its enterprise hard disk drives, the Savvio 10K.6 for enterprise-level performance, the Constellation ES.3 for bulk data storage and Constellation CS for replicated bulk storage in the cloud.

The company has split its Constellation 3.5-inch family of hard drives into capacity-optimized devices and cost-optimized hard disk drives. Seagate’s Constellation ES.3 drives, also called the Seagate Enterprise Capacity 3.5 HDD, are high-capacity drives for bulk data center applications. The ES.3 enterprise drives have an increased capacity of 4 TB in a 3.5-inch form factor for tier two storage.

The ES.3 HDDs run at 7,200 rpm and are optimized for replicated storage in cloud systems, cloud storage servers, cloud storage arrays and cloud backup. They are available in 500 GB, 1 TB, 2 TB, 3 TB and 4 TB capacities and are targeted at high-workload, multi-drive data centers with SAN, NAS and direct-attached storage arrays. The devices come with 64 MB or 128 MB of cache and feature 6 Gbps SAS or SATA interfaces, while sustaining an MTBF rating of 1.4 million hours compared to the previous 1.2 million.
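For readers weighing the MTBF claims, a drive’s MTBF translates to an approximate annualized failure rate (AFR). This sketch uses the common hours-per-year approximation for continuously powered drives, which may differ from Seagate’s own rating methodology:

```python
# Rough conversion of drive MTBF to annualized failure rate (AFR).
# Assumes AFR ~ hours_per_year / MTBF, a standard approximation for
# drives powered on around the clock; vendor methodology may differ.

HOURS_PER_YEAR = 8760

def afr_percent(mtbf_hours):
    """Approximate annualized failure rate, in percent."""
    return 100 * HOURS_PER_YEAR / mtbf_hours

print(f"1.2M-hour MTBF -> AFR ~{afr_percent(1_200_000):.2f}%")  # ~0.73%
print(f"1.4M-hour MTBF -> AFR ~{afr_percent(1_400_000):.2f}%")  # ~0.63%
```

In other words, the jump from 1.2 million to 1.4 million hours shaves roughly a tenth of a percentage point off the expected annual failure rate.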

“The 7200 drives store lots of data that is not immediately available. It’s more of a workhorse of the storage system,” said Barbara Craig, Seagate’s senior product marketing manager.

Seagate’s low-power, entry-level Constellation CS drives, also called the Seagate Enterprise Value HDD, are designed for high-capacity, bulk storage needs, specifically for cloud service providers that build replicated environments spanning cloud storage servers, cloud storage arrays and cloud backup in DAS and NAS systems. The devices, which have an instant secure erase option, come in 1 TB, 2 TB and 3 TB capacities with a 6 Gbps SAS interface. The 7,200 rpm drives are rated at 0.8 million hours MTBF.

Seagate’s third new drive is the Savvio 10K.6, a 2.5-inch drive also called the Seagate Enterprise Performance 10K hard drive. It comes in a smaller form factor and delivers faster performance than the previous Savvio 10K.5 version. The new 10K.6 drives are available in 300 GB, 450 GB, 600 GB and 900 GB capacities. The drives are designed with 6 Gbps SAS or 4 Gbps Fibre Channel interfaces, and the 900 GB drive has a self-encrypting drive (SED) option. The Savvio 10K.6 also has a sustained data rate of 204 MB per second.

“It has up to 50 percent more capacity and it’s in a smaller form factor,” said Craig. “It is 21 percent faster than the prior generation and it is equal to a 3.5-inch, 15K-rpm drive in sequential performance. We also added a RAID rebuild feature. We do more of a copy function. The good data is copied to reduce the rebuild time by 80 percent.”

October 23, 2012  7:16 AM

HP no longer plays duet with Violin

Dave Raffo

As it prepares to go public, solid-state array vendor Violin Memory is finding its relationship with Hewlett-Packard (HP) cooling.

Violin was the subject of two Bloomberg stories last week. Last Wednesday, Bloomberg reported that Violin had quietly filed its initial public offering (IPO) to become a public company. No surprise there. Violin is heavily funded with more than $150 million, and CEO Don Basile has talked of going public for months. Bloomberg followed that on Friday by reporting that HP is ending a reseller deal with Violin that has been in place for Violin Memory Arrays (VMAs) since 2010. HP indicated it doesn’t need Violin because it sells all-flash models of its flagship 3PAR storage array.

Losing the HP revenue stream could damage Violin’s IPO plans. Violin has not commented on the IPO filing, but a Violin spokeswoman released a statement about “rumors and speculation floating around” concerning the HP deal.

According to Violin:

“The current HP Violin relationship remains unchanged. The VMA product family (the Violin 3000 and vSHARE software) continue to be available to customers via HP as per the announced relationship. HP engineering continues to certify the VMA with additional servers, operating systems and joint selling and promotions. POC (proof of concepts) are currently active as are additional HP certifications.

“HP has stated 3PAR is the long term strategic direction for their company. Violin offers other products like the Violin 6000 through both our direct sales and our global reseller network as well as other software and system vendors which have been announced over the past 12 months.”

HP’s response was not exactly warm and friendly toward Violin. An HP spokesman answered Violin’s claim by saying “HP 3PAR is our strategic platform for solid-state storage.” That was the same statement that appeared in the Bloomberg story Friday. If HP wanted to backtrack, its response would have been more elaborate.

Another source familiar with HP’s strategy said the original reseller deal is still in place but HP will not extend it. It will, however, honor the deal if customers want to buy a Violin array from HP.

Reading between the lines tells me HP will strongly pitch a 3PAR solid-state array before selling anything from Violin. The reseller deal remains in place, but a reseller deal on paper means nothing if the company that is supposed to do the reselling ignores it.

October 19, 2012  4:00 PM

SNW notebook: Fujitsu, Avere strike up a match

Dave Raffo

SANTA CLARA, Calif. – News and notes from this week’s Fall Storage Networking World (SNW):

Avere and Fujitsu America have forged a “meet in the channel” partnership matching Avere’s NAS acceleration device with Fujitsu storage arrays.

The vendors and their channel partners are bundling a two-node Avere FXT 3100 Edge filer cluster with a Fujitsu core filer built from UDS NAS controllers and an Eternus DX80 S2 Disk Storage System. Avere and Fujitsu call it the “100/100/100” bundle because it provides 100 TB of capacity and 100,000 IOPS for $100,000. Larger bundles are available, up to 2 PB and 2.5 million IOPS.

Avere CEO Ron Bianchini said the idea for the bundles came about because Avere and Fujitsu had common media and entertainment customers using their products. “We’ve been meeting often in customer sites,” he said. “They [Fujitsu] do data management well, and we do off-load well.” …

Former LeftHand Networks CEO Bill Chambers has taken over the CEO role at Starboard Storage. Chambers was LeftHand’s CEO when Hewlett-Packard bought the iSCSI vendor for $360 million in 2008. He joined Starboard as executive chairman shortly before the vendor came out of stealth earlier this year as a re-launched version of Reldata. He replaces Victor Walker as CEO. Walker had been CEO of Reldata since early 2011 and stayed on through the re-launch. Starboard hasn’t announced the CEO change, but Chambers is listed as CEO on the company website. …

Sepaton began shipping its S2100-ES3 virtual tape library (VTL) with Hitachi Data Systems HUS 100 series storage on the back end and its latest software version.

The system can scale to 2 PB and the new software supports DBeXstream technology that speeds deduplication of multistreamed and multiplexed enterprise databases. Sepaton has used HDS storage in its VTLs since 2010, but the HUS platform hit the market in August.

Pricing for the S2100-ES3 Series starts at $335,000, and S2100-ES2 customers can add new HUS 110 storage to their libraries. …

Imation has kept busy this year integrating data security acquisitions into its disk and removable drive storage, establishing its CyberSafe brand of encryption, identity and authentication, and key management capabilities.

Next up is moving the data deduplication it acquired from Nine Technology last December into its backup products. Brian Findlay, executive director of Imation’s storage product management, said the vendor will integrate dedupe into its DataGuard appliances that use hard drive and RDX removable storage. Imation is also working on an integrated storage appliance using Nine backup technology.

The 2013 roadmap also includes a private cloud backup offering that Imation will either host itself or sell as software for service providers to host. Imation now supports public clouds through cloud seeding and replication between sites.

“The cloud is coming,” Findlay said. “SMBs are still comfortable with onsite backup. It’s one thing to get your data up there, but another thing to restore. But you can move a lot of data to the cloud with RDX.”

October 18, 2012  8:09 AM

When organizational issues inhibit IT progress

Randy Kerns

Information technology (IT) must continue to adapt and change as new demands arise and new technology is introduced. The new demands include more capacity for storing information as well as changes in procedures such as security and compliance.

The introduction of new technology presents the opportunity to obtain greater value from IT investments. Deploying server virtualization technology and increasing the number of servers virtualized has brought economic value and IT agility. New technology is a competitive issue, helping businesses handle information more effectively and faster.

Still, many IT operations take longer than they should to introduce and embrace new technology. So what is holding back companies from taking an obvious advantage? Why does it take a major reboot of IT to make changes for some organizations? Looking at many IT operations, there are common reasons that delay seizing the opportunities.

The most common reason is that the organizational structure for IT inhibits transformational changes. The structure creates a natural resistance to change for several reasons:

• There are many people involved in direction setting and approvals. Some may be other business unit owners or related organizations.

• Stakeholders brought in to participate in decision processes need to be informed and educated on technology and requirement changes.

• With more people involved, parochialism can result in new demands that disrupt any efficient process.

To illustrate this problem, I will go through one of many examples that I’ve dealt with recently. In this case, the IT organization had been compartmentalized over time after individuals were promoted and functions separated.

The result was a number of IT directors who had equal authority and covered areas of specialization in IT. Other IT directors were given responsibilities to be the advocates for specific business units, again with equal weight. These directors could negate any change in IT that they did not agree with, and the CIO could not force change without consensus.

This meant that all substantive decisions required the cooperation or endorsement of all internal IT directors and the business units represented by the other IT directors. Education for a technology change required large group meetings, which were hard to schedule because of the parties’ limited availability.

Compounding the problem, various vendors called on the individual IT directors and created internal competition and confusion. That caused delays in needed changes and frustration that it took more work to educate and convince others than to do actual implementation. The IT organization kept falling behind in technology and other advances. It was perceived as having archaic operations. Ultimately, an examination of outsourcing was seen as a means to implement change.

Business structures for IT need to match the requirements and pace of change for IT. They must allow for change as a natural process for competitive improvement. The decision-making process must be effective and timely and not mired in the inclusiveness of every possible person. The structure must include strategic planning as part of an organizational process. The process should include technology evaluation, education, and understanding industry best practices. Without a structure that matches the rate of change required of IT, IT will periodically have to do a major reset.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

October 16, 2012  4:40 PM

Microsoft strengthens cloud play with StorSimple acquisition

Sonia Lelii

Microsoft Corp. today announced it is acquiring StorSimple, a cloud integrated storage (CIS) provider that uses its appliances to consolidate primary storage, archiving, backup and disaster recovery into the cloud. The terms of the deal were not disclosed.

The cloud appliance company has been at the forefront of designing its technology so companies can converge on-premise primary storage, backup and archiving to the cloud. Its appliances provide full primary storage capabilities, with up to 100 TB of on-premise storage capacity for enterprise applications while pushing data into the cloud. StorSimple’s version 2.0 software, which does automatic tiering across solid-state drives (SSDs), SAS and the cloud, has a volume prioritization feature for moving data between local and cloud tiers.

“This tells me Microsoft is serious about getting into primary storage,” said Arun Taneja, founder, president and consulting analyst for the Taneja Group. “They can use StorSimple as an on-ramp to their (Azure) cloud, but they don’t need StorSimple for that. StorSimple goes way beyond an on-ramp. Amazon built their own gateway for their cloud, so Microsoft must have more in mind for StorSimple.”

Mike Schutz, Microsoft’s general manager of the server and tools business division, would not comment on whether the Santa Clara, Calif.-based StorSimple will be folded into Microsoft. He also declined to discuss any other specific plans for its new acquisition.

“We just signed an agreement. The deal is not done (and) we will share more details after we close,” he said. “(But) StorSimple’s solution and technology is tightly aligned with our strategy of what we call Cloud OS. It’s a hybrid cloud focus. This is a perfect match for our cloud strategy.”

StorSimple’s systems are optimized for Microsoft applications such as Exchange and SharePoint, user files and virtual appliances. It uses Microsoft Volume Shadow Copy Service (VSS) to take snapshots of Microsoft applications and the Windows file system for backups. It also is certified with VMware.

“StorSimple started from the ground up doing Microsoft applications,” said Steve Duplessie, founder and senior analyst of Enterprise Strategy Group (ESG). “It was really specific around Microsoft, Microsoft, Microsoft for applications. This is not about Microsoft trying to be a storage company. It’s trying to be a cloud-enabled company.”

StorSimple also has a number of cloud provider partnerships, including Microsoft Azure, Amazon Web Services, Rackspace, EMC Atmos and Nirvanix. But Microsoft’s Schutz said there are “no plans to change the current partners StorSimple has today.”

October 16, 2012  7:02 AM

Gridstore adds $12.5M to funding grid

Dave Raffo

Startup Gridstore today closed a $12.5 million funding round to build out its sales channel and accelerate development of its scale-out NAS system.

Gridstore uses virtual controllers that install on client devices and spreads capacity among 1 TB or 2 TB nodes. Customers scale by adding virtual controllers and nodes to the grid. Gridstore stripes data across the nodes for fault tolerance, so customers can replace failed nodes by attaching new nodes, and the storage pool can survive the loss of multiple nodes.
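Gridstore’s exact coding scheme isn’t public. As a minimal sketch of how striping with parity lets a pool outlive a dead node, here is the simplest possible case: one XOR parity chunk, tolerating a single failure (Gridstore claims tolerance for multiple failures, which requires a more elaborate code):

```python
# Minimal illustration of parity striping across storage nodes.
# A single XOR parity chunk tolerates one lost node; schemes like
# Gridstore's that survive multiple failures use stronger codes.

def stripe_with_parity(chunks):
    """Return the data chunks plus one XOR parity chunk."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]

def rebuild(stripe, lost_index):
    """Recover the chunk at lost_index by XOR-ing the survivors."""
    survivors = [c for i, c in enumerate(stripe) if i != lost_index]
    rebuilt = bytes(len(survivors[0]))
    for c in survivors:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
    return rebuilt

data = [b"node", b"fail", b"safe"]    # chunks held on three nodes
stripe = stripe_with_parity(data)     # a fourth node holds parity
assert rebuild(stripe, 1) == b"fail"  # "lost" node reconstructed
```

The key property is that any one chunk, data or parity, can be recomputed from the rest, so replacing a failed node is a rebuild rather than a restore.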

Gridstore CEO Kelly Murphy said the vendor’s goal is to “turn storage into a simple set of building blocks that you can add on to, and pay as you go.”

Murphy said Gridstore has about 40 customers, about half of those in education and another quarter of them service providers. He said the startup is ready to build out its channel and improve its visibility. Geoff Barrall, who founded high-end NAS vendor BlueArc and consumer/SMB file storage startup Drobo, joined Gridstore as chairman earlier this year.

Murphy said the funding will also be used to drive further product development, with the addition of solid-state drives (SSDs) among the roadmap items. “That will be an excellent fit in time,” he said. “You can look for some things early next year.”

Gridstore originally started in the SMB market, and has also moved up to small enterprises. Its main competitors are lower-end NAS systems from EMC and NetApp, although Murphy said his company rarely competes with EMC’s Isilon enterprise clustered NAS.

GGV Capital led the Series A funding round with Onset Ventures participating.

October 15, 2012  7:41 AM

Amplidata adds denser, faster object storage nodes

Dave Raffo

Fresh off of a CEO change and funding round, object storage vendor Amplidata today added a larger capacity storage node and an operating system upgrade that supports 16 TB object sizes.

The AmpliStor AS36 is Amplidata’s densest, highest-capacity node. It holds a dozen 3 TB drives – up from 10 on the AS30 – for 36 TB per node and can scale to 1.4 PB in a rack. Amplidata also gave the AS36 a performance boost over its predecessors through the addition of the Intel E3 processor and the option to add a 240 GB multi-level cell (MLC) Intel SSD to the storage node. Amplidata previously used SSDs in its controllers but not in the storage nodes.

Paul Speciale, Amplidata’s VP of products, said the SSDs are included for routing small files. He said the Sandy Bridge CPUs result in a 40% speed increase over the AS30 because they can sustain full line-rate performance to each node.

The biggest improvement in AmpliStor 3.0 software is the ability to support larger files. The previous version supported 500 GB files, but 3.0 is enhanced for customers with big files. Future versions will likely support objects even larger than 16 TB, but Amplidata has to make sure the larger files work with its erasure coding.

“We think our architecture can go higher as far as object sizes, but we have to put it into the test cycle,” Speciale said. “We also have to be able to repair these drives in a reasonable amount of time.”
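The repair-time concern Speciale raises is easy to see with rough arithmetic. The rebuild throughput below is an illustrative assumption for the sake of the estimate, not an Amplidata figure:

```python
# Back-of-the-envelope repair-time estimate for a failed drive.
# The 100 MB/s rebuild rate is an assumed figure for illustration;
# real systems repair in parallel across many nodes.

def repair_hours(drive_tb, rebuild_mb_per_s):
    """Hours to re-create one drive's worth of data at a given rate."""
    drive_mb = drive_tb * 1_000_000  # decimal TB, as drives are rated
    return drive_mb / rebuild_mb_per_s / 3600

# A 3 TB drive repaired at an assumed 100 MB/s takes over 8 hours:
print(f"{repair_hours(3, 100):.1f} hours")
```

As capacities grow toward the larger drives and 16 TB objects discussed above, serial rebuilds become impractical, which is why erasure-coded systems spread repair work across the cluster.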

AmpliStor 3.0 also can rebalance storage on nodes automatically after adding capacity. Previous versions allowed customers to add storage on the fly, but did not automatically rebalance.

Last month Amplidata named former Intel executive and Atempo CEO Mike Wall as its new chief, replacing founder Wim De Wispelaere. De Wispelaere remains with the company as chief technology officer.

Amplidata also received $6 million in funding from backup and archiving vendor Quantum at the time, bringing its total funding to $20 million. Quantum has an OEM deal with Amplidata to sell AmpliStor technology under the Quantum StorNext archiving brand.

AmpliStor products are used in cloud storage as well as for archiving. Speciale said he expects the Quantum deal to drive AmpliStor more into media/entertainment, genomics and government markets where StorNext has most traction.

October 12, 2012  10:29 AM

Astute takes early lead in VDI benchmark scores

Dave Raffo

Astute Networks this month became the second vendor to publish VDI-IOmark benchmark numbers, and the vendor promptly proclaimed itself the industry’s lowest-cost storage option per virtual desktop.

VDI-IOmark was developed by the Evaluator Group analyst firm to test storage system performance running virtual desktop infrastructure (VDI) workloads. The benchmark replicates a storage workload running multiple VMware View VDI instances, and measures the number of VDI users the system supports.

Astute’s ViSX VM storage appliance supported 400 standard users with a configuration priced at $30,600, which comes to $76.50 per user. The benchmark results showed that a 2U ViSX can provision 140,000 sustained random IOPS that can be shared by all VMs on all hosts over an Ethernet network. Astute ran the benchmark on a ViSX appliance with 2.1 TB of usable capacity, all solid-state drives (SSDs).

Evaluator Group senior partner Russ Fellows said other vendors – fewer than 10 – have run the benchmark but have not made their results public. Hitachi Data Systems published the first set of numbers in January for its BlueArc Mercury 110 NAS array, which came out to $146.19 per user (1,536 users for a $224,546 system). None of the unpublished benchmark results matched Astute’s price per desktop, which is the main reason they remain unpublished.
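The per-desktop figures quoted in these results follow directly from dividing the benchmarked system price by the number of supported users:

```python
# Reproducing the price-per-desktop figures from the VDI-IOmark results.

def price_per_user(system_price, users):
    """Cost per virtual desktop for a benchmarked configuration."""
    return system_price / users

astute = price_per_user(30_600, 400)    # Astute ViSX configuration
hds = price_per_user(224_546, 1_536)    # HDS BlueArc Mercury 110

print(f"Astute: ${astute:.2f}/user")    # $76.50
print(f"HDS:    ${hds:.2f}/user")       # $146.19
```

The metric rewards inexpensive configurations as much as fast ones, which is why Astute stresses price alongside its IOPS numbers.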

“Some of the big vendors are afraid,” Fellows said. “They don’t want to publish unless they’re the best.”

Fellows said some benchmarked systems are all flash, while others are hybrids using SSDs and hard drives. He said Astute balanced the performance of flash with a lower price than many other all-flash systems.

“Being all-flash helps a lot,” he said. “Some flash in high-end systems is incredibly pricey. Not only does Astute have flash, but it has a competitive price.”

Len Rosenthal, Astute’s senior VP of marketing, said the TCP protocol accelerator chip Astute uses for its iSCSI appliance also plays a big part in performance. “The big difference we have is our Data Pump engine,” he said. “That’s our accelerator for protocol processing, and it allows us to drive up performance in a way that no other storage can. We have dedicated offload technology.”

Rosenthal said he’s confident that no other vendor can beat Astute’s performance at its price. “We wanted to put a stake in the ground,” he said. “If others want to shoot at us, that’s fine. Dollars per VDI is the best thing about our system. Others can throw a $300,000 system at us and beat our system, but [ViSX] is a $30,000 system.”

Astute has set the bar, but the benchmark numbers will become more valuable when we have many more systems to compare. Some vendors and users run the long-standing Iometer benchmark for VDI, but Fellows said that benchmark is useless for VDI workloads.

“It’s a completely fabricated benchmark for VDI,” he said. “Iometer does not produce a workload anything like a VDI user would. It’s not realistic, and it’s misleading to users. There are only a few tools that can generate VDI workloads.”

Other VDI benchmarks include VMware View Planner, Citrix Desktop Transformation Accelerator and Login VSI.

October 10, 2012  9:54 AM

NetApp, Cisco steer reference architectures into ‘express’ lane

Dave Raffo

NetApp and Cisco are expanding their FlexPod reference architecture concept to SMBs with the introduction of ExpressPod.

The best way to think of ExpressPod is as FlexPod’s little brother. FlexPod, which has been on the market for just under two years, uses enterprise storage from NetApp and servers and switching from Cisco. ExpressPod includes NetApp’s FAS2000 SMB storage and Cisco low-end Unified Computing System (UCS) servers and Nexus switches.

The first two ExpressPod architectures come in small and medium sizes. Both include Cisco UCS C220 M3 servers and Cisco Nexus 3048 switches. The small version uses NetApp FAS2220 storage with 32 server cores and the medium includes NetApp FAS2240 arrays and 64 server cores. Like FlexPods, ExpressPods are pre-validated by NetApp and Cisco and include an implementation guide. The reference architectures are sold by NetApp channel partners.

Adam Fore, NetApp director of solutions marketing, said ExpressPod architectures are designed for companies with fewer than 500 employees. ExpressPods are tested with VMware virtualization software, but Fore said the configurations also support Microsoft Hyper-V and other hypervisors.

NetApp and Cisco cite ease of use and lower cost as drivers for implementing ExpressPod, but they won’t give pricing information. They refer all pricing questions to their channel partners.

NetApp is taking a different reference architecture strategy on the lower end than its main rival EMC. While Cisco is the preferred server partner for EMC’s Vspex reference architecture on the high end, EMC will push Lenovo channel partners to build Vspex architectures with Lenovo servers at the SMB level.

NetApp also added clustering capabilities from its latest Data Ontap operating system (8.1.1) to FlexPods, allowing them to scale to 24 nodes. And NetApp and Cisco have added a validated FlexPod design for customers running Oracle RAC databases with VMware vSphere and vCenter.

NetApp and Cisco claim they have 1,300 FlexPod customers – up from 175 a year ago.
