Storage Soup


October 29, 2012  7:35 AM

Per-TB licensing changes behavior, impacts efficiency

Randy Kerns

Once again I’ve run into an information technology director faced with acquiring software for storage that was licensed on a per-terabyte basis. Like others I’ve talked to in that situation, he made his decision based on that charge rather than on a full consideration of what he needed. The cost-per-terabyte charge can be so large that it has an impact on efficient storage operations.

The cost-per-terabyte charge applied in storage varies depending on the product and the vendor. Vendors are not even consistent from product to product on the charges. A few of the different ways they are represented to their customers illustrate this frustrating point (a rough cost comparison follows the list):

  • Per terabyte of managed capacity is a common charge for storage management software. Unfortunately, vendors define managed capacity differently.

  • Terabytes presented to the host is another measure that requires reading the fine print to understand. It usually means the capacity that the operating system can see from the storage system. That method of measuring does not reduce the license cost for data reduced with deduplication or compression by the storage system.
  • “Terabytes used” is a broad-brush charge specific to the application. It usually means the amount of actual data being stored.
  • Total capacity is the amount of raw terabytes of the storage system, regardless of the efficiency of utilization.
  • Replicated terabytes is a common measure for remote replication software that charges by the amount of data moved to the replicated storage system. Usually this is raw terabytes, and it is charged whether or not the data is compressed.

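To see how much the choice of measure can matter, here is a rough back-of-the-envelope sketch. All of the capacities, the price and the deduplication ratio below are invented for illustration only; real vendor pricing varies widely.

```python
# Hypothetical comparison of annual license cost under the per-terabyte
# measures described above. Every number here is made up for illustration.

PRICE_PER_TB = 1_000            # assumed license charge per TB per year

raw_capacity_tb = 200           # total raw capacity of the storage system
presented_tb = 150              # capacity the hosts can see (after RAID overhead)
written_tb = 120                # data the applications have actually written
dedupe_ratio = 3.0              # reduction achieved by the storage system

stored_tb = written_tb / dedupe_ratio   # what physically lands on disk

models = {
    "total (raw) capacity": raw_capacity_tb,
    "presented to host": presented_tb,      # dedupe does not lower this bill
    "terabytes used": written_tb,
    "post-reduction stored": stored_tb,     # what IT hopes to be billed on
}

for name, tb in models.items():
    print(f"{name:>22}: ${tb * PRICE_PER_TB:>10,.0f} per year")
```

Under the "presented to host" or raw-capacity measures, the savings from deduplication and compression never reach the license bill, which is exactly the behavior-distorting effect described next.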
Charging by the terabyte often causes unnatural behavior by IT. There can be big efforts to move data around to isolate it from the per-terabyte charge of software. Another behavioral change is that IT decides not to use the management software that came out best in its evaluations, and instead goes with a product that has a more favorable (in IT’s terminology, “a less ridiculous”) pricing model. These actions mean that the best features of the storage software are not used, and the best product does not always win out.

Vendors have reasons for charging by the terabyte. It produces a continuing revenue stream for them, and they argue that customers continue to get value from their product so they should continue to pay. They usually add a maintenance charge to pay for support and updated versions.

There is a stark contrast between the way the storage system is priced and the way the software is priced. The storage system is purchased with a single price in most cases, and there is a warranty period of several years. Per-terabyte software licensing appears to be a gold mine compared to the payback from the storage hardware.

Maybe the charge per terabyte is not really equitable for customers, and their dislike of the practice (much stronger terminology would be appropriate) is justified.  It certainly gives validation to the open source movement.

Licensing charges do affect product and management decisions, and lead to less than optimal solutions. They also keep a product from becoming as pervasive (measured by the number of accounts using it) as its value would suggest. Making customers change their behavior because of the pricing model is an example of vendors not listening to their customers and inviting competition.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

October 25, 2012  6:52 PM

Apple, Facebook still love Fusion-io

Dave Raffo

Fusion-io CEO David Flynn said the server flash vendor is selling into more enterprises that want to speed database performance. But the bulk of its business continues to come from standbys Apple and Facebook.

Fusion-io reported $118.1 million in revenue last quarter, up 59% year-over-year and up 11% from the previous quarter. The company had net income of $14.9 million, up 52% from the previous quarter.

And it had Facebook and Apple largely to thank for its success. They have been Fusion-io’s largest customers almost from the start, and last quarter combined for 56% of its revenue. One of the two – Fusion-io didn’t disclose which one – placed an unexpected $10 million order that put Fusion-io over its forecast for the quarter. Another 14% of the revenue came from sales through Hewlett-Packard, which sells Fusion-io cards in its servers.

Flynn said traditional enterprises are using Fusion-io products, particularly its ION Data Accelerator software, to improve database performance, and that virtual desktop infrastructure (VDI) is also becoming a key application for flash.

Flynn said Fusion-io’s recently announced partnerships with NetApp and Cisco will not produce substantial sales until early next year. He also said Fusion-io is a better partner than Violin Memory, whose reseller deal with HP for its all-flash arrays is under fire.

“Our products and go-to-market strategy are designed to complement our partners’ existing storage and server businesses,” Flynn said. “By contrast, the proprietary flash appliances offered by Violin Memory are not designed to complement but instead to attempt to replace.

“The Violin relationship with HP has been strained for a long time. [Violin] hadn’t really made a good partner because its products are more in conflict with HP’s.”


October 25, 2012  12:34 PM

Quantum spins up disk sales as tape withers

Dave Raffo

Quantum might have turned the corner with its disk backup and storage software products last quarter, just as its tape sales took a big dip.

Quantum reported $42.4 million in revenue from disk and software last quarter, topping the $40 million it needs to break even on those products for the first time. Disk and software revenue grew 18% year-over-year, and CEO Jon Gacek said it could hit $50 million this quarter.

However, a steep decline in tape sales caused Quantum to lose $4.9 million on its $147.3 million in overall revenue. Gacek blamed the tape sales drop on customers waiting for the transition from the LTO-5 to LTO-6 format. Quantum’s overall revenue fell 3% from last year, mainly because of a $13.6 million drop in OEM tape automation sales.

Quantum’s disk and software category consists of its DXi disk deduplication target appliances, vmPro virtual machine backup and StorNext archiving for large files. Revenue from those products increased 18% year-over-year and 38% from the previous quarter. Gacek said the DXi8500 enterprise platform increased 30% year-over-year and 129% sequentially, the midrange DXi6700 slipped 6% and the entry-level DXi4000 was up slightly.

Gacek also said the DXi win rate was 55% against the competition, which in almost every case is EMC Data Domain. He said the win rate was even higher for the DXi8500 despite EMC’s attempts to throw its weight around.

“EMC is not trying to compete based on products,” Gacek said. “They’re trying to play the big-company game of saying ‘We’re the market share leader, we’re so much bigger than [Quantum], look at [Quantum’s] market share, they don’t even make money.’ Sometimes that works, but sometimes it backfires with customers looking to make a technology buy.”

Quantum added 120 new DXi customers and 65 StorNext customers in the quarter. It sold the first of what Gacek called a “wide area storage” product combining OEM object-storage technology from Amplidata with StorNext.

“That’s not even generally available yet, but one customer was super excited and took a pre-GA system,” said Gacek, adding the customer was a government agency.

Quantum forecasted an uptick to $160 million in revenue this quarter. Gacek said that besides a possible tape rebound, he’s looking forward to continued increases in disk and software and early sales in Quantum’s fledgling Q-Cloud backup and disaster recovery offerings.

“If we’re going to be a specialist in backup, we have to give the customer something different than the competition,” he said. “EMC doesn’t offer anything like [Q-Cloud], and I don’t think they will. I don’t think the revenue piece is as important as our ability to engage with the customer in a provocative way.”


October 23, 2012  2:30 PM

Seagate updates Savvio, Constellation hard drives

Sonia Lelii

Seagate Technology has refreshed three of its enterprise hard disk drives, the Savvio 10K.6 for enterprise-level performance, the Constellation ES.3 for bulk data storage and Constellation CS for replicated bulk storage in the cloud.

The company has split its Constellation 3.5-inch family of hard drives into capacity-optimized devices and cost-optimized hard disk drives. Seagate’s Constellation ES.3 drives, also called the Seagate Enterprise Capacity 3.5 HDD, are high-capacity drives for bulk data center applications. The ES.3 enterprise drives have an increased capacity of 4 TB in a 3.5-inch form factor for tier two storage.

The ES.3 HDDs run at 7,200 RPM and are optimized for replicated storage in cloud systems, cloud storage servers, cloud storage arrays and cloud backup storage. They are available in 500 GB, 1 TB, 2 TB, 3 TB and 4 TB capacities and targeted for high-workload, multi-drive data centers with SAN, NAS and direct-attached storage arrays. The devices come with a 64 MB or 128 MB cache and feature 6 Gbps SAS or SATA interfaces, while sustaining 1.4 million hours MTBF compared to the previous 1.2 million.

“The 7200 drives store lots of data that is not immediately available. It’s more of a workhorse of the storage system,” said Barbara Craig, Seagate’s senior product marketing manager.

Seagate’s low-powered, entry-level Constellation CS drives, also called the Seagate Enterprise Value HDD, are designed for high-capacity bulk storage, specifically for cloud service providers that build replicated environments – replicated cloud storage, cloud storage servers, cloud storage arrays and cloud backup storage in DAS and NAS systems. The devices, which have an instant secure erase option, come in 1 TB, 2 TB and 3 TB capacities with a 6 Gbps SAS interface. The 7,200 RPM drives are rated at 0.8 million hours MTBF.

Seagate’s third new drive is the Savvio 10K.6, a 2.5-inch drive also called the Seagate Enterprise Performance 10K hard drive. It comes in a smaller form factor and delivers faster performance than the previous Savvio 10K.5 version. The new 10K.6 drives are available in 300 GB, 450 GB, 600 GB and 900 GB capacities. The drives are designed with 6 Gbps SAS or 4 Gbps Fibre Channel interfaces, and the 900 GB capacity drive has a self-encrypting drive (SED) option. The Savvio 10K.6 also has a sustained data rate of 204 MB per second.

“It has up to 50 percent more capacity and it’s in a smaller form factor,” said Craig. “It is 21 percent faster than the prior generation and it is equal to 3.5-inch, 15K-RPM sequential performance. We also added a RAID rebuild feature. We do more of a copy function. The good data is copied to reduce the time to rebuild by 80 percent.”


October 23, 2012  7:16 AM

HP no longer plays duet with Violin

Dave Raffo

As solid-state array vendor Violin Memory prepares to go public, its relationship with Hewlett-Packard (HP) is cooling.

Violin was the subject of two Bloomberg stories last week. Last Wednesday, Bloomberg reported that Violin had quietly filed its initial public offering (IPO) to become a public company. No surprise there. Violin is heavily funded with more than $150 million, and CEO Don Basile has talked of going public for months. Bloomberg followed that on Friday by reporting that HP is ending a reseller deal with Violin that has been in place for Violin Memory Arrays (VMAs) since 2010. HP indicated it doesn’t need Violin because it sells all-flash models of its flagship 3PAR storage array.

Losing the HP stream of revenue could damage Violin’s IPO plans. Violin has not commented on the IPO filing but a Violin spokeswoman released a statement about “rumors and speculation floating around” concerning the HP deal.

According to Violin:

“The current HP Violin relationship remains unchanged. The VMA product family (the Violin 3000 and vSHARE software) continue to be available to customers via HP as per the announced relationship. HP engineering continues to certify the VMA with additional servers, operating systems and joint selling and promotions. POC (proof of concepts) are currently active as are additional HP certifications.

“HP has stated 3PAR is the long term strategic direction for their company. Violin offers other products like the Violin 6000 through both our direct sales and our global reseller network as well as other software and system vendors which have been announced over the past 12 months.”

HP’s response was not exactly warm and friendly toward Violin. An HP spokesman answered Violin’s claim by saying “HP 3PAR is our strategic platform for solid-state storage.” That was the same statement that appeared in the Bloomberg story Friday. If HP wanted to backtrack, its response would have been more elaborate.

Another source familiar with HP’s strategy said the original reseller deal is still in place but HP will not extend it. It will, however, honor the deal if customers want to buy a Violin array from HP.

Reading between the lines tells me HP will strongly pitch a 3PAR solid-state array before selling anything from Violin. The reseller deal remains in place, but a reseller deal on paper means nothing if the company that is supposed to do the reselling ignores it.


October 19, 2012  4:00 PM

SNW notebook: Fujitsu, Avere strike up a match

Dave Raffo

SANTA CLARA, Calif. – News and notes from this week’s Fall Storage Networking World (SNW):

Avere and Fujitsu America have forged a “meet in the channel” partnership matching Avere’s NAS acceleration device with Fujitsu storage arrays.

The vendors and their channel partners are bundling a two-node Avere FXT 3100 Edge filer cluster with a Fujitsu Core filer built on UDS NAS controllers and an Eternus DX80 S2 Disk Storage System. Avere and Fujitsu call it the “100/100/100” bundle because it provides 100 TB of capacity and 100,000 IOPS for $100,000. Larger bundles are available, up to 2 PB and 2.5 million IOPS.

Avere CEO Ron Bianchini said the idea for the bundles came about because Avere and Fujitsu had common media and entertainment customers using their products. “We’ve been meeting often in customer sites,” he said. “They [Fujitsu] do data management well, and we do off-load well.” …

Former LeftHand Networks CEO Bill Chambers has taken over the CEO role at Starboard Storage. Chambers was LeftHand’s CEO when Hewlett-Packard bought the iSCSI vendor for $360 million in 2008. He joined Starboard as executive chairman shortly before the vendor came out of stealth earlier this year as a re-launched version of Reldata. He replaces Victor Walker as CEO. Walker had been CEO of Reldata since early 2011 and stayed on through the re-launch. Starboard hasn’t announced the CEO change, but Chambers is listed as CEO on the company website. …

Sepaton began shipping its S2100-ES3 virtual tape library (VTL) with Hitachi Data Systems HUS 100 storage on the back end and its latest software version.

The system can scale to 2 PB and the new software supports DBeXstream technology that speeds deduplication of multistreamed and multiplexed enterprise databases. Sepaton has used HDS storage in its VTLs since 2010, but the HUS platform hit the market in August.

Pricing for the S2100-ES3 Series starts at $335,000, and S2100-ES2 customers can add new HUS 110 storage to their libraries. …

Imation has kept busy this year integrating data security acquisitions into its disk and removable drive storage, establishing its CyberSafe brand of encryption, identity and authentication, and key management capabilities.

Next up is moving the data deduplication it acquired from Nine Technology last December into its backup products. Brian Findlay, executive director of Imation’s storage product management, said the vendor will integrate dedupe into its DataGuard appliances that use hard drive and RDX removable storage. Imation is also working on an integrated storage appliance using Nine backup technology.

The 2013 roadmap also includes a private cloud backup offering that Imation will either host itself or sell as software for service providers to host. Imation now supports public clouds through cloud seeding and replication between sites.

“The cloud is coming,” Findlay said. “SMBs are still comfortable with onsite backup. It’s one thing to get your data up there, but another thing to restore. But you can move a lot of data to the cloud with RDX.”


October 18, 2012  8:09 AM

When organizational issues inhibit IT progress

Randy Kerns

Information technology (IT) must continue to adapt and change as new demands arise and new technology is introduced. The new demands include more capacity for storing information as well as changes in procedures such as security and compliance.

The introduction of new technology presents the opportunity to obtain greater value from IT investments. Deploying server virtualization technology and increasing the number of servers virtualized has brought economic value and IT agility. New technology is a competitive issue, helping businesses handle information more effectively and faster.

Still, many IT operations take longer than they should to introduce and embrace new technology. So what is holding back companies from taking an obvious advantage? Why does it take a major reboot of IT to make changes for some organizations? Looking at many IT operations, there are common reasons that delay seizing the opportunities.

The most common reason is that the organizational structure for IT inhibits transformational changes. The structure creates a natural resistance to change for several reasons:

• There are many people involved in direction setting and approvals. Some may be other business unit owners or related organizations.

• Stakeholders brought in to participate in decision processes need to be informed and educated on technology and requirement changes.

• With more people, the parochialism can result in new demands that disrupt any efficient process.

To illustrate this problem, I will go through one of many examples that I’ve dealt with recently. In this case, the IT organization had been compartmentalized over time after individuals were promoted and functions separated.

The result was a number of IT directors that had equal authority and covered areas of specialization in IT. Other IT directors were given responsibilities to be the advocates for specific business units, again with equal weight. These directors could negate any change in IT that they did not agree with, and the CIO could not force change without consensus.

This meant that all substantive decisions required the cooperation or endorsement of all internal IT directors and the business units represented by the other IT directors. Education for a technology change required large group meetings, which were hard to schedule because of limited availability of the parties.

Compounding the problem, various vendors called on the individual IT directors and created internal competition and confusion. That caused delays in needed changes and frustration that it took more work to educate and convince others than to do actual implementation. The IT organization kept falling behind in technology and other advances. It was perceived as having archaic operations. Ultimately, an examination of outsourcing was seen as a means to implement change.

Business structures for IT need to match the requirements and pace of change for IT. They must allow for change as a natural process for competitive improvement. The decision-making process must be effective and timely and not mired in the inclusiveness of every possible person. The structure must include strategic planning as part of an organizational process. The process should include technology evaluation, education, and understanding industry best practices. Without a structure that matches the change rate required with IT, IT will periodically have to do a major reset.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 16, 2012  4:40 PM

Microsoft strengthens cloud play with StorSimple acquisition

Sonia Lelii

Microsoft Corp. today announced it is acquiring StorSimple, a cloud integrated storage (CIS) provider that uses its appliances to consolidate primary storage, archiving, backup and disaster recovery into the cloud. The terms of the deal were not disclosed.

The cloud appliance company has been at the forefront of designing its technology so companies can converge on-premise primary storage, backup and archiving with the cloud. Its appliances provide full primary storage capabilities, with up to 100 TB of on-premise storage capacity for enterprise applications, while pushing data into the cloud. StorSimple’s 2.0 software version, which does automatic tiering across solid-state drives (SSDs), SAS and the cloud, has a volume prioritization feature for moving data between local and cloud tiers.
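StorSimple has not published the details of its tiering algorithm, so the following is only a generic sketch of how a priority-weighted placement decision between local and cloud tiers might look. The tier names, scoring and thresholds are invented for illustration.

```python
from datetime import datetime, timedelta

# Generic illustration of priority-weighted tiering between local (SSD/SAS)
# and cloud storage. This is NOT StorSimple's actual algorithm; the
# thresholds and scoring below are invented for illustration only.

def place_block(last_access: datetime, volume_priority: int, now: datetime) -> str:
    """Pick a tier for a block: hotter and higher-priority data stays local."""
    age_days = (now - last_access).days
    # Higher-priority volumes tolerate more age before moving down a tier.
    effective_age = age_days / max(volume_priority, 1)
    if effective_age < 1:
        return "ssd"
    if effective_age < 30:
        return "sas"
    return "cloud"

now = datetime(2012, 10, 16)
print(place_block(now - timedelta(hours=6), volume_priority=3, now=now))   # ssd
print(place_block(now - timedelta(days=10), volume_priority=1, now=now))   # sas
print(place_block(now - timedelta(days=90), volume_priority=1, now=now))   # cloud
```

The point of volume prioritization in a scheme like this is that a high-priority volume’s data lingers on the local tiers longer than equally old data on a low-priority volume.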

“This tells me Microsoft is serious about getting into primary storage,” said Arun Taneja, founder, president and consulting analyst for the Taneja Group. “They can use StorSimple as an on-ramp to their (Azure) cloud, but they don’t need StorSimple for that. StorSimple goes way beyond an on-ramp. Amazon built their own gateway for their cloud, so Microsoft must have more in mind for StorSimple.”

Mike Schutz, Microsoft’s general manager of the server and tools business division, would not comment on whether the Santa Clara, CA.-based StorSimple will be folded into Microsoft. He also declined to discuss any other specific plans for its new acquisition.

“We just signed an agreement. The deal is not done (and) we will share more details after we close,” he said. “(But) StorSimple’s solution and technology is tightly aligned with our strategy of what we call Cloud OS. It’s a hybrid cloud focus. This is a perfect match for our cloud strategy.”

StorSimple’s systems are optimized for Microsoft applications such as Exchange and SharePoint, user files and virtual appliances. It uses Microsoft Volume Shadow Copy Service (VSS) to take snapshots of Microsoft applications and the Windows file system for backups. It also is certified with VMware.

“StorSimple started from the ground up doing Microsoft applications,” said Steve Duplessie, founder and senior analyst of Enterprise Strategy Group (ESG). “It was really specific around Microsoft, Microsoft, Microsoft for applications. This is not about Microsoft trying to be a storage company. It’s trying to be a cloud-enabled company.”

StorSimple also has a number of cloud provider partnerships, including Microsoft Azure, Amazon Web Services, Rackspace, EMC Atmos and Nirvanix. But Microsoft’s Schutz said there are “no plans to change the current partners StorSimple has today.”


October 16, 2012  7:02 AM

Gridstore adds $12.5M to funding grid

Dave Raffo

Startup Gridstore today closed a $12.5 million funding round to build out its sales channel and accelerate development of its scale-out NAS system.

Gridstore uses virtual controllers that install on client devices and spreads capacity among 1 TB or 2 TB nodes. Customers scale by adding virtual controllers and nodes to the grid. Gridstore stripes data across the nodes for fault tolerance, so customers can replace failed nodes by attaching new nodes and the storage pool can survive the loss of multiple nodes.

Gridstore CEO Kelly Murphy said the vendor’s goal is to “turn storage into a simple set of building blocks that you can add on to, and pay as you go.”

Murphy said Gridstore has about 40 customers, about half of those in education and another quarter of them service providers. He said the startup is ready to build out its channel and improve its visibility. Geoff Barrall, who founded high-end NAS vendor BlueArc and consumer/SMB file storage startup Drobo, joined Gridstore as chairman earlier this year.

He said the funding will also be used to drive further product development, with the addition of solid-state drives (SSDs) among its roadmap items. “That will be an excellent fit in time,” he said. “You can look for some things early next year.”

Gridstore originally started in the SMB market, and has also moved up to small enterprises. Its main competitors are lower-end NAS systems from EMC and NetApp, although Murphy said his company rarely competes with EMC’s Isilon enterprise clustered NAS.

GGV Capital led the Series A funding round with Onset Ventures participating.


October 15, 2012  7:41 AM

Amplidata adds denser, faster object storage nodes

Dave Raffo

Fresh off of a CEO change and funding round, object storage vendor Amplidata today added a larger capacity storage node and an operating system upgrade that supports 16 TB object sizes.

The AmpliStor AS36 is Amplidata’s densest, highest-capacity node. It holds twelve 3 TB drives – up from 10 on the AS30 – for 36 TB per node and can scale to 1.4 PB in a rack. Amplidata also gave the AS36 a performance boost over its predecessors through the addition of an Intel E3 processor and the option to add a 240 GB multi-level cell (MLC) Intel SSD to the storage node. Amplidata previously used SSDs in its controllers but not in the storage nodes.

Paul Speciale, Amplidata’s VP of products, said the SSDs are included for routing small files. He said the Sandy Bridge CPUs result in a 40% speed increase over the AS30 because they can sustain full line-rate performance to each node.

The biggest improvement in AmpliStor 3.0 software is the ability to support larger files. The previous version supported 500 GB files, but 3.0 is enhanced for big file customers. Future versions will likely support even larger objects than 16 TB, but Amplidata has to make sure the larger files work with its erasure coding.

“We think our architecture can go higher as far as object sizes, but we have to put it into the test cycle,” Speciale said. “We also have to be able to repair these drives in a reasonable amount of time.”
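Amplidata’s erasure coding is proprietary and tolerates multiple simultaneous failures; the toy sketch below uses a single XOR parity chunk (which survives only one loss) purely to illustrate the chunk-and-repair idea behind Speciale’s comment, and why bigger objects and drives mean more data to read during a repair. All names and parameters here are invented.

```python
from functools import reduce

# Toy illustration of erasure-coded object storage: split an object into
# k data chunks plus one XOR parity chunk, then rebuild a lost chunk from
# the survivors. Real codes (as in AmpliStor) survive several simultaneous
# losses; this simplified version survives only one.

def split(data: bytes, k: int) -> list:
    """Split data into k equal-sized chunks, zero-padding the last one."""
    chunk_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(chunk_len * k, b"\0")
    return [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    chunks = split(data, k)
    return chunks + [reduce(xor, chunks)]          # append the parity chunk

def repair(chunks: list, lost: int) -> bytes:
    """Rebuild the chunk at index `lost` by XOR-ing every surviving chunk."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return reduce(xor, survivors)

obj = b"a large media object ..." * 4
stored = encode(obj, k=4)                          # 4 data chunks + 1 parity
rebuilt = repair(stored, lost=2)
assert rebuilt == stored[2]                        # the lost chunk is recovered
# Repair has to read every surviving chunk, so repair time grows with the
# size of the object and of the drives holding it.
```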

AmpliStor 3.0 also can rebalance storage on nodes automatically after adding capacity. Previous versions allowed customers to add storage on the fly, but did not automatically rebalance.

Last month Amplidata named former Intel executive and Atempo CEO Mike Wall as its new chief, replacing founder Wim De Wispelaere. De Wispelaere remains with the company as chief technology officer.

Amplidata also received $6 million in funding from backup and archiving vendor Quantum at the time, bringing its total funding to $20 million. Quantum has an OEM deal with Amplidata to sell AmpliStor technology under the Quantum StorNext archiving brand.

AmpliStor products are used in cloud storage as well as for archiving. Speciale said he expects the Quantum deal to drive AmpliStor more into media/entertainment, genomics and government markets where StorNext has most traction.

