SuperComputing 2009 is underway this week in Portland, and that means lots of InfiniBand news around products and OEM deals.
QLogic said today it has scored a string of OEM deals for its 40 Gbps InfiniBand devices, bringing its 7300 Series host channel adapters (HCAs) and 1200 Director switches to the IBM System Cluster 1350, the HP Unified Cluster Portfolio, Dell PowerEdge servers and Precision workstations, and SGI CloudRack servers.
Voltaire disclosed that its Vantage 8500 10-Gigabit Ethernet switch, Grid Director 4700 40 Gbps InfiniBand switch, and Unified Fabric Manager software are also available from HP as part of the HP Unified Cluster Portfolio.
Mellanox Technologies said its ConnectX-2 40 Gbps InfiniBand HCAs are available now for HP ProLiant BL, DL and SL series servers, as well as HP BladeSystem c-Class enclosures. Mellanox also said it would have a 120 Gbps InfiniBand switch platform in 2010.
There’s a reason so much InfiniBand news is concentrated around SuperComputing: InfiniBand remains a niche interconnect, prized in high-performance computing for its low latency and high bandwidth. That doesn’t mean the InfiniBand market won’t grow, but it’s unlikely to encroach on the turf of Fibre Channel, Ethernet or converged networks in the data center.
QLogic director of corporate communications Steve Zivanic said his vendor has customers using InfiniBand, Fibre Channel and Ethernet, but InfiniBand stands on its own while the other protocols are beginning to converge around Fibre Channel over Ethernet (FCoE).
For instance, he says, financial institutions are looking to save power in the data center by consolidating with VMware, high density blade servers and FCoE. Some of those same firms use InfiniBand to run their Monte Carlo simulation programs on a separate network.
“People using InfiniBand don’t want to share anything,” Zivanic said. “They want dedicated bandwidth, 40 gigs per second. For high performance applications, people are increasing their presence of InfiniBand. On the other side of the house, for general business applications, that’s where we see FCoE brought in to reduce costs.”
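The Monte Carlo workloads Zivanic mentions are a classic fit for a dedicated low-latency fabric because they farm millions of independent trials out across a cluster. As a rough illustration only (the option-pricing model and all parameters here are invented for the example, not anything QLogic described), a single-node version of such a simulation might look like:

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_trials, seed=42):
    """Price a European call by Monte Carlo: average the discounted
    payoff over n_trials simulated terminal stock prices."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_trials):
        # Geometric Brownian motion terminal price for one trial.
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - k, 0.0)  # call option payoff
    # Discount the average payoff back to today.
    return math.exp(-r * t) * total / n_trials

price = mc_call_price(s0=100.0, k=100.0, r=0.05, sigma=0.2,
                      t=1.0, n_trials=100_000)
```

Every trial is independent, which is why these jobs parallelize so well and why the firms running them at scale care about dedicated interconnect bandwidth.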
(6:08) Spectra Logic looks to leapfrog high-end tape storage market with T-Finity tape library
***PLEASE NOTE: Correction to story — base pricing for the T-Finity starts at $218,500, not $162,800 as originally reported***
(8:52) HP buys 3Com, not Brocade
Claiming its approach to enterprise data security key management will assure users of reliability, CA this week launched a new Encryption Key Manager (EKM) software offering that runs on z/OS mainframe and can manage keys for CA Tape Encryption as well as IBM tape formats.
Stefan Kochishan, director of storage product marketing for CA, said a lack of key management standards for encryption at the various points it’s deployed in the enterprise has hindered encryption adoption. But, he argued, many customers are also concerned with the reliability of open-systems based encryption key managers, since without keys to access it, encrypted data can be lost.
The new z/OS-based product will manage IBM and CA tape encryption instances and automatically mirror keys among mainframes at up to three sites, including replication over SSL and digital certification for data integrity. This method allows keys to be re-created from an alternate location should the primary key manager fail, a key be accidentally deleted, or the primary site be lost in a disaster. Users can also back up the key store to mitigate the threat of rolling corruption in the replication system.
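CA hasn’t published the replication protocol, but the behavior described, mirroring keys to up to three sites and re-creating them from a surviving copy when the primary fails, can be sketched abstractly. All class and method names below are hypothetical, for illustration only:

```python
class KeySite:
    """One key-manager instance at a site; maps key IDs to key material."""
    def __init__(self, name):
        self.name = name
        self.keys = {}
        self.online = True

class MirroredKeyStore:
    """Mirror every key write to all sites (up to three, per the article);
    reads fall back to any surviving replica."""
    def __init__(self, sites):
        assert 1 <= len(sites) <= 3
        self.sites = sites

    def put(self, key_id, key_material):
        # Replicated write: every reachable site gets a copy.
        for site in self.sites:
            if site.online:
                site.keys[key_id] = key_material

    def get(self, key_id):
        # Try the primary first, then fail over to any alternate replica.
        for site in self.sites:
            if site.online and key_id in site.keys:
                return site.keys[key_id]
        raise KeyError(f"{key_id} unrecoverable at all sites")

primary, dr1, dr2 = KeySite("primary"), KeySite("dr-1"), KeySite("dr-2")
store = MirroredKeyStore([primary, dr1, dr2])
store.put("tape-0042", b"secret-key-material")
primary.online = False              # simulate loss of the primary site
recovered = store.get("tape-0042")  # served from a mirror
```

The point of the design is simply that no single key-manager instance is a point of failure for decrypting the tapes.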
“This is the first step in a strategy where we want to be the key manager for other encryption solutions,” Kochishan said. CA is considering managing Sun/StorageTek tape encryption next, though it doesn’t have plans for LTO.
But doesn’t the mainframe and IBM focus create yet another silo for enterprise key management? And what about non-mainframe shops? Kochishan argues that the enterprises most likely to be concerned with advanced key management are financial services companies and banks, which tend to still be running mainframes. Mainframe is also in CA’s DNA.
“It has to be mainframe based,” Kochishan said. “Some companies take distributed systems data and upload it to the mainframe, and have it backed up and tracked through mainframe applications…the mainframe has great reliability and availability which will address customer concerns for high availability and eliminating a single point of failure.”
What about business partners of mainframe-having customers who want to receive encrypted data? Kochishan said customers have a choice of methods to send public keys to business partners. They can send keys on a tape encrypted by CA Tape Encryption, on a natively-encrypted IBM TS1130 tape, or over SSL via replication from the mainframe.
Why not use IBM’s Enterprise Key Manager if you’re already running a z/OS mainframe and an IBM tape library? “IBM EKM has key management in the name but it’s not truly that,” Kochishan said. He says IBM “doesn’t perform auditing, tracking, backup, recovery and expiration” of keys. IBM also has Tivoli Lifecycle Key Manager, but it’s “an extra cost item.” Speaking of cost items, CA’s starting price is $16,377 and an unlimited usage license starts at $54,590.
Kochishan acknowledged key management standards will still be, er, key to encryption adoption, even if CA’s approach has succeeded in allaying users’ reliability concerns. One of CA’s technical architects is on the board of the OASIS standards body working on a standard as we speak. “That is a complaint among customers,” Kochishan said.
Although Hewlett-Packard spent $2.7 billion to buy 3Com Wednesday largely to make it more competitive with Cisco on the Ethernet switching front, the deal will also have implications for Cisco’s main storage competitor Brocade.
First, the deal means HP won’t be buying Brocade – at least not any time soon. HP was considered the most likely company to buy Brocade after word leaked that Brocade was looking for a buyer last month. It appears that HP did consider it – HP executive VP Dave Donatelli said on a webcast explaining the 3Com acquisition that HP looked at all networking options – but decided 3Com’s Ethernet switches and routers were a better fit than the products that Brocade picked up from Foundry.
The 3Com acquisition also means HP won’t follow the lead of IBM and Dell and sign an OEM deal with Brocade for its Ethernet switches. With 3Com’s products and its own ProCurve platform, HP should have enough to fill out its Ethernet lineup.
The deal won’t impact Brocade’s core business – selling Fibre Channel switches. With Cisco and Brocade as its only options, HP will likely continue to lean heavily on Brocade for storage connectivity.
Still, the HP-3Com deal is seen as bad news for Brocade. Several Wall Street analysts downgraded its stock today, and its shares dropped more than a dollar in early trading from Wednesday’s closing price of $9.25.
While talk of Brocade getting acquired has diminished, Wedbush Securities analyst Kaushik Roy raised the possibility that networking vendor Juniper might want Brocade to create “an even more formidable competitor to Cisco.”
“We think that neither Dell nor IBM would be interested in buying Brocade due to Brocade’s OEM model,” Roy wrote today in a note to clients. “Any purchase by one of the server vendors would lead to loss of revenue streams from the other server OEM vendors. We, however, think that Brocade might be a good acquisition target for Juniper.”
3PAR actually beefed up its storage virtualization and provisioning features today without using the word “thin.” The vendor that pioneered thin provisioning rolled out three applications for automating storage management that work in connection with its thin provisioning but are not directly involved with making or keeping arrays “thin.”
Those applications are 3PAR Autonomic Groups, 3PAR Scheduler, and 3PAR Host Explorer. The first two are part of 3PAR’s InForm Operating System and Host Explorer runs on Windows, Linux and Solaris 10 hosts – all without charge.
Autonomic Groups lets administrators automatically provision clusters. When an administrator adds a volume to an Autonomic Group, the volume gets exported automatically to all hosts in the group. 3PAR vice president of marketing Craig Nunes says Autonomic Groups let administrators provision clusters in three clicks: one click to add a cluster of hosts, a second click to create and group volumes, and the third to provision a volume group to a host group. The application automatically provisions all LUNs on the volumes in the group. Using Autonomic Groups, admins can also create a Virtual Copy snapshot of all volumes with one command.
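The grouping behavior Nunes describes, where adding a volume exports it to every host in the group automatically, amounts to keeping the export list equal to the cross-product of a host set and a volume set. A toy model of the idea (the names are illustrative, not 3PAR’s actual API):

```python
class AutonomicGroup:
    """Toy model of group-based provisioning: the export list is
    always the cross-product of the group's hosts and volumes."""
    def __init__(self):
        self.hosts = set()
        self.volumes = set()
        self.exports = set()  # (volume, host) pairs

    def _sync(self):
        # Re-derive exports so every volume reaches every host.
        self.exports = {(v, h) for v in self.volumes for h in self.hosts}

    def add_host(self, host):
        self.hosts.add(host)
        self._sync()  # the new host sees all existing volumes

    def add_volume(self, volume):
        # Adding one volume exports it to all hosts automatically.
        self.volumes.add(volume)
        self._sync()

cluster = AutonomicGroup()
for h in ("esx01", "esx02", "esx03"):
    cluster.add_host(h)
cluster.add_volume("vol-db")
# "vol-db" is now exported to all three hosts with one call.
```

This is what collapses cluster provisioning from per-host LUN exports into the “three clicks” Nunes describes.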
3PAR Scheduler automates creation and deletion of Virtual Copy snaps, and 3PAR Host Explorer is an agent that sits on a server and reports information such as Fibre Channel World Wide Names (WWNs) and host multipath data to 3PAR InServ arrays.
“Clusters are knocking on the door of every data center now with virtual deployments,” Nunes says. “If we’re going to get to the place where the data center is a shared virtualization resource as opposed to a bunch of boxes, you have to start hiding some of the underlying technology.”
Enterprise Strategy Group analyst Tony Palmer says 3PAR has gone farther than just adding wizards for provisioning storage. “It’s really all about virtualizing the storage,” he said. “Thin provisioning is a piece of that, but if you virtualize storage out from under the servers and hosts that use it, you can do anything you want with it. You can thin provision it, create snapshots, and re-provision on the fly while applications are still running.”
Meanwhile, IBM made a “thin” addition today in pursuit of 3PAR. IBM added instant space reclamation to its XIV arrays as part of a package of upgrades that includes general availability of asynchronous mirroring that Big Blue announced in July.
XIV already supported space reclamation – the ability to detect, zero out and release unused storage from thin-provisioned volumes back to the storage pool. What’s new is that this can be done instantly, without physically scrubbing the volumes. But it requires Symantec’s Veritas Storage Foundation, which provides an API for storage vendors to write to.
This is similar to the 3PAR Thin Reclamation for Veritas Storage Foundation capability added last month for InServ arrays. 3PAR and IBM are the only vendors to take advantage of Symantec’s Thin Reclamation API so far.
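Neither vendor details the mechanics, but the core idea of thin reclamation, in which the host-side file system tells the array which blocks it has freed so the array can return that capacity to the shared pool, can be sketched like this (the API shape here is invented for illustration; the real interface is Symantec’s Thin Reclamation API):

```python
class Pool:
    """Shared capacity pool backing many thin volumes."""
    def __init__(self, capacity):
        self.free = capacity
    def take(self, n):
        assert self.free >= n, "pool exhausted"
        self.free -= n
    def give(self, n):
        self.free += n

class ThinVolume:
    """Toy thin-provisioned volume: only written blocks consume pool space."""
    def __init__(self, pool):
        self.pool = pool
        self.allocated = set()  # block numbers backed by real capacity

    def write(self, block):
        if block not in self.allocated:
            self.pool.take(1)   # allocate on first write, not at creation
            self.allocated.add(block)

    def reclaim(self, freed_blocks):
        # The host reports blocks its file system freed; the array
        # releases them back to the pool instantly, with no scrubbing pass.
        released = self.allocated & set(freed_blocks)
        self.allocated -= released
        self.pool.give(len(released))
        return len(released)

pool = Pool(capacity=100)
vol = ThinVolume(pool)
for b in range(10):
    vol.write(b)           # ten blocks drawn from the pool
vol.reclaim(range(5))      # the file system freed blocks 0-4
```

Without that host-to-array signal, the array has no way to know deleted-file blocks are free, which is why volumes that were once thin tend to fatten over time.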
SSD supplier STEC’s stock price has taken a dive since the vendor reported last Tuesday that EMC will carry over its 2009 inventory of Flash drives into 2010. Shares have fallen almost $9.00 to $13.18 at today’s close. According to a report from MarketWatch:
Much of the carryover involves STEC’s Zeus IOPS SSD products. EMC makes up about 90% of STEC’s business for the Zeus IOPS drives, and had placed an order for $120 million of the drives for the second half of this year.
STEC officials said that about $55 million of that order has been delivered, and the rest would be shipped before the end of year.
A flurry of class action lawsuits has been filed accusing STEC executives of misleading investors before making their revelation last Tuesday. This all leads me to wonder if the industry has been wrong about SSD adoption overall.
All EMC would say in a statement released through a spokesperson was “EMC is pleased with its SSD demand and growth. In Q4, EMC will introduce unique FAST (fully automated storage tiering) capabilities, which are expected to increase SSD growth and demand even further.”
Does this inventory carryover send a signal about wider SSD adoption in the market, given how dominant STEC’s share is (and EMC dominates its business)? I asked a couple of analysts for their opinions.
“Well, what I have been hearing is that EMC is giving SSD away for free to try to spur adoption, but so far it doesn’t seem to be working — it’s too costly, and too wasteful without some type of FAST capability,” Forrester Research analyst Andrew Reichman responded in an email. “SSD as a performance add-on is not popular in this economy… It’s interesting to see that STEC can’t make a go of this business even though they have a number of the major storage vendors signed up as partners. That says to me that it’s not competition, but the whole category being slow so far.”
Added Taneja Group analyst Jeff Boles, in another email, “while we’re in the midst of an unusual market that likely over-penalizes STEC for perceived risk, while over-endorsing other companies for perceived value, I remain cautious about the speed of SSD adoption.”
But, he added, the newness of SSD could be creating a vicious cycle of perceived risk.
“What the market needs is a good round of commoditization, brought on by integration of some of this intelligence into the storage system itself,” Boles wrote. “At that point, obsolescence will start to look a bit more unusual, and the roadmap for future devices a little more predictable. After all, if your XYZ array had solid state intelligence in it, and you were buying highly commoditized drives that only changed with the density and performance of the flash memory itself, then there seems to be less risk that your flash investment could be rapidly outdated by the next rev of a drive controller.”
As always, the peanut gallery is invited to weigh in.
HP made some storage updates today as part of a larger announcement aimed at SMBs looking to cut costs, including new Hyper-V bundles.
Storage-related updates include:
- New application-integrated snapshot option for LeftHand iSCSI SANs. LeftHand already had snapshots that supported Microsoft’s Volume Shadow Copy Service (VSS) for snapping the LeftHand SAN, but could not quiesce an application from inside it. This update fills that gap; competitors like Dell EqualLogic and NetApp already offer this capability.
- New NAS interface for D2D data deduplication products. SMBs no longer need to license virtual tape drives to use HP’s low-end data deduplication product.
- New DAT 320 tape drive. SMBs still using tape, on the other hand, have the option of doubling the capacity on DAT tape drives to 320GB. HP claims the new drives also offer up to a 75% performance increase with a 50% lower power draw. Tape, especially for SMBs, is frequently declared dead, but there are still pockets of tape use in this market, particularly in remote and branch offices.
- Six new HP Virtualization Smart Bundles for Microsoft Hyper-V. Full specs on the bundles, which range from an entry-level tower server form factor to rackmount servers with networked storage, are available at HP’s website. These bundles are similar to the ones HP previously rolled out this year for VMware environments.
LeftHand product marketing manager Chris McCall says the Hyper-V bundles are not a counter strike against the new joint venture between VMware, Cisco and EMC. He says HP is supporting both hypervisors, even though VMware is deeply aligned with HP competitors Cisco and EMC. “We’ve done bundles for VMware already because we’ve targeted market size — we did VMware first because they’re the market leader,” he said. “If Microsoft was, we would’ve done Hyper-V first. There’s nothing to read into there.”
QuorumLabs joined the storage world this week with a business continuity-in-a-box appliance for SMBs.
QuorumLabs claims its onQ boxes can help a company recover from any outage with one click, hence its tagline “One-Click Recovery.” It offers four models of the appliance in 1U or 2U configurations ranging from 2 TB to 8 TB of raw storage, beginning at $20,000. (Monthly subscription pricing is also an option).
The appliance takes full and incremental backups, creates virtual clones of systems it protects and does what QuorumLabs calls “file-chunk” deduplication – it only sends parts of files that don’t already exist on any protected server.
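“File-chunk” deduplication as QuorumLabs describes it, sending only the pieces of a file that no protected server already holds, comes down to fingerprinting chunks and shipping only the unseen ones. A simplified sketch follows; fixed-size 4 KB chunks and SHA-256 are assumptions for the example, since the vendor hasn’t disclosed its chunking scheme:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking assumed; real products vary

def chunk_hashes(data):
    """Split data into fixed-size chunks and fingerprint each one."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

def dedupe_send(data, server_index):
    """Return only the chunks whose hashes the server hasn't seen,
    updating the server's hash index as we go."""
    to_send = []
    for digest, chunk in chunk_hashes(data):
        if digest not in server_index:
            server_index.add(digest)
            to_send.append(chunk)
    return to_send

index = set()
# Two identical "A" chunks plus one "B" chunk: the duplicate "A"
# chunk is skipped even within a single file, so 2 chunks go out.
first = dedupe_send(b"A" * 8192 + b"B" * 4096, index)
# On a later backup, only the never-seen "C" chunk is transmitted.
second = dedupe_send(b"A" * 4096 + b"C" * 4096, index)
```

The payoff is in the second call: a mostly-unchanged file costs almost no transfer, which is what makes replicating incrementals to a remote site practical over modest SMB links.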
The vendor recommends having another system at a second site for DR, but is lining up managed service providers with data centers that customers can use to synchronize their data. The appliance does incremental backups and replicates them to the remote site.
QuorumLabs CEO Robert Habibi says customers can use onQ with their existing backup software, but it doesn’t require an additional backup application.
“We’re not selling ourselves as backup, what we provide is business continuity,” Habibi said.
The tricky part for the QuorumLabs gang could be selling themselves to a market that doesn’t know the company. QuorumLabs is a spinoff of Themis Computer, which sells servers to defense contractors. And while nearly every storage startup is headed by people who are known in the storage industry, QuorumLabs’ executives come from outside the storage world.
“They’re addressing the right market and the price is competitive, but they may have a bit of a challenge with market recognition,” IDC analyst Robert Amatruda said. “They have to gain more awareness in the SMB market.”
ZL Technologies Inc. sent out a notice today that its lawsuit against analyst firm Gartner has been dismissed.
In a statement, CEO Kon Leong said:
ZL believes that Gartner’s overwhelming influence on large corporations’ purchasing decisions, and its inaccurate ratings, including its bias in favor of large vendors, combine to pose major competitive hurdles that hurt smaller innovative vendors across all technology sectors. The harm falls not only on new and innovative companies like ZL, but on the enterprise customers who receive faulty purchasing advice, and as a result overspend on inferior technology.
While we are disappointed that the court has dismissed our lawsuit as filed, we are pleased that it has given us leave to amend our complaint, over Gartner opposition. We believe the market should take note that the defense on which Gartner prevailed was its argument that its reports contain “pure opinions,” namely, opinions which are not based on objective facts. In ZL’s view, that is directly contrary to the statements Gartner makes to its customers when selling its allegedly sound research. ZL intends to amend its complaint and refile within 30 days.
Whatever happens from here legally speaking, the lawsuit itself raises interesting points about analysts, purchasing decisions and ethics. Whether Gartner is protected by the First Amendment as speech or not won’t protect it from criticism such as StorageMojo blogger Robin Harris’s recent post, “Gartner’s Magic Hydrant.”
The Magic Quadrant has the analytical rigor of a beauty contest. Implicit and explicit assumptions about customers, markets, technologies, use cases and suppliers obscures more than it reveals. The MQ seeks to rank vendors not only by what their products do, but by what Gartner presumes an enterprise customer should want. They presume too much.
Customers aren’t idiots; they can see that a company isn’t very big. What they don’t know is how well their products work.
Gartner needs to start earning that $1.3 billion, not just collecting it. If the FTC can require lowly bloggers to report vendor freebies and payments, perhaps the day isn’t far off when mighty IT consulting shops will have to do likewise. Kudos to ZLT for noting the emperor’s scanty attire.
Few in the blogosphere or storage Twitterland have stepped forward to vigorously defend the Magic Quadrant, while those who base buying decisions on the Magic Quadrant also come in for criticism.
A sampling of that debate from Harris’s blog–
From a commenter calling himself “Thomas”
Between paying Gartner to think for them and vendors to do the infrastructure, it’s no wonder business execs want to adopt technologies that get rid of no-content vendor pass-throughs that walk the halls calling themselves “IT staff.” Filling out purchase orders and plugging in ethernet cables do not provide competitive advantages and if you cant get a competitive advantage from your IT, then why not outsource it?
From a commenter with the handle “jh”
Gartner is a tool to be used by knowledgeable professionals to help them understand the market, narrow focus and choose product. Nobody buys right off a list. Although you and others may not like the 2 x 2 chart, it provides a good shorthand of the overall market in ways that are easier to talk about than a stack of thousands of pages of product information. If a IT pro or a CIO is just looking for a defensible position, they could spend a lot more money getting a lot less value by hiring some of the consulting organizations I have dealt with…
When I informally polled my Twitter followers about this when the lawsuit was first announced, I also got some varying replies.
Navel-gazing like this about the role of industry analysts in the storage business has popped up before, but to little noticeable effect on how anybody — vendors, users, or analysts — seems to operate. I’m not sure this time will be any different, but the lawsuit may have raised the profile of the debate a bit.