Last Friday we published our annual “What to buy a geek for the holidays” story, which included advice from a panel of geek experts on what types of gadgets non-technical types can impress them with this holiday season.
As we were soliciting suggestions for consumer items, however, some enterprise selections also found their way in, many of them quite clever.
So, without further ado, the things storage pros are hoping for that you won’t find in any store.
“How about an [IBM] XIV that I can attach a Redundant Array of Solar Panels (RASP) to and heat my ice fishing house while storing some photos and videos of the fish that got away?” quipped StorageIO analyst Greg Schulz.
Added Wikibon’s David Vellante:
- A secure cloud
- A dumpster to haul all my backup tapes that I’ve converted to disk-based backup.
- A primary storage device that optimizes capacity without sacrificing performance.
- A virtualization performance guru … make it 5 gurus …
Wikibon security analyst Michael Versace had an even more ambitious gift idea:
I’d buy the RSA Security/VCE vision for how to secure the journey to the cloud. I see the current strategy as best-in-class. I’ve heard, read, and dug into the strategy for those that purport to have a cloud security framework, and other than RSA, they are all either as clear as mud, after-the-fact, completely proprietary, or marketware. Questions for RSA/VCE remain around delivery, strength, messaging, and priorities in their roadmap.
Finally, Enterprise Strategy Group’s Bob Laliberte treated us to a rendition of the 12 Days of Storage:
12 Virtual desktops
11 Snapshots of your data
10 Gig Ethernet
9 FCoE Converged Network Adapters Converging
8 Backups that actually work
7 Endpoint security devices
6 Types of deduplication
5 Token Rings
4 internal private clouds
3 tiers of storage
And a Policy engine with a decision tree
You can read the rest of our (perhaps more attainable) gift suggestions here.
P.S. Another holiday gem came our way via Quantum Corp. this morning — an original song they recorded to bolster their competitive claims. They’ve given us permission to post it here online — think of it as our gift to you. Click here to hear the Quantum song.
P.P.S. UK storage pro Martin Glassborow also has a series of posts at his Storagebod blog detailing wish lists for specific vendors, including HP and EMC, as well as a roundup of vendor bloggers’ wishes…
This morning, analyst David Ferris of Ferris Research sent out a note to subscribers of his Ferris News Service relating rumors that “something is rotten in Denmark” with Dell’s SaaS email archiving subsidiary MessageOne.
According to Ferris’s note:
We’re hearing a series of rumors that something is going badly wrong at Dell/MessageOne. Eg:
- They’ve lost a huge amount of customer archived email over the past couple of weeks
- Many customers are making inquiries about other vendors and their ability to ingest/absorb their historic archive data
- One vendor told us they had been asked to help customers move their emails back from Dell/MessageOne and the most efficient way to ingest large amounts of data (10 TB for example)
Our industry sources confirm that there are indeed MessageOne customers making such inquiries, though the exact severity or root cause of the problem has not been established. Several sources say there has been data loss, though it’s not known how much or how many customers are affected.
One thing all our sources agree on is that over the past month or so — and especially in the last week — customers have had difficulty accessing archived email in Dell/MessageOne’s cloud, either because the data has been lost or because it has been mis-indexed. On top of that, sources say customers have been frustrated with the support they’ve received so far in response to this problem and are looking for ways to move off the service without being penalized.
UPDATE: After business hours on the East Coast last Friday, Dell responded with the following statement through a spokesperson:
Dell is committed to delivering ongoing customer satisfaction – we are aware of the issue and are in contact with the customer who has expressed concern over Dell’s service. There are many factors associated with successful email archiving including email formatting, storage management and retention policy frequency, and we are working with the customer to isolate the root cause of this issue.
Note the use of singular nouns — particularly “customer.” Earlier reports seem to indicate a widespread problem, but Dell’s statement seems to imply otherwise. We’ve asked them for further clarification, but given it’s a holiday week, aren’t sure if we’ll hear back.
UPDATE 2: Dell has responded declining further comment.
After more than two years of trying to push out Adaptec CEO Sundi Sundaresh, minority investor Steel Partners finally succeeded Thursday. Sundaresh resigned, and Steel Partners installed its managing director John Quicke as acting CEO and president with the intention of selling Adaptec.
The question is, who will buy and what’s left of the company to sell off? It’s no secret that Adaptec’s business has gone south in recent years. Adaptec lost $14 million for its last fiscal year that ended in March. It lost $1.8 million last quarter, and revenue dwindled to $18.4 million – down from $32 million the previous year.
Maybe a vendor looking to beef up its SAS RAID business – perhaps PMC Sierra – will want Adaptec’s core business. Or Adaptec might spin off one of its interesting new products — the MaxIQ hybrid solid state/hard drive storage system.
But the most valuable part of Adaptec isn’t its technology. It’s cash. Adaptec ended last quarter with $386 million in cash, cash equivalents and marketable securities. That’s more than 90% of its market cap of $416 million. So even if a buyer steps up, don’t assume it will view Adaptec’s technology as a valuable asset.
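The arithmetic behind that claim is worth spelling out. A minimal sketch, using only the figures reported above:

```python
# Back-of-the-envelope on Adaptec's balance sheet, using the figures above.
cash = 386_000_000          # cash, equivalents and marketable securities
market_cap = 416_000_000    # market capitalization

cash_share = cash / market_cap
enterprise_value = market_cap - cash  # rough EV; ignores any debt

print(f"Cash is {cash_share:.0%} of market cap")                   # 93%
print(f"Implied value of the operating business: ${enterprise_value / 1e6:.0f}M")
```

In other words, the market is valuing everything Adaptec actually does — products, patents, people — at roughly $30 million.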
Last podcast before the holiday break. Hope everybody has a safe and happy holiday and a great New Year!
Virtual Instruments, a spinoff from optical communications and test-and-measurement equipment maker Finisar, is packaging its VirtualWisdom and NetWisdom tools into an “emergency SAN troubleshooting service” called SOS-4-SAN.
Virtual Instruments created the NetWisdom software (along with VirtualWisdom, which accounts for virtual server traffic), which can be used standalone to report on FC SAN performance or combined with Finisar’s Traffic Analysis Point (TAP) network sniffer, which sits between a switch and the FC SAN. With or without a TAP, the software copies FC SAN traffic, strips away the payload, and dumps the header information into a database for analysis and diagnostics.
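The “strip the payload, keep the header” approach can be sketched in a few lines. This is a toy illustration, not Virtual Instruments’ actual code; the field layout follows the standard 24-byte Fibre Channel frame header (six 32-bit words, per FC-FS):

```python
import struct

def parse_fc_header(frame: bytes) -> dict:
    """Extract key fields from the 24-byte FC frame header; payload is discarded.

    Illustrative only -- a monitoring tool keeps just this metadata, so the
    database stays small while still supporting per-exchange analysis.
    """
    if len(frame) < 24:
        raise ValueError("frame shorter than an FC header")
    w = struct.unpack(">6I", frame[:24])  # six big-endian 32-bit words
    return {
        "r_ctl":   w[0] >> 24,        # routing control (frame category)
        "d_id":    w[0] & 0xFFFFFF,   # destination port ID
        "s_id":    w[1] & 0xFFFFFF,   # source port ID
        "type":    w[2] >> 24,        # data structure type; 0x08 = SCSI FCP
        "seq_cnt": w[3] & 0xFFFF,     # sequence count
        "ox_id":   w[4] >> 16,        # originator exchange ID
    }
```

Records like these, keyed on (s_id, d_id, ox_id), are enough to reconstruct exchanges and measure response times across the fabric, which is the kind of end-to-end view VI claims its component-vendor rivals lack.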
VI has gotten these tools into some large shops, such as Unilever, the parent company of several household name brands, including Bertolli pasta, Lipton Tea, Slim-Fast diet drinks and Dove soap. Unilever’s SAN vendor Hewlett-Packard brought VI into that installation. After seeing it at work, Unilever’s staff installed NetWisdom permanently in its 5 PB UK data center.
Seems like a pretty good path to market for VI (similar to the one taken by Procedo, which performs compliant data archive migrations and is recommended to customers by major vendors in that market). But this week Virtual Instruments is also offering its troubleshooting tools as a managed service, with its own website, www.sos4sans.com.
“Many Global 2000 customers are not satisfied with the SAN troubleshooting skills of their current SAN component vendors,” VI VP of marketing Len Rosenthal said. “These vendors really only have visibility into their components (HBAs, switches, cabling, arrays) and, unlike Virtual Instruments, they don’t have the ability to measure transaction traffic flows and error conditions across the entire SAN infrastructure.”
However, the partner business isn’t going away, Rosenthal added. “Some of the larger vendors are beginning to resell VI services as a part of their global service offerings,” he said. “We are planning to work with more of them in the near future to either resell our services or train them to offer SAN troubleshooting services themselves using the NetWisdom and VirtualWisdom products.”
According to a VI press release, customers put in a request via website form or phone, and “a qualified Virtual Instruments services professional will immediately respond at no charge to the customer.” The VI services pro runs through a checklist of common problems to try and determine the severity of the problem. If necessary, VI personnel will travel to the customer site and remain there until the issues are resolved. Fees include a daily rate plus expenses for the services engineers.
Broadcom, which failed in its bid to buy Emulex for its Fibre Channel over Ethernet (FCoE) stack earlier this year, is planning to demonstrate FCoE functionality on its 10GbE NetXtreme II controller Tuesday at its analyst day.
NetXtreme II is a single-ASIC card that Broadcom says can provide hardware-based acceleration and offload technology for FCoE and iSCSI. NetXtreme II cards that support Layer 2 Ethernet and iSCSI are already shipping. Burhan Masood, Broadcom’s senior manager of product marketing for high-speed controllers, says the FCoE capability was developed in-house.
“Broadcom is not new to storage, and we intend to be a serious player in this space,” Masood said.
QLogic went to market with the first single-ASIC converged network adapter (CNA) supporting FCoE, and has design wins with IBM, Dell and NetApp for its FCoE card. Emulex picked up its first FCoE design win for its OneConnect card last week with Verari Systems, which is restructuring amid rumors it is going out of business.
Masood says Broadcom has a leg up on QLogic’s single-ASIC 8100 series CNA because Broadcom supports iSCSI on its card. QLogic’s 8100 handles 10-Gig Ethernet and FCoE.
Hewlett-Packard, IBM, and Dell sell Broadcom NetXtreme II cards, and Masood says the card is available for server vendors to sample with FCoE functionality.
So, if Broadcom has FCoE capability on its unified adapters already, why did it need Emulex?
“Emulex may have helped with time to market, but one could argue about that,” Masood said. “We tried to hedge our bets and had in-house development going and we were committed to that as well. This is a home-brewed solution.”
It’s that time of year again, the time when we take stock of the past 12 months and look ahead to what’s coming in the New Year. This week, a topic on many minds was automated tiered storage in the wake of EMC’s shipment of version 1 of its long-awaited FAST (Fully Automated Storage Tiering) feature for Clariion, Symmetrix and Celerra disk arrays.
EMC competitors Hitachi Data Systems and Hewlett-Packard say they will have something similar, and they deny EMC has beaten them to the punch, because FAST won’t move data at the sub-LUN block level until the second half of 2010 at the earliest. IBM is working on similar data placement software.
HDS’s Hu Yoshida pointed out that HDS has had a Tiered Storage Manager offering available that moves volumes among tiers of storage. It doesn’t operate at the sub-LUN level experts have said will be necessary to spur enterprise SSD adoption, but then again, Yoshida said, neither does FAST version 1.
“EMC has been doing this for some time,” Yoshida pointed out, citing the SymOptimizer software that was available for the DMX3 and DMX4, which one EMC customer said caused his array to lock when it performed migrations. (EMC says FAST does not lock the entire array, just the devices where data is being swapped.)
Yoshida also said he wondered why EMC bothered with FAST 1. “It’s usually easier to start with the smallest unit, the chunk or page, than the LUN,” he said. “There’s also more benefit to customers with dynamic provisioning, and moving less data around means the performance overhead on the system’s internal bandwidth is lower.”
Yoshida said HDS is working on page-level automated tiered storage movement for next year.
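The page-level approach Yoshida describes can be sketched in miniature: track I/O per fixed-size page and promote only the hottest pages to the SSD tier, rather than migrating whole LUNs. This is a hypothetical policy for illustration, not any vendor’s actual algorithm; the page size and tier capacity are assumed values.

```python
from collections import Counter

# Toy sketch of sub-LUN ("page"-level) automated tiering.
# Assumed parameters, not any vendor's real configuration:
PAGE_SIZE_MB = 42   # assumed page granularity
SSD_PAGES = 4       # capacity of the fast tier, in pages

def plan_promotions(io_counts: Counter, ssd_resident: set) -> list:
    """Return the hottest pages that are not already on the SSD tier."""
    hottest = [page for page, _ in io_counts.most_common(SSD_PAGES)]
    return [page for page in hottest if page not in ssd_resident]

# Hypothetical I/O counters gathered over a sampling interval:
io_counts = Counter({"page-7": 900, "page-3": 450, "page-1": 20,
                     "page-9": 700, "page-2": 650, "page-5": 5})
print(plan_promotions(io_counts, ssd_resident={"page-7"}))
# → ['page-9', 'page-2', 'page-3']
```

Moving three 42 MB pages instead of three multi-gigabyte LUNs is exactly the bandwidth-overhead argument Yoshida is making.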
Kyle Fitze, marketing director for the HP storage division, said HP will also have something similar to FAST next year. “We think that having policy-based automated migration is a key component for driving adoption of solid-state drives, which are still a very small single-digit percentage of the overall volume of disk drives shipped,” Fitze said. Like HDS, HP has offerings that can move data nondisruptively today, but not automatically according to policy.
Also, at the high end, HP’s and HDS’s disk arrays are one and the same under the covers: the HDS USP-V, which HP OEMs as the XP. It will be interesting to see if the automation each exec was talking about will come from one place or two…
Dell is adding support for 10-Gigabit Ethernet and making other enhancements to its EqualLogic iSCSI SAN arrays as part of today’s launch of 10-GigE products throughout its networking and storage platforms.
Dell has turned its $1.4 billion 2008 acquisition of EqualLogic into the iSCSI market lead, with 34% share according to IDC’s Storage Tracker figures for the third quarter. Today it unveiled a larger capacity system, the EqualLogic PS6500X, and added 10-gigE controllers to its PS6010 and PS6510 arrays. Dell will also support 100 GB SSDs (it currently offers 50 GB SSDs) on EqualLogic PS6510S arrays.
The 6500X with 10,000 rpm SAS drives can scale to 16 nodes per group (up from 12 in previous systems) for 460TB of capacity per group. The 16-drive PS6010 and 48-drive 6510 systems support two 10-gigE ports per controller for a total of four 10-GigE ports per system. Current EqualLogic customers will be able to add a 10-gigE node to upgrade.
Along with 10-GigE iSCSI, Dell is adding Fibre Channel over Ethernet (FCoE) support through storage networking partners QLogic and Brocade. Dell will begin shipping the QLogic 8152 converged network adapter (CNA) for PowerEdge servers and the QLogic 8142 mezzanine card CNA for PowerEdge blades this month, and Brocade DCX Fibre Channel, 8000 series FCoE and RX Ethernet switches in February.
Despite making a splash around its 10-gigE support, Dell senior manager Travis Vigil said he expects a slow migration from gigabit Ethernet. He says early adopters will be in one of two groups: customers running heavy database workloads and those with many virtual servers.
“There’s a small vocal group of customers interested in 10-gig, and they tend to be customers running sequential workloads,” Vigil said. “Also, a lot of virtual environments need bandwidth for 10-gig servers. We think it will be a small but growing course of adoption over the next couple of years.”
Analysts say organizations want to know their vendor has 10-gigE available for an upgrade path, but won’t necessarily rush out to upgrade. iSCSI has grown substantially in recent years largely on GigE, although Hewlett-Packard reports 10-GigE adoption on its LeftHand Networks iSCSI SANs.
Forrester Research senior analyst Andrew Reichman said having 10-gigE capability paves the way for converged Fibre Channel and Ethernet networks. “Unified fabrics between the SAN and LAN will happen with 10-gig, not 1-gig Ethernet,” he said. “For iSCSI to be a viable contender in unified traffic, it has to support the jump to 10-gig. Because it runs at 1-gig now, it can be a gradual transition and less disruptive than FCoE.”
IDC research manager for storage systems Natalya Yezhkova says she doesn’t anticipate an immediate boost in adoption of 10-gigE but sees it implemented in high volumes by 2011.
“In two or three years, 10-gig will largely replace 1-gig,” she said. “I don’t expect iSCSI growth will accelerate with 10-gig, it will be more of an organic replacement [of GigE systems]. It’s no surprise Dell is doing this based on its focus on iSCSI the past two years.”
Weeks after rolling out solid-state drive (SSD) support in its SAN Volume Controller (SVC) based on SSDs from STEC, IBM has confirmed official support for Fusion-io — the partner it first worked with to preview SSDs in SVC.
However, rather than the SVC storage virtualization appliance, Fusion-io Flash storage will go inside System x servers in a “server-deployed storage tier,” according to the IBM press release. IBM calls the device IBM High IOPS SSD PCIe Adapter for System x. It’s available in 160 GB and 320 GB capacities.
When IBM disclosed its STEC partnership for SVC, it was a surprise, given the work it had already done with Fusion-io in a test bed it called Project Quicksilver. IBM officials said they went with STEC drives instead because Fusion-io’s devices would be a separate unit attached to the SVC, while STEC’s SSDs fit inside the SVC and require less rack space and power.
IBM is aiming the High IOPS Adapter based on Fusion-io at use cases such as “data-heavy graphics and 3-D renderings from medical research.”