Storage Soup

A SearchStorage.com blog.


July 6, 2010  3:09 PM

Coraid pulls Z-Series NAS after NetApp threatens lawsuit



Posted by: Dave Raffo
file systems, NAS

Coraid has suspended its recently launched EtherDrive Z-Series NAS appliance after receiving a legal threat from NetApp, which claims the ZFS-based Z-Series infringes NetApp patents.

In a letter to customers notifying them of the situation, Coraid CEO Kevin Brown said he hopes to continue selling the ZFS-based NAS appliance after a long-standing legal dispute between NetApp and Sun (now Oracle) gets settled.

“We hope to reinstate our Z-Series offering in the coming months,” Brown wrote to customers, after noting that the ZFS file system has been downloaded nearly one million times by customers and vendors.

“We made the decision to suspend shipment after receiving a legal threat letter from NetApp Inc., suggesting that the open source ZFS file system planned for inclusion with our EtherDrive Z-Series infringes NetApp patents.”

NetApp filed a lawsuit against ZFS creator Sun in 2007 claiming patent infringement and Sun promptly countersued. The suits were still pending when Oracle acquired Sun, and now Oracle and NetApp are attempting to settle out of court.

Coraid launched its Z-Series May 19, adding the NAS device to its ATA over Ethernet (AoE) SAN platform. Brown said Coraid is still selling the SAN systems, which are not affected by the NetApp patent charge.

In his letter to customers, Brown included a letter dated May 26 that he received from Edward Reines, a patent litigation attorney from Weil, Gotshal & Manges LLP who represents NetApp in its ZFS litigation. Reines’ letter read in part: “Coraid must cease infringement of NetApp’s patents and we reserve our rights to seek all appropriate remedies for any infringement.”

The letter to Brown points out that Coraid uses the term “unified storage” to describe the Z-Series, and NetApp’s patents involved in the litigation with Sun “cover a host of features, including unified storage …”

Brown, who served as VP for NetApp’s Decru security platform for 18 months before joining Coraid, apparently took the NetApp threat seriously.

His letter to customers didn’t say how many customers, if any, have purchased the Z-Series, only that Coraid has received “dozens of customer inquiries.”

Compellent, which is more competitive with NetApp than Coraid is, launched its zNAS system based on ZFS in April. If Compellent has been threatened by NetApp, it hasn’t said so publicly.

July 6, 2010  2:18 PM

The Cloud goes to Washington



Posted by: Dave Raffo

Last week’s Congressional hearing on cloud computing served as a condensed version of the cloud debate that has been ongoing for about two years now. Congress heard definitions of different types of clouds, government representatives voiced concerns over security and other issues associated with cloud computing, and vendors extolled the cloud’s virtues while promising their technology can overcome all of its hurdles.

But the hearing made it clear that the federal government — which is forecast to spend about $76 billion on IT this year — is serious about the cloud. Government agencies see the cloud as a method of data center consolidation. According to federal CIO Vivek Kundra, the U.S. government has nearly tripled the number of data centers from 432 to 1,100 over the past decade while many corporations have reduced their data centers.

There wasn’t a lot of specific talk about storage during the hearing, although Nick Combs, CTO of EMC’s Federal division, was part of the vendor panel.

“There’s a whole lot of concern about the number of data centers out there in the federal government today, and what’s the right number,” Combs said in an interview after the hearing.

Much of Combs’ testimony focused on security, which EMC delivers through its RSA division. The security talk is also where the various types of clouds came in.

“There were lots of questions around security in the cloud and where clouds wouldn’t be appropriate for government information,” Combs said. “We talked about the multitenant cloud – are there sufficient protections to put information in the cloud and what level of risk are we talking? How do we provide compliance and meet government regulations? Only public-facing information should be placed on public clouds. Information that is sensitive in nature needs to be protected in more private-type clouds. That seemed to resonate pretty well.”

As part of his prepared remarks, Combs offered the NIST definitions of four types of clouds:

- Private cloud is infrastructure deployed and operated exclusively for an organization or enterprise. It may be managed by the organization or by a third party, either on or off premises.
- Community cloud is infrastructure shared by multiple organizations with similar missions, requirements, security concerns, etc. It also may be managed by the organizations or by a third party, on or off premises.
- Public cloud is infrastructure made available to the general public. It is owned and operated by an organization selling cloud services.
- Hybrid cloud is infrastructure consisting of two or more clouds (private, community, or public) that remain unique entities but that are tied together by standardized or proprietary technology that enables data and application portability.
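
For readers who want that taxonomy in a form a program can consume, here is one way to model the four deployment models in Python. This is a hypothetical sketch for illustration, not an official NIST schema:

    from enum import Enum

    class CloudDeploymentModel(Enum):
        """The four NIST cloud deployment models summarized above.

        A hypothetical illustration, not an official NIST schema.
        """
        PRIVATE = "operated exclusively for one organization"
        COMMUNITY = "shared by organizations with similar missions and requirements"
        PUBLIC = "available to the general public; owned by a cloud service provider"
        HYBRID = "two or more clouds tied together for data and application portability"

    # Combs' security point maps directly onto the taxonomy: public-facing
    # information can live in PUBLIC clouds, while sensitive data belongs
    # in PRIVATE (or COMMUNITY) clouds.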


June 30, 2010  4:10 PM

Oracle gives Sun 7000 storage a Fibre Channel option



Posted by: Dave Raffo
Fibre Channel SAN, multiprotocol storage

Oracle upgraded its flagship disk storage platform this week, adding Fibre Channel host connectivity to the Sun Storage 7000 multiprotocol series while doubling down on its SAS disk interface support.

Sun originally launched the 7000 as a ZFS-based Ethernet platform, mainly focused on handling file data with iSCSI thrown in for block storage. That was in late 2008, more than a year before Oracle closed its acquisition of Sun. But Oracle’s senior director of storage products Jason Schaffer says customers wanted Fibre Channel to make the 7000 better suited for primary storage.

“When we first launched the 7000, we had a strong lineup of Fibre Channel with our 6000 series and the gap in our portfolio was NAS,” he said. “Early adopters used [the 7000] mainly for disk-to-disk-to-tape backup. Over time people started to trust it in other environments, like for virtual servers, and it was being brought in more as primary storage for consolidated workloads.”

Schaffer said current customers can download software for Fibre Channel support. About 15% of 7000 customers have already downloaded the software over the past few months, he said, even before Oracle officially announced FC support.

The 7000 also features built-in data deduplication, which Sun added to ZFS late last year. Another big part of the 7000 upgrade is support for 2 TB SAS drives, doubling the total capacity of the system to 576 TB. Schaffer says he sees no need for FC drives because the 7000 supports 6 Gbps SAS, solid state drives (SSDs) and SATA – especially with ZFS’ ability to use SSDs as high-speed disk cache.

“DRAM, flash and SAS drives are more cost efficient than 15,000 RPM Fibre Channel drives,” he said.
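
That last point is the interesting one: ZFS can layer DRAM as a first-level cache and SSDs as a second-level read cache in front of spinning SAS or SATA disks, so frequently read blocks are served from flash rather than from the drives. Here is a minimal Python sketch of that caching idea. It illustrates the concept only, not ZFS’s actual implementation, and the backing-store read function is a hypothetical stand-in for a disk read:

    from collections import OrderedDict

    class SSDReadCache:
        """Toy model of an SSD read cache in front of slower disks.

        A concept sketch only, not ZFS's implementation. 'backing_read'
        stands in for a read from a spinning SAS or SATA drive.
        """

        def __init__(self, capacity_blocks, backing_read):
            self.capacity = capacity_blocks
            self.backing_read = backing_read      # function: block_id -> bytes
            self.cache = OrderedDict()            # block_id -> bytes, in LRU order

        def read(self, block_id):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)  # refresh LRU position
                return self.cache[block_id]       # fast path: served from flash
            data = self.backing_read(block_id)    # slow path: go to disk
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used block
            return data

    # Example: the second read of block 1 never touches the disk.
    trips_to_disk = []
    def disk_read(block_id):
        trips_to_disk.append(block_id)
        return b"data"

    cache = SSDReadCache(capacity_blocks=2, backing_read=disk_read)
    cache.read(1)
    cache.read(1)
    assert trips_to_disk == [1]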


Oracle severed its OEM deal with Hitachi Data Systems to sell the Sun StorageTek 9000 enterprise SAN systems earlier this year, choosing to concentrate on the 7000 platform. But Schaffer said Oracle also remains committed to the Sun StorageTek 6000 series of Fibre Channel arrays, which consist of LSI Corp. controllers and Sun management software. “We’re still supporting and growing the 6000 platform,” he said, “although the bulk of our engineering will be on the 7000 series going forward.”


June 23, 2010  7:07 PM

HP beefs up convergence, solid state storage with partners



Posted by: Dave Raffo

After two days of trumpeting its own new product launches at HP TechForum, Hewlett-Packard today disclosed new deals with storage and networking partners. Aside from HP’s sworn enemy Cisco, all the major storage networking players had HP news to share.

Both Emulex and QLogic have ASICs involved with HP’s FlexFabric 10-Gigabit Ethernet adapters. The adapters are part of HP’s Virtual Connect platform for a converged network-storage infrastructure.

HP is using Emulex’s OneConnect Universal Converged Network Adapter (UCNA) silicon in the HP NC551i Dual Port FlexFabric 10Gb Adapter for ProLiant G7 server blades. This includes the LAN on motherboard (LOM) technology that Emulex recently acquired from ServerEngines. Emulex UCNAs provide hardware protocol offload for TCP/IP, iSCSI and Fibre Channel over Ethernet (FCoE).

HP is also using QLogic’s new Bullet switching ASIC on its Virtual Connect FlexFabric 24-port 10-GigE BladeSystem module. QLogic’s Bullet supports Ethernet, Fibre Channel and iSCSI, and lets administrators change protocol support on the fly.

While Brocade isn’t part of HP’s FlexFabric yet, HP launched several 8 Gbps Fibre Channel products from Brocade today. They include the 804 FC HBA mezzanine card for the HP c-Class BladeSystem, the HP StorageWorks P2000 G3 MSA Virtualization SAN starter kit with six HBAs and two FC switches from Brocade, and the 64-port StorageWorks B-series Data Center SAN Director Blade. The Director Blade is based on a DCX Backbone blade Brocade launched earlier this month.

HP also added flash SSD products from Fusion-io and Samsung to its ProLiant servers. HP extended its OEM deal with Fusion-io, adding IO Accelerators for ProLiant DL and ML servers. The new PCIe form factor IO Accelerator will be available with ProLiant servers in 160 GB and 320 GB SLC capacities and 320 GB and 640 GB MLC capacities. The Fusion-io IO Accelerator was previously available for ProLiant server blades.

HP is also selling Samsung Enterprise SSDs with the ProLiant G6 and G7 servers. The Samsung SLC SSDs scale to 512 GB.


June 17, 2010  2:27 PM

FalconStor signs up Hitachi Data Systems as dedupe partner



Posted by: Dave Raffo
data deduplication, primary data deduplication, virtual tape library

FalconStor made its virtual tape library and data deduplication partnership with Hitachi Data Systems official today, disclosing that HDS will resell FalconStor’s VTL with dedupe and its File-interface Deduplication Software (FDS) integrated with the HDS Adaptable Modular Storage (AMS) 2000 platform.

During their last earnings report conference call in April, FalconStor execs hinted that they were working on partnerships with HDS. They didn’t disclose what products were involved, but there were rumblings around the industry that HDS had agreed to sell FalconStor’s FDS dedupe either through an OEM or reseller deal.

The reseller arrangement means HDS will sell the FalconStor products under the FalconStor brand rather than the HDS brand.

Nexsan and SpectraLogic also resell FalconStor deduplication software, but HDS is now the largest FalconStor dedupe partner as the software vendor looks to replace revenue lost from EMC and Sun over the past year. EMC sells a lot less FalconStor VTL software now that it has Data Domain deduplication boxes in its portfolio. After buying Sun, Oracle ended Sun’s reseller arrangement for FalconStor VTL and dedupe software.

“We have a very tight relationship with HDS now,” FalconStor marketing VP Fadi Albatal said. “There’s a lot of collaboration between the two companies.”

But FalconStor’s collaborator was strangely silent for this announcement. There were no HDS executives quoted in the press release, and requests I made to HDS for comment over the last two days went unanswered. The HDS deduplication strategy remains unclear. It sells CommVault’s backup software with dedupe through an OEM deal, and has a reseller deal for Diligent ProtecTier VTL and dedupe software dating to before IBM acquired Diligent in 2008. Sepaton uses HDS hardware as the backend storage for its VTLs with dedupe. Sepaton execs claim HDS sales people have financial incentives to sell Sepaton’s VTLs, but HDS hasn’t confirmed that.

If HDS has a preferred dedupe partner among those options, it isn’t saying.

Meanwhile, Albatal says FalconStor is considering extending its dedupe capabilities to primary storage. “We have the building blocks,” he said. “Primary deduplication has to be a post-process method, which is the nature of our solution. We won’t have something in the near future, but it’s something we will be looking at.”
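
Post-process deduplication means the data lands on disk in full first and a later sweep removes the duplicates, rather than deduplicating inline on the write path. A minimal Python sketch of the idea follows; it is a toy illustration of the technique, not FalconStor’s implementation:

    import hashlib

    def dedupe_sweep(blocks):
        """Toy post-process deduplication pass over already-written blocks.

        A toy illustration of the technique, not FalconStor's code.
        Returns the unique blocks plus a per-block reference list.
        """
        store = {}   # fingerprint -> unique block data
        refs = []    # one fingerprint reference per original block
        for block in blocks:
            fp = hashlib.sha256(block).hexdigest()
            if fp not in store:
                store[fp] = block    # first occurrence: keep the data
            refs.append(fp)          # every block becomes a reference
        return store, refs

    # Three blocks, two identical: only two unique blocks remain stored.
    store, refs = dedupe_sweep([b"aaaa", b"bbbb", b"aaaa"])
    assert len(store) == 2 and refs[0] == refs[2]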


June 16, 2010  1:55 PM

Gear6 caches out; Violin Memory scoops up assets



Posted by: Dave Raffo
NAS, solid state storage, storage vendors

Flash memory appliance vendor Violin Memory today said it acquired the assets of failed caching startup Gear6 for an undisclosed price, and plans to add Gear6’s NAS and Memcached software to Violin arrays.

Gear6 sold its NAS product on large appliances – the smallest was an 11U device that cost $150,000 when it launched two years ago, and larger systems cost more than twice as much. Violin’s 3U arrays range from $30,000 for 700 GB to $200,000 for 10 TB of single-level cell (SLC) solid state capacity.

Violin Memory CEO Don Basile said Gear6 and Violin both set out to eliminate I/O bottlenecks in the data center.

“Instead of using a Gear6-type appliance, we’ll bring the software of both solutions on top of Violin devices,” he said.

Basile said the NAS caching product is the more interesting of the two Gear6 offerings, adding “we need a little more study on Memcached,” a web caching product based on open source software.

“The NFS piece aligns with Violin’s mission and its vision for the data center evolution,” he said. “Violin’s array is far denser and far faster than what Gear6 was able to do. We’ll take expensive complicated hardware from Gear6 and make it more scalable. We can extend our footprint by offering NFS caching in front of NAS devices, and solve performance problems without people needing to replace their NAS infrastructure.”
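
In other words, the plan is a read-through cache in front of existing filers: reads are answered from the appliance when possible and passed through to the NAS only on a miss, so the back-end filers stay where they are. A rough Python sketch of that pattern follows; it is illustrative only, not Gear6’s design, and the NFS client call is a hypothetical stand-in:

    class ReadThroughNASCache:
        """Toy read-through cache sitting in front of a NAS filer.

        Illustrative only, not Gear6's design. 'fetch_from_nas' stands in
        for a real NFS client read against the back-end filer.
        """

        def __init__(self, fetch_from_nas):
            self.fetch_from_nas = fetch_from_nas   # (path, offset, size) -> bytes
            self.cache = {}                        # (path, offset, size) -> bytes

        def read(self, path, offset, size):
            key = (path, offset, size)
            if key not in self.cache:              # miss: one trip to the filer
                self.cache[key] = self.fetch_from_nas(path, offset, size)
            return self.cache[key]                 # hit: served from cache memory

        def invalidate(self, path):
            """Drop cached ranges for a file once a write reaches the filer."""
            self.cache = {k: v for k, v in self.cache.items() if k[0] != path}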

Violin last month launched the Violin 3200 SSD with plans to eventually scale it to 100 TB. The 3200 holds 84 SLC memory modules of 128 GB each for 10 TB of total capacity, and can have up to 500 GB of RAM cache.

Gear6 filed to liquidate its assets earlier this year after burning through $24 million in venture funding and failing to get more. Basile said Violin bought its technology and patents, and will hire some Gear6 engineers. Violin identified at least 30 Gear6 customers, and Basile suspects there could be as many as 60. He said Violin is “sorting through contracts” to determine its support obligation. He says some of those Gear6 customers are also using Violin products.

Analyst Greg Schulz of StorageIO says Gear6 likely aimed too high with its products and ignored the mainstream NAS market.

“Gear6 was trying to create a market, but I think they focused on higher-end customers as opposed to making it more viable for general purpose NAS,” he said. “They were also going after read-intensive NFS-type environments that may be looking to use deduplication as opposed to an accelerator.”

Schulz says the Gear6 technology will expand the Violin offering.

“They’re adding additional personality to their solid state system,” he said. “The real secret to Gear6 was its caching algorithm and its ability to support files. Now Violin has a NAS solution.”


June 14, 2010  8:21 PM

Let the primary reduction deals begin



Posted by: Dave Raffo
primary data reduction

IBM is looking to grab primary data reduction vendor Storwize for $140 million, according to Israeli financial news websites Globes and TheMarker. Whether that deal comes off or not, you can expect a series of either OEM deals or outright acquisitions involving large storage vendors and suppliers of primary reduction technology – which now includes Permabit Technology, Ocarina Networks and Storwize.

Permabit and Ocarina each say they have one large OEM primary deduplication deal nailed down and are working with more storage vendors to secure others. Dell, Hewlett-Packard, Hitachi Data Systems, IBM, and LSI are all believed to be on the prowl for the technology and it’s a matter of whether they will forge OEM deals or acquire the technology outright. NetApp and EMC already have primary reduction capabilities.

Ocarina has been the subject of acquisition rumors, and Ocarina director of marketing Mike Davis says an IBM-Storwize deal would raise his company’s value.

“There’s demand for this technology, and we’ve had contact and serious conversations with all the OEMs out there,” Davis said. “We don’t know how serious IBM’s interest is [in Storwize], but Storwize only does a subset of what Ocarina does. So if they’re worth $140 million, Ocarina should be worth even more than that.”


June 11, 2010  5:27 PM

Moving On



Posted by: Beth Pariseau
Around the water cooler

Imagine if the development of the Internet had stopped here…


…or if 1 GB of disk drive space had never developed beyond this.

Individual careers in the fast-paced world of technology are just as subject to change.

Five years ago, I was working two jobs, a typical 40-hour-a-week office job by day and a ‘stringer’ position for a local newspaper by night. I had graduated from college into what was then the worst job market since the Great Depression (as we all know, sadly, it has since been surpassed).

I knew I wanted to go into journalism, but couldn’t find a way in the door at traditional “dead tree” media publications. Sometime in the early spring of 2005, it dawned on me that the stringer position was aptly named — I was being strung along, but a full-time job on the paper’s staff was probably not in the cards.

That’s when I entered the word “writer” into a job search site, and a job description caught my eye for a “News Writer” position being offered at a company I’d never heard of before. I did the customary interviews, and was then hired on to the best job I’ve ever had.

It hasn’t just been technology I’ve gotten the chance to learn while working as a news writer for the Storage Media Group at TechTarget. It’s also given me an opportunity that’s becoming increasingly rare in today’s world: the chance to gain irreplaceable experience as a young writer covering a daily beat. TechTarget hired me with virtually no technology experience, and gave me the opportunity to establish a career for myself.

That’s why I’m happy to say that as I move to the next step in that career, it will still be as a TechTarget employee, in the Data Center and Server Virtualization Group.

Part of the “reporter’s personality” is being a naturally curious person. The need to know, to find out, to keep learning more, is deeply ingrained in a mind that is journalistically inclined. Which is why, though I have grown quite comfortable in the storage industry in the last five years, it has come time for me to begin broadening my technical expertise again.

I’m a nerd from way back. The only activity I’ve enjoyed more in my life than writing was being a student. Now, I want to understand other facets of the enterprise IT infrastructure. I want to keep learning, keep growing, the same way the technologies we all work with continually develop and advance. An opportunity to do just that, while remaining with a company that can provide me with the means to develop as a journalist and learn on the job, as TechTarget can, is an opportunity that’s just too good to pass up, just as it was when I first joined this company half a decade ago.

We’ve all seen how way leads on to way in enterprise IT, how paths cross, companies and technologies integrate and consolidate — and thus I anticipate remaining in touch with the brilliant people I’ve had the good fortune to get to know while covering this market. Data storage remains a crucial aspect of the evolution of digital technologies in our modern age, and that’s one thing I don’t see changing.


June 8, 2010  8:11 PM

Riverbed: cloud data storage products will come ‘from the ashes’ of Atlas



Posted by: Beth Pariseau
Cloud storage

I had a brief conversation last week with Ed Chapman, Riverbed’s VP of cloud storage acceleration products, hired away from Cisco in May. Chapman (and senior vice president of marketing and business development Eric Wolford, who chimed in frequently) stopped short of divulging much detail on the planned Cloud Storage Accelerator product, but did offer some new information about its origins…

What are the goals for Riverbed’s cloud business this year?

Wolford: We haven’t launched yet, but we told our user group about the Cloud Storage Accelerator that Ed is going to head up getting to market. It’s a bit of a phoenix from the ashes of Atlas in part, and Steelhead in part, and some new development, in part. I don’t want you to think it’s the same product [as Atlas] – we took key components from that product and put them into the Cloud Storage Accelerator, to sit in the data center and accelerate access to AT&T Synaptic Storage, Amazon S3, etc.

While Riverbed’s working on its cloud product, other vendors like Nasuni, TwinStrata and StorSimple are already out in the market selling cloud storage gateway appliances to interface securely – and in some cases with deduplication features – with the cloud. How does Riverbed plan to differentiate its cloud product against those offerings?

Chapman: We are not going after the same market segments they’re going after. We’re going after a focused market segment that we think is more applicable in the marketplace. Just to give you a viewpoint of what customers are looking for in general, if you look at the application of new storage technologies in the market, it sort of follows a hierarchy…[users say] ‘maybe I’ll use this for backup’… then archive…then they look at all the rest, including primary storage. What we’ve heard from our customers that we’ve spoken to, they want to utilize cloud storage infrastructure in the same sort of mechanism, backup and archive and then going down the rest of the hierarchy from a storage perspective. Our goal looking at the marketplace is to leverage things our customers will want to leverage and utilize first, along the parameters of backup and archive rather than primary storage filer replacement.

Doesn’t Riverbed already allow replication to cloud services for DR? How is this different from that?

Wolford: Well…we’re just going to have to wait.

Similarly, EMC and other vendors are working on systems like VPlex and Atmos, which they claim can replicate data at scale among data centers, with little mention of WAN optimization technology as a necessary component of the infrastructure. Do those products represent a threat to Riverbed’s market? What would your role be in that environment?

Wolford: We’ll help them. The parallel I would make is that when cloud first came out, nobody mentioned WAN optimization. Nobody mentioned, ‘we have this great product but unfortunately, it has this problem.’ Any time users have distance between them and their data they have a problem. There’s been a trend toward consolidation in remote offices and data centers – the cloud is a variant of that reality. The more that reality occurs, we are just lovin’ it because it spotlights the performance problem.

Chapman: EMC has been selling SRDF and SRDF/A and never said WAN optimization is needed, but we just won EMC Select Channel Partner of the Year because we could be used as the primary WAN optimization tool with SRDF/A. So my point there is while EMC talks and launches VPlex and talks about distributed caching — and I think it’s a fabulous technology — that doesn’t mean we’re not going to be able to add a lot of value to it the way we have with SRDF, SRDF/A and other technology used for replication.


June 7, 2010  10:22 PM

Emulex marries its 10-GigE partner; acquires ServerEngines



Posted by: Dave Raffo
storage networking, storage vendors

Emulex today agreed to acquire its 10-Gigabit Ethernet silicon partner ServerEngines for $78 million in cash plus eight million shares of Emulex stock to be issued at closing, which could bring the total price to around $160 million.

The deal is expected to close next month. The Emulex shares would be worth $81 million using Friday’s closing price of $10.11.

Emulex has sold ServerEngines silicon in its OneConnect Universal Converged Network Adapters (UCNAs) for the past two years as part of an OEM and joint development deal between the two companies. ServerEngines contributed the Ethernet and iSCSI portions of the ASICs for the converged adapters.

Emulex CEO Jim McCluney says ServerEngines gave Emulex a fast path to combining Ethernet with its own Fibre Channel stack, and now that the UCNAs are coming to market it makes sense to bring the technology in-house. That gives Emulex more control of the technology.

“We saw this as a new game where the old rules didn’t apply,” McCluney said of converged networking. “Instead of repurposing our Fibre Channel ASICs, we wanted to do something unique. We went out and found a best-of-breed 10-gigabit ASIC to combine with our own Fibre Channel stack. We felt the time was right to take things to the next level.”

ServerEngines has two main product families — the BladeEngine 10-GigE ASICs that Emulex uses in OneConnect, and the Pilot family of server management controllers sold by Cisco, Hewlett-Packard, NEC, and Unisys. McCluney said the Pilot products will bring Emulex around $4 million in revenues each quarter.

ServerEngines has about 170 employees, mostly engineers based in Sunnyvale, CA, Austin, TX, and Hyderabad, India.

