Fusion-io came out of stealth today with a PCIe flash card designed to give off-the-shelf servers SAN-like performance.
Fusion-io calls its product the ioDrive, and it’s NAND-based storage that comes in 80 Gbyte, 160 Gbyte and 320 Gbyte configurations. Fusion-io CTO David Flynn says the startup will have a 640 Gbyte card later this year. The ioDrive fits in a standard PCI Express slot, shows up to an operating system as traditional storage and can be enabled as virtual swap space.
Flynn said the ioDrive’s access rates are more comparable to DRAM than to traditional flash memory.
“This is an IO drive, we do not consider it to be a solid state disk,” Flynn said. “It does not pretend to be a disk drive. It does not sit behind SATA or a SCSI bus talking SATA or SCSI protocol to a RAID controller. It sits directly on the arteries of a system.”
Fusion-io bills its card as high-performance DAS that can reduce the need for more expensive SAN equipment. Fusion-io prices the drives at $2,400 for 80 Gbytes, $4,800 for 160 Gbytes and $8,900 for 320 Gbytes.
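As a quick sanity check on those list prices, here’s the cost per gigabyte for each configuration, computed from the figures quoted above (all three tiers land in roughly the same $28–$30/GB range):

```python
# Cost per GB for each ioDrive configuration, using only the
# list prices and capacities quoted in the article.
prices = {80: 2400, 160: 4800, 320: 8900}  # capacity (GB) -> price (USD)

for capacity_gb, price in prices.items():
    print(f"{capacity_gb} GB: ${price / capacity_gb:.2f}/GB")
```

The 320 Gbyte card is the only tier with a (slight) volume discount; the 80 and 160 Gbyte cards price out identically per gigabyte.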
“Dropped into commodity off the shelf server, you have something that can outperform big iron,” Flynn said.
Not even the Fusion-io execs see their cards as SAN competitors, though. If it finds a place in storage, it will be as a way to run applications that require high performance — such as transactional databases or digital media – on servers that aren’t attached to SANs.
“It’s a way of extending the life of servers with direct attached storage,” said analyst Deni Connor of Storage Strategies Now. “I don’t see it as a replacement for Fibre Channel SANs, but it may prevent companies from going to Fibre Channel SANs as quickly.”
Last August I wrote about Swiss research facility CERN and its plan to store petabytes of data from its Large Hadron Collider (LHC) device on commodity NAS and in tape silos for scalability and cost savings. A month ago, it came to my attention that some people thought the collection of that data beginning in May might create a black hole that will eat the Earth.
Anybody who’s ever been exposed to TimeCube will know that just because people are shouting about something scientific on the Internet doesn’t make it solid science or make them experts. So my first post on the black hole issue was tongue in cheek–and it still all seems far-fetched (which is what CERN apparently wants us to believe…cue spooky music).
But since that post, more people with a bit more gravitas have come forward with black hole concerns. Such as a Scientific American blogger who commented on my original post. And former U.S. nuclear safety officer Walter Wagner, who according to MSNBC has filed a lawsuit in Hawaii against CERN to stop the LHC.
There’s one puzzling element of the story about the lawsuit for me: the MSNBC writer says conferences on the suit are scheduled for June 16. In part, the suit seeks a temporary restraining order to keep CERN from turning on LHC until everybody’s satisfied it’s not going to bring about Armageddon. But last I knew, LHC was supposed to start up in May, making that hearing on the restraining order about a month too late if something disastrous does happen…
P.S. Speaking of lawsuits (or, at least, potential lawsuits), I got a very interesting followup call to my story on Atrato this week from a man who declined to tell me who exactly he is or why he’s interested, but who claims not to have been able to find evidence of the more than 100 patents Atrato claims for its Self-managing Array of Idle Disks. (An Atrato spokesperson sent a link to a Google search page when asked for a list of the patents.)
One thing this followup caller did happen to mention to me is that he’s an attorney in Minnesota. The light bulb went on…there’s another Minnesota-based company that has been rumored to be working on a product very similar to Atrato’s.
Could just be coincidence, though.
Although NetApp fired the first volley in its ZFS lawsuit against Sun Microsystems, Sun has been the aggressor since NetApp’s initial strike. Following NetApp’s lawsuit last September charging that Sun violated several of its patents regarding ZFS, Sun countersued and accused NetApp of violating Sun’s patents. Sun has also asked the U.S. Patents Office to re-examine several NetApp patents.
Sun filed yet another lawsuit Wednesday, alleging patent infringement related to storage management technology NetApp acquired when it bought Onaro in January.
“As NetApp attempts to extend its product line, it also expands its exposure to Sun patents,” Dana Lengkeek of Sun’s Corporate Communications office wrote in an emailed statement.
The latest lawsuit, filed in U.S. District Court in the northern district of California, claims that software NetApp gained from Onaro uses Sun’s patented technology. Sun seeks compensation from NetApp for patent infringement and an injunction preventing NetApp from using Sun’s technology.
Sun also revealed the U.S. Patent Office granted its request to re-examine NetApp’s patent related to its “copy on write” technology.
But perhaps the harshest accusation Sun leveled against NetApp in its latest filing came in the opening paragraph of the suit. Chiding NetApp for only spending about $390 million on research and development last year and for holding “only approximately 200” patents, Sun declared: “Indeed, rather than innovate, NetApp builds on the innovation of others” and “NetApp … uses extensive amounts of open source code developed by others, without contributing any innovation of its own.”
Instead of demanding money if it wins the suit, maybe Sun should request that NetApp change its already-taken slogan “Go Further, Faster” to “NetApp: Built on others’ innovation.”
NetApp responded to the latest suit with a terse: “NetApp does not comment on ongoing litigation.”
As a fanatical Red Sox fan and a storage reporter, the whole EMC-logo-on-Red-Sox-uniforms thing has been a matter of some, er, ambivalence for me. It’s also been the source of some trash talk between me and EMC acquaintances, one of whom–a Yankees fan–keeps threatening to send me one of the defiled jerseys. To which I reply I’ll be ready with a seam ripper suitable for removing the patch on the sleeve. To which my father, who raised me a Red Sox fan, replied that I would be an idiot for not keeping it as a collector’s item. But anyway.
Meanwhile, since the EMC logo was slapped on the Olde Towne Team for the Japan trip (and for the Japan trip ONLY, they promise us, but we’ll see), Joe Tucci took a jaunt to Japan with the team and hobnobbed with the players at a gala reception last week. A gala reception at which Globe Red Sox columnist Dan Shaughnessy was also present, and witnessed the following, as reported in his column today:
Highlight of the trip, hands down, was EMC CEO Joe Tucci having a catch with Hideki Okajima at a fancy reception at the Sox’ New Otani Hotel headquarters Monday. While 2007 World Series clips were shown on a Green Monster-sized LED screen, assorted clients and dignitaries – most of them Japanese – feasted on sushi and fine wines. After a few speeches and interviews with Mike Lowell, Dustin Pedroia, Kevin Youkilis, and Terry Francona, a couple of fielding mitts were produced and Tucci lined up to play catch with the Sox’ second-most-famous Japanese hurler. Standing in front of the giant screen, Okajima softly tossed to Tucci, who was about 20 feet away. Tucci made the catch, and before you could say, “Nuke LaLoosh,” gunned a wild heater that sailed far high and wide of a sprawling Okajima and punctured the precious LED screen. I will never look at the EMC logo (which was on the Sox uniforms for the Japan games) without thinking of this.
Was that karmic payback for Tucci — a Yankees fan who dismayed Sox purists everywhere? Not for me to say. But I would have killed to be a fly on the wall–especially if I could have been a fly on the wall with a camera.
The UK’s Channel Register broke the story yesterday that NetApp’s new slogan, ‘Go Further, Faster,’ is kind of, um, already taken. By, er, Halliburton.
Eh, no worries. Not like that company is really well-known or well-connected or anything.
The Register weighs the two slogans:
On one hand, according to the Halliburton recruitment video, the company makes a habit of going further, faster every god damn day. That’s consistency. On the other, NetApp’s video has a 4/5 star rating on YouTube.
“Very cool!” says a commenter who we are sure is not an employee of NetApp. “Awesome,” echoes another completely random observer.
(When vendors get all nitpicky with me, I wonder how they even deal with The Register, or if they just pretend it doesn’t exist, since it’s across the pond anyway.)
And of course you know EMC bloggers are jumping up and down and singing happy tunes about this little gaffe.
Barry Murphy, formerly of Forrester Research, has been named the new director of product marketing for Mimosa, tasked with “expanding the company’s eDiscovery and content management partner ecosystem and broadening awareness for and adoption of Mimosa Systems’ award-winning content archiving platform.”
The cynically inclined might say he already did a similar thing with his last major act as a Forrester analyst, the publication of two reports on message archiving products. The reports concluded that on-premise software archives (such as Mimosa’s) are gaining more traction and are more mature in their features than hosted archiving offerings.
I don’t really believe this was anything other than coincidence–the research for such a report goes on for months and the report was obviously started well in advance of this transition. It makes sense that an analyst whose expertise was in records management and archiving would go to a vendor in that sector of the market. But sometimes the appearance of a conflict of interest can be as problematic as an actual conflict of interest. At the least, from my perspective, it’s unfortunate timing.
Murphy joins Tony Asaro, who recently resurfaced as chief strategy officer for Virtual Iron after a short stint with Dell, as the most recent storage analysts to head to vendors. It has been suggested to me that most analysts wind up at vendors or doing consulting, so maybe this is a natural lifecycle we’re seeing.
Speaking of defections, it has also been announced that Dr. David Yen has left Sun for Juniper. Yen formerly headed Sun’s storage group and was shifted to the chip group following the restructuring of the storage and server groups under John Fowler last year.
Ever since I started covering storage, I’ve been hearing the disk vs. tape debate, usually including proclamations that tape is dead or dying.
There are good reasons to make that assertion. Disk-based backup is catching on, particularly among SMBs, and data deduplication is evening out the cost-per-GB numbers between disk and tape for many midrange applications. Disk is preferable to tape in many ways, especially because it allows faster restore times for backup and archival data. Once again, people are starting to ask, what’s the point of using tape? Dell/EqualLogic’s Marc Farley posted a funny video on his blog to illustrate the question on Friday.
I’m not so sure we’ll ever really see the end of tape. When it comes to the high end, there’s simply too much data to keep on spinning disk. The cost of disk is often still higher per GB, depending on the type of disk and the type of application accessing it. And that doesn’t include power and cooling costs.
I’ve also heard lots of good reasons to give up tape. And maybe in certain markets, like SMBs, tape will die — if it hasn’t already. But whenever tape is on the ropes, another trend comes along to boost it back into relevance. When disk took over backup, the data archiving trend kicked in, and tape’s savings in power and cooling and its shelf life for long-term data preservation came to the fore. Now, as data dedupe has disk systems vendors pitching their products for archive, too, along comes “green IT” to buoy tape.
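The “dedupe evens out the cost-per-GB numbers” argument above is easy to see with a little arithmetic. The sketch below is purely illustrative; the dollar figures and the 10:1 dedupe ratio are assumptions for the example, not vendor pricing:

```python
# Hypothetical illustration of how deduplication shifts the
# cost-per-GB comparison between disk and tape. All dollar
# figures and ratios are made-up examples, not real pricing.

def effective_cost_per_gb(raw_cost_per_gb, dedupe_ratio=1.0):
    """Cost per GB of logical (pre-dedupe) data stored."""
    return raw_cost_per_gb / dedupe_ratio

disk_raw = 3.00   # assumed $/GB for backup-class disk
tape_raw = 0.50   # assumed $/GB for tape media (no dedupe)

# With a 10:1 dedupe ratio, disk's effective cost per logical GB
# drops below tape's raw media cost in this example.
print(effective_cost_per_gb(disk_raw, dedupe_ratio=10))  # disk, deduped
print(effective_cost_per_gb(tape_raw))                   # tape, as-is
```

Of course, as noted above, media cost is only part of the picture: power, cooling, and restore times pull in different directions for disk and tape, which is why the debate never quite ends.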
Now, I’d like to ask the same questions Farley did, because I’m just as curious to know, and because he and I may have different audiences with different opinions. Do you think tape is dead? If not, what do you use it for? Let us know the amount of data you’re managing in your shop as well.
I love listening to NPR. I listen to, watch and read many news sources, but I find the stories they choose and the nuances they bring to their reporting refreshing. I was listening to NPR this morning when a very rare thing happened: I heard an interview with someone I’ve interviewed myself. It’s not often that IT industry news makes a mainstream general-purpose broadcast, so I paid close attention.
The pundit in question was Rob Enderle, a technology analyst I interviewed last month when EMC acquired Pi. After hearing his brief comments on the current state of the US economy and how he predicts it will affect technology innovation in Silicon Valley, I called him up myself and dug a little deeper into the matter with him.
Not all storage startups either went public or got acquired for big bucks over the past two years. Mendocino Software sold little of its continuous data protection (CDP) software and found no takers for its intellectual property, so Wednesday it sold whatever was left at auction.
Mendocino did have five customers through an OEM deal with Hewlett-Packard, which rebranded Mendocino’s product as HP StorageWorks CIC.
According to an email HP sent to SearchStorage.com today, “HP has assigned a task force and is working closely with each of its five HP CIC customers to understand their specific information availability requirements and to determine an appropriate plan of action.”
According to the email, HP is offering to switch CIC customers to HP Data Protector at no charge for the software and installation, and will transfer CIC support contracts to Data Protector.
Last week, I blogged about discussions I’ve recently had with NetApp and NetApp customers about the company’s messaging and products. One of the focal points of the debate was what users understood about best practices for overhead on FC LUN snapshots. A couple of users I’d talked to prior to reporting on NetApp’s analyst day event said NetApp best practices dictate at least 100% overhead on FC LUNs, but that NetApp salespeople tell them a different story before the sale.
However, when I followed up with NetApp, officials told me in no uncertain terms that their most current best practices for FC LUNs dictate the same snapshot overhead as any other type of data: 20%.
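The practical difference between the two overhead figures in dispute is substantial. A quick illustration, using a hypothetical 500 GB LUN (the LUN size is my assumption, not a figure from NetApp):

```python
# Hypothetical snapshot-reserve arithmetic for an FC LUN,
# contrasting the two overhead figures discussed above.
# The 500 GB LUN size is an assumed example.

lun_gb = 500

reserve_20 = lun_gb * 0.20    # NetApp's stated current best practice
reserve_100 = lun_gb * 1.00   # the overhead some customers cited

print(f"20% overhead:  {reserve_20:.0f} GB reserved")
print(f"100% overhead: {reserve_100:.0f} GB reserved")
```

At 100% overhead, half of every provisioned volume goes to snapshot reserve; at 20%, that shrinks to a sixth. It’s easy to see why customers care which figure sales teams quote before the purchase.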
After posting on this, I got another response from a NetApp customer disputing those statements that seems worthy of adding to the discussion. Here’s the message verbatim: