Last August I wrote about the European particle physics facility CERN and its plan to store petabytes of data from its Large Hadron Collider (LHC) on commodity NAS and in tape silos for scalability and cost savings. A month ago, it came to my attention that some people thought the data collection beginning in May might create a black hole that will eat the Earth.
Anybody who’s ever been exposed to TimeCube will know that just because people are shouting about something scientific on the Internet doesn’t make it solid science or make them experts. So my first post on the black hole issue was tongue in cheek–and it still all seems far-fetched (which is what CERN apparently wants us to believe…cue spooky music).
But since that post, people with a bit more gravitas have come forward with black hole concerns. Such as a Scientific American blogger who commented on my original post. And former U.S. nuclear safety officer Walter Wagner, who according to MSNBC has filed a lawsuit in Hawaii seeking to stop CERN from turning on the LHC.
One element of the lawsuit story puzzles me: the MSNBC writer says conferences on the suit are scheduled for June 16. In part, the suit seeks a temporary restraining order to keep CERN from turning on the LHC until everybody’s satisfied it’s not going to bring about Armageddon. But last I knew, the LHC was supposed to start up in May, which would make that hearing on the restraining order about a month too late if something disastrous does happen…
P.S. Speaking of lawsuits (or at least potential lawsuits), I got a very interesting follow-up call this week about my story on Atrato, from a man who declined to tell me exactly who he is or why he’s interested, but who claims he has been unable to find evidence of the more than 100 patents Atrato claims for its Self-managing Array of Idle Disks. (An Atrato spokesperson sent a link to a Google search page when asked for a list of the patents.)
One thing this follow-up caller did mention is that he’s an attorney in Minnesota. The light bulb went on… there’s another Minnesota-based company that has been rumored to be working on a product very similar to Atrato’s.
Could just be coincidence, though.
Although NetApp fired the first volley in its ZFS lawsuit against Sun Microsystems, Sun has been the aggressor since NetApp’s initial strike. Following NetApp’s lawsuit last September charging that Sun violated several of its patents regarding ZFS, Sun countersued and accused NetApp of violating Sun’s patents. Sun has also asked the U.S. Patent Office to re-examine several NetApp patents.
Sun filed yet another lawsuit Wednesday, alleging patent infringement related to storage management technology NetApp acquired when it bought Onaro in January.
“As NetApp attempts to extend its product line, it also expands its exposure to Sun patents,” Dana Lengkeek of Sun’s Corporate Communications office wrote in an emailed statement.
The latest lawsuit, filed in U.S. District Court for the Northern District of California, claims that software NetApp gained from Onaro uses Sun’s patented technology. Sun seeks compensation from NetApp for patent infringement and an injunction preventing NetApp from using Sun’s technology.
Sun also revealed the U.S. Patent Office granted its request to re-examine NetApp’s patent related to its “copy on write” technology.
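For readers who haven’t followed the technical side of the fight, copy-on-write is the general technique in which a write never overwrites a data block in place; it allocates a fresh block instead, which is what makes snapshots nearly free. Here’s a minimal, generic sketch of the idea in Python (illustrative only, not how NetApp or Sun actually implements it; all class and method names are mine):

```python
# A minimal, illustrative copy-on-write sketch -- not any vendor's real code.
# Blocks are never overwritten in place; a write allocates a new block, so
# old snapshots keep pointing at the old data for free.

class CowVolume:
    def __init__(self):
        self.blocks = {}     # block_id -> data (immutable once written)
        self.active = {}     # logical block number -> block_id
        self.snapshots = []  # frozen {lbn: block_id} maps
        self._next_id = 0

    def write(self, lbn, data):
        # Copy-on-write: never touch the old block; allocate a new one.
        self.blocks[self._next_id] = data
        self.active[lbn] = self._next_id
        self._next_id += 1

    def snapshot(self):
        # A snapshot is just a frozen copy of the block map -- cheap,
        # because no data blocks are copied.
        self.snapshots.append(dict(self.active))
        return len(self.snapshots) - 1

    def read(self, lbn, snap=None):
        table = self.active if snap is None else self.snapshots[snap]
        return self.blocks[table[lbn]]

vol = CowVolume()
vol.write(0, b"v1")
s = vol.snapshot()
vol.write(0, b"v2")
print(vol.read(0))          # b'v2' -- live data
print(vol.read(0, snap=s))  # b'v1' -- the snapshot still sees the old block
```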
But perhaps the harshest accusation Sun leveled against NetApp in its latest filing came in the opening paragraph of the suit. Chiding NetApp for spending only about $390 million on research and development last year and for holding “only approximately 200” patents, Sun declared: “Indeed, rather than innovate, NetApp builds on the innovation of others” and “NetApp … uses extensive amounts of open source code developed by others, without contributing any innovation of its own.”
Instead of demanding money if it wins the suit, maybe Sun should request that NetApp change its already-taken slogan “Go Further, Faster” to “NetApp: Built on others’ innovation.”
NetApp responded to the latest suit with a terse: “NetApp does not comment on ongoing litigation.”
As a fanatical Red Sox fan and a storage reporter, the whole EMC-logo-on-Red-Sox-uniforms thing has been a matter of some, er, ambivalence for me. It’s also been the source of some trash talk between me and EMC acquaintances, one of whom–a Yankees fan–keeps threatening to send me one of the defiled jerseys. To which I reply I’ll be ready with a seam ripper suitable for removing the patch on the sleeve. To which my father, who raised me a Red Sox fan, replied that I would be an idiot for not keeping it as a collector’s item. But anyway.
Meanwhile, since the EMC logo was slapped on the Olde Towne Team for the Japan trip (and for the Japan trip ONLY, they promise us, but we’ll see), Joe Tucci took a jaunt to Japan with the team and hobnobbed with the players at a gala reception last week. A gala reception at which Globe Red Sox columnist Dan Shaughnessy was also present, and witnessed the following, as reported in his column today:
Highlight of the trip, hands down, was EMC CEO Joe Tucci having a catch with Hideki Okajima at a fancy reception at the Sox’ New Otani Hotel headquarters Monday. While 2007 World Series clips were shown on a Green Monster-sized LED screen, assorted clients and dignitaries – most of them Japanese – feasted on sushi and fine wines. After a few speeches and interviews with Mike Lowell, Dustin Pedroia, Kevin Youkilis, and Terry Francona, a couple of fielding mitts were produced and Tucci lined up to play catch with the Sox’ second-most-famous Japanese hurler. Standing in front of the giant screen, Okajima softly tossed to Tucci, who was about 20 feet away. Tucci made the catch, and before you could say, “Nuke LaLoosh,” gunned a wild heater that sailed far high and wide of a sprawling Okajima and punctured the precious LED screen. I will never look at the EMC logo (which was on the Sox uniforms for the Japan games) without thinking of this.
Was that karmic payback for Tucci — a Yankees fan who dismayed Sox purists everywhere? Not for me to say. But I would have killed to be a fly on the wall–especially if I could have been a fly on the wall with a camera.
The UK’s Channel Register broke the story yesterday that NetApp’s new slogan, ‘Go Further, Faster,’ is kind of, um, already taken. By, er, Halliburton.
Eh, no worries. Not like that company is really well-known or well-connected or anything.
The Register weighs the two slogans:
On one hand, according to the Halliburton recruitment video, the company makes a habit of going further, faster every god damn day. That’s consistency. On the other, NetApp’s video has a 4/5 star rating on YouTube.
“Very cool!” says a commenter who we are sure is not an employee of NetApp. “Awesome,” echoes another completely random observer.
(When vendors get all nitpicky with me, I wonder how they even deal with The Register, or if they just pretend it doesn’t exist, since it’s across the pond anyway.)
And of course you know EMC bloggers are jumping up and down and singing happy tunes about this little gaffe.
Barry Murphy, formerly of Forrester Research, has been named the new director of product marketing for Mimosa, tasked with “expanding the company’s eDiscovery and content management partner ecosystem and broadening awareness for and adoption of Mimosa Systems’ award-winning content archiving platform.”
The cynically inclined might say he already did something similar with his last major act as a Forrester analyst: the publication of two reports on message archiving products. The reports concluded that on-premises software archives (such as Mimosa’s) are gaining more traction and are more mature in their features than hosted archiving offerings.
I don’t really believe this was anything other than coincidence–the research for such a report goes on for months and the report was obviously started well in advance of this transition. It makes sense that an analyst whose expertise was in records management and archiving would go to a vendor in that sector of the market. But sometimes the appearance of a conflict of interest can be as problematic as an actual conflict of interest. At the least, from my perspective, it’s unfortunate timing.
Murphy joins Tony Asaro, who recently resurfaced as chief strategy officer for Virtual Iron after a short stint with Dell, as the most recent storage analysts to head to vendors. It has been suggested to me that most analysts wind up at vendors or doing consulting, so maybe this is a natural lifecycle we’re seeing.
Speaking of defections, it has also been announced that Dr. David Yen has left Sun for Juniper. Yen formerly headed Sun’s storage group and was shifted to the chip group following the restructuring of the storage and server groups under John Fowler last year.
Ever since I started covering storage, I’ve been hearing the disk vs. tape debate, usually including proclamations that tape is dead or dying.
There are good reasons to make that assertion. Disk-based backup is catching on, particularly among SMBs, and data deduplication is evening out the cost-per-GB numbers between disk and tape for many midrange applications. Disk is preferable to tape in many ways, especially because it allows faster restore times for backup and archival data. Once again, people are starting to ask: what’s the point of using tape? Dell/EqualLogic’s Marc Farley posted a funny video on his blog Friday to illustrate the question.
I’m not so sure we’ll ever really see the end of tape. When it comes to the high end, there’s simply too much data to keep on spinning disk. The cost of disk is often still higher per GB, depending on the type of disk and the type of application accessing it. And that doesn’t include power and cooling costs.
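As a back-of-the-envelope illustration of how dedupe changes that math, here’s the arithmetic in Python. Every dollar figure and ratio below is a made-up placeholder, not a quote from any vendor:

```python
# Back-of-the-envelope math behind the "dedupe evens out $/GB" claim.
# Every number here is a hypothetical placeholder, not a real price.

def effective_cost_per_gb(raw_cost_per_gb, dedupe_ratio=1.0):
    """Dedupe stretches each physical GB across dedupe_ratio logical GBs."""
    return raw_cost_per_gb / dedupe_ratio

disk_raw = 2.00  # hypothetical $/GB for a midrange disk array
tape_raw = 0.40  # hypothetical $/GB for tape media in a library

# Backup streams dedupe well (lots of nearly identical full backups), so even
# a modest 10:1 ratio can pull disk below tape on media cost alone:
print(f"disk with 10:1 dedupe: ${effective_cost_per_gb(disk_raw, 10):.2f}/GB")
print(f"tape, no dedupe:       ${effective_cost_per_gb(tape_raw):.2f}/GB")
```

The point is simply that a good dedupe ratio divides disk’s raw cost per GB, while tape cartridges sitting on a shelf keep their edge on power and cooling.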
I’ve also heard lots of good reasons to give up tape. And maybe in certain markets, like SMBs, tape will die — if it hasn’t already. But whenever tape is on the ropes, another trend comes along to boost it back into relevance. When disk took over backup, the data archiving trend kicked in, and tape’s savings in power and cooling and its shelf life for long-term data preservation came to the fore. Now, as data dedupe has disk systems vendors pitching their products for archive, too, along comes “green IT” to buoy tape.
Now, I’d like to ask the same questions Farley did, because I’m just as curious to know, and because he and I may have different audiences with different opinions. Do you think tape is dead? If not, what do you use it for? Let us know the amount of data you’re managing in your shop as well.
I love listening to NPR. I listen to, watch and read many news sources, but I find the stories they choose and the nuances they bring to their reporting refreshing. I was listening to NPR this morning when a very rare thing happened–I heard someone being interviewed whom I’ve interviewed myself. It’s not often that IT industry news makes a mainstream general-purpose broadcast, so I paid close attention.
The pundit in question was Rob Enderle, a technology analyst I interviewed last month when EMC acquired Pi. After hearing his brief comments on the current state of the US economy and how he predicts it will affect technology innovation in Silicon Valley, I called him up myself and dug a little deeper into the matter with him.
Not every storage startup went public or got acquired for big bucks over the past two years. Mendocino Software sold little of its continuous data protection (CDP) software and found no takers for its intellectual property, so on Wednesday it auctioned off whatever was left.
Mendocino did have five customers through an OEM deal with Hewlett-Packard, which rebranded Mendocino’s product as HP StorageWorks CIC.
According to an email HP sent to SearchStorage.com today, “HP has assigned a task force and is working closely with each of its five HP CIC customers to understand their specific information availability requirements and to determine an appropriate plan of action.”
According to the email, HP is offering to switch CIC customers to HP Data Protector at no charge for the software and installation, and will transfer CIC support contracts to Data Protector.
Last week, I blogged about discussions I’ve recently had with NetApp and NetApp customers about the company’s messaging and products. One of the focal points of the debate was what users understood about best practices for overhead on FC LUN snapshots. A couple of users I’d talked to prior to reporting on NetApp’s analyst day event said NetApp best practices dictate at least 100% overhead on FC LUNs, but that NetApp salespeople tell them a different story before the sale.
However, when I followed up with NetApp, officials told me in no uncertain terms that their most current best practices for FC LUNs dictate the same snapshot overhead as any other type of data: 20%.
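To make the stakes of that 100%-versus-20% disagreement concrete, here’s the provisioning arithmetic. This is a sketch with a made-up LUN size, not vendor guidance; the snapshot reserve is simply extra capacity set aside so snapshots have room to hold blocks that get overwritten on the live LUN:

```python
# Illustrative only: what different snapshot-overhead rules mean for how much
# raw capacity you have to provision for a given LUN. Numbers are hypothetical.

def provisioned_gb(lun_gb, reserve_pct):
    """Total capacity to set aside: the LUN plus its snapshot reserve."""
    return lun_gb * (1 + reserve_pct / 100.0)

lun = 500  # a hypothetical 500 GB FC LUN
print(provisioned_gb(lun, 100))  # 1000.0 GB under the "at least 100%" rule
print(provisioned_gb(lun, 20))   #  600.0 GB under the 20% rule NetApp cites
```

Across a few hundred LUNs, the gap between those two rules is the difference between buying an array once or nearly twice, which is why users care which guidance is real.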
After posting on this, I got another response from a NetApp customer disputing those statements, and it seems worth adding to the discussion. Here’s the message verbatim:
As the first vendor to make data deduplication a key piece of the backup picture, Data Domain has benefited most from the dedupe craze. And now it has the most at stake as deduplication becomes mainstream. If all the major storage vendors offer deduplication, there goes at least part of Data Domain’s edge.
That’s not lost on Data Domain CEO Frank Slootman. He sees NetApp’s decision to build deduplication into its operating system and use it for primary data, and the move by other large disk and tape vendors to put dedupe into their virtual tape libraries, as part of a strategy to marginalize the technology.
“NetApp’s and EMC’s fundamental strategy is to make deduplication go away as a separate technology,” Slootman said. “NetApp has been giving away their deduplication, and we think EMC [through an OEM deal with Quantum] will fully charge for storage but give away dedupe. They don’t want dedupe to be a separate business, or even a technology in its own right.”
Slootman says he’s not worried, though. He sees deduplication’s biggest benefit as providing an alternative to VTLs, and claims many new Data Domain customers use deduplication to replace virtual tape rather than enhance it. He calls deduplication for VTLs a “bolt-on” technology, whereas Data Domain built its appliances specifically for dedupe.
And he maintains that deduplication doesn’t make sense for primary storage. His objection isn’t technical, but strategic.
“Primary data lives for short periods of time, why dedupe that?” he said. “It doesn’t live long enough to get any benefit from reducing its size. If data doesn’t mutate, it should be spun off primary storage anyway. It should go to cheaper storage. It’s the stuff that doesn’t change that poses a huge challenge for data centers. You can’t throw it away, and it’s expensive to keep online.”
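For context on why backup streams are such fertile ground for this technology while short-lived primary data isn’t, here’s a minimal content-hash dedupe sketch in Python. It’s illustrative only, not Data Domain’s actual algorithm, and all the names and sizes are mine: repeated full backups share almost all their chunks, so only changed blocks consume new space.

```python
# A minimal sketch of content-hash deduplication -- the basic idea behind
# dedupe appliances, not any vendor's real algorithm.
import hashlib

class DedupeStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 digest -> chunk bytes (stored only once)

    def ingest(self, data):
        """Store data; return the 'recipe' of chunk fingerprints."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks are free
            recipe.append(digest)
        return recipe

store = DedupeStore()
# Two hypothetical full backups: Tuesday's differs by a single block.
monday = b"".join(("block %04d" % i).encode().ljust(4096, b"-")
                  for i in range(100))
tuesday = monday[:-4096] + b"changed".ljust(4096, b"-")
store.ingest(monday)
store.ingest(tuesday)
print(len(store.chunks))  # 101 unique chunks stored for 200 logical ones
```

Day after day of mostly identical full backups is exactly the redundancy Slootman’s appliances feed on; data that lives a few hours and changes constantly never builds it up.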