Ocarina Networks, which came out of stealth in April, claims its compression appliance will reduce file data on primary storage systems. Its main competitor, StorWize, applies standard (2:1) compression to files, but Ocarina claims 10:1 compression and the ability to compress already compressed objects, such as video and photos. The company even claims its algorithms can be used to create a 3-D cube of numeric values to represent a photo or video image, so it can recognize elements that it has “seen” before. Pretty interesting, albeit ice-cream-headache-inducing stuff.
So it was puzzling to see the announcement of the Ocarina Compression Prize, a $1 million fund that will be doled out in $10,000 increments for each submission that advances the current best scoring compressor by at least 3%. Isn’t the idea supposed to be that Ocarina has the most compression expertise in the market?
“A lot of our compression work is already based on prior art,” CEO Murli Thirumale told me. The idea, he explained, is to make this contest a “category builder,” raising interest in the subject of primary storage compression. “A lot of compression work is focused around transmission of files, rather than reducing them for storage. We want to encourage the best minds in compression to address it in that context.”
So I guess it doesn’t matter how many cool algorithms you can bring to the table if there isn’t really a market yet. “As there’s more widespread adoption [for products], clearly [vendors] with a leadership stance will benefit more,” Thirumale said.
“Good compression has a history of coming from independent researchers, open source or anywhere that can foster easy standardization and non-proprietary code,” Taneja Group analyst Jeff Boles says. “So this seems like a pretty good approach to me. Interesting stunt to boot.”
The initial prize fund will include awards for three categories: JPEG 2000 recompression, h.264 video recompression and an industry file mix for engineering CAD file types. Maybe Riverbed, Silver Peak and Autodesk will jump in on that last one.
The political conventions are over, but Sun CEO Jonathan Schwartz is spinning Sun storage in a way that would make any candidate proud.
In his latest blog, Schwartz points to recent figures from market research firm IDC as validation of Sun’s open storage strategy. Those numbers, released last week, showed Sun with the greatest increase in overall storage revenue among the major vendors – up 29.2 percent. Gartner also chimed in, placing Sun’s growth at 34.7 percent in external storage revenue, again tops among the large vendors.
But it’s too early to praise Sun for a great turnaround. Sun ranks seventh in external storage in both lists – behind EMC, Hewlett-Packard, IBM, Hitachi Data Systems, Dell, and NetApp, and fifth on IDC’s list of all storage sales. In each case, Sun’s market share is in single digits.
Sun’s own figures aren’t nearly as cheery for storage. Sun reported a modest revenue increase of 3.9% year-over-year in its earnings report for last quarter – the same quarter IDC and Gartner were reporting on.
It’s also hard to attribute any gains to the open storage initiative. Sun is growing storage revenue for the same reason almost every other vendor is: much more data is stored digitally than ever before, and that trend is still accelerating.
IDC and Gartner attributed Sun’s rise to big increases in midrange and enterprise disk products, and those systems don’t reflect increased open storage use. Those systems aren’t even Sun’s IP — they come from OEM deals with Hitachi, LSI, and Dot Hill.
Sun’s Thumper is tied to open storage, but Thumper generated only $100 million in revenue for the fiscal year that ended June 30, according to Schwartz’s blog. Thumper revenue grew 80 percent over last year, but it is still “relatively small,” as Schwartz put it on the earnings call.
Perhaps there will be reasons to cheer Sun storage soon, although the jury is still out. Sun seems to be keeping up with the market in its embrace of solid state disk and may soon see the fruits of its Fishworks project, which could help drive open source storage. According to Schwartz:
Now, our view is “OpenStorage” (systems built from commodity parts and open source software) will grow far faster than the proprietary storage market. We plan on driving that growth, and over the next few months, you’ll see a tremendous amount of storage innovation targeting the growing breadth of customers wanting better/faster/cheaper/smaller options. Expect to see flash, zfs, dtrace, and good old fashioned systems engineering play a very prominent role in an aggressive push into the storage market.
Time may prove Schwartz right about open storage. But we’ve seen no evidence of any great success yet.
Optical archiving vendor Plasmon revamped its management team last December and has since rolled out a new marketing strategy focusing on offering multi-tier archiving packages with partner NetApp (see “NetApp Plasmon’s Trojan Horse in Enterprise Data Centers,” July 16).
But according to a statement released by Plasmon Aug. 8, so far that strategy hasn’t been bearing fruit. The company saw disappointing sales for the first quarter of its fiscal year (which began April 1), 20 percent below earlier predictions. “There have been some encouraging signs, including a fast-growing pipeline, especially for our newest products,” VP of global marketing Patrick Dowling said. “We remain committed to our strategy – it’s just that we’re not getting the results from sales yet.”
Today, the company notified investors that it has been approached by a private equity firm (Dowling declined to name the firm, though some reports say it’s a U.S.-based company) with an offer to take the company private for $25 million, or 0.25 pence per share on the U.K. stock exchange where it’s currently listed. It’s not a done deal yet – there’s still due diligence to be done and shareholder agreement to get, among other things. But “it’s our best viable option,” according to a Plasmon statement.
Adaptec’s Series 2 and Series 5 SAS/SATA RAID controllers can now spin down disk drives from several drive manufacturers — Hitachi, Fujitsu, Seagate, Western Digital and Samsung.
Adaptec director of worldwide marketing Suresh Paniker said spin-down is already a part of the serial drive interface specs, so no API or special integration is required for Adaptec’s product to put drives into idle mode or power them off entirely. Sensitive data that may be unexpectedly accessed, such as registry information for Windows apps, can be kept on battery-backed cache within the controller.
The controller will set the rotation, and power draw, of the drives at three levels: normal, standby and power-off. The speed and power draw of the standby stage will vary by drive manufacturer, but generally drives on standby will require 7 to 10 seconds to return to normal operation and will draw between 5 and 7 watts per drive. Power-off requires 20 to 40 seconds for the disk drive to spin up, and though not spinning, the rest of the drive’s internal mechanisms will still draw about 3 watts of power.
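To put those figures in perspective, here’s a back-of-the-envelope calculation of what spin-down might save on a hypothetical 12-drive shelf. The active-power figure and idle hours are my own assumptions, not Adaptec’s numbers; the standby and power-off draws come from the rough per-drive figures above:

```python
# Illustrative spin-down power math (all figures assumed, not vendor specs).
ACTIVE_W = 10.0   # assumed draw of a spinning SATA drive, watts
STANDBY_W = 6.0   # midpoint of the 5-7 W standby range cited above
OFF_W = 3.0       # non-spinning drive's remaining electronics, per above
DRIVES = 12
IDLE_HOURS_PER_DAY = 16  # e.g. a shelf holding nightly backup data

def daily_savings_kwh(idle_watts: float) -> float:
    """kWh saved per day by dropping idle drives from active draw."""
    return DRIVES * (ACTIVE_W - idle_watts) * IDLE_HOURS_PER_DAY / 1000.0

print(f"standby:   {daily_savings_kwh(STANDBY_W):.3f} kWh/day saved")
print(f"power-off: {daily_savings_kwh(OFF_W):.3f} kWh/day saved")
```

The trade-off, of course, is the wake-up latency described above – 7 to 10 seconds from standby, 20 to 40 from power-off – which is why spin-down policies tend to be reserved for rarely accessed tiers like backup and archive data.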
The feature will be made available this week through distributors to resellers, OEMs and end users, along with new fields in the controllers’ software interface to manage spin-down policies. Getting the controller and other system parts from a distributor is a cheaper way of building your own storage system, and to some extent, the availability of this feature from Adaptec suggests MAID is going mainstream. But obviously, some assembly is required.
Companies like to try to bury bad news by disclosing it on a Friday, so it’s no surprise that Overland Storage issued a press release about layoffs on the day before the three-day Labor Day weekend. There is really little surprise around Overland’s 53 layoffs, which come to 13 percent of its employees. It’s the next step in the company’s transition from a tape vendor to a disk vendor as it fights for survival. Overland acquired the Snap Server NAS product line from Adaptec for $3.6 million in June, but lost $8.6 million last quarter and had $9.7 million in cash at the end of the quarter. The restructuring is expected to save around $10 million a year.
Overland CEO Vern LoForti said on the last earnings conference call that the company is close to completing financing to support the Snap business. A company spokeswoman says that financing is still in place, which means Friday’s layoff did not come about because financing fell through. But even that financing would not be enough without layoffs.
“… our recent acquisition of the Snap Server business facilitates our entry into the distributed NAS market, and initial customer response has been very positive,” LoForti said in a statement about the layoffs. “The Snap acquisition did, however, result in a substantial increase to our operating expense base. Having recognized the need to rationalize the newly combined business, we have examined all areas of the company in order to streamline and focus on the geographic regions and product initiatives that offer the most immediate return on investment.”
So basically, it comes down to replacing 53 jobs on the tape end of the business with about the same number acquired with Snap.
IBM. What to make of them these days when it comes to storage?
It’s a question I’ve heard asked a lot this week in my conversations with industry watchers and in my blog reading. Much of it came in the wake of the leak (again) on IBM’s European website of information about an upcoming product announcement.
“This now makes two ‘new platform’ storage announcements from IBM where they simply post a Web page regarding a completely new storage product on their European site and call it a day,” wrote Chuck Hollis in a blog post that got the word out about the leak. “Has IBM decided to focus its marketing efforts elsewhere, and decided not to bring much attention to their … storage business?”
The “announcement” of the XIV clustered block storage array in similar fashion earlier this month prompted similar head-scratching. More worrying, if I’m IBM: analysts have begun to sit down and dig through the XIV specs IBM released to the market without a single PR person or marketer accompanying them with a message.
“Where’s the beef?” is the phrase I’ve heard used at the end of the analysts’ analyzing. Robin Harris’s StorageMojo blog post is a pretty good representation of the questions I’m also hearing from others in the wider market.
“I hope there is a cohesive strategy behind the XIV product. But so far I’m not able to even guess what it might be,” Harris concluded. “Maybe the decades of warfare between geeks and suits has so totally paralyzed the product marketing function that even the normal IBM facade can’t cover the cracks. It must be something.”
I’m no PR expert, but I have to believe this is what you have PR and marketing for – to at least try to counteract speculation like this. I’ve heard differing opinions on the reasons for the leaks this week – some close to Hollis’s, and others who say IBM has always done this kind of pre-release Web posting (other companies, like Hewlett-Packard, have been known to do it, too). The problem is, there are many more people nowadays scouring the Web for every morsel of information they can dig up. And IBM’s competitors can quickly criticize those products via blogs, putting spin on IBM’s products before IBM does.
Perhaps the most perplexing part is that IBM is just letting rivals take their shots. As far as I can tell, they haven’t responded at all to the criticisms levied by competitors and analysts. And I can’t figure out why that would be. The cat’s out of the bag. The specs are out there. Pretending it hasn’t been announced yet and declining comment isn’t going to change that.
This isn’t the first wondering I’ve done this year about IBM. I’ve also wondered what the deal is with their DS6000 array (which I’ve been assured still exists, though not much more information is forthcoming). I’ve wondered what the deal is with thin provisioning for the DS8000. My news director, Dave Raffo, asked them what the deal is with MAID, dedupe and thin provisioning at this year’s SNW, and got a lot of fairly vague answers.
In fairness, IBM has since acquired Diligent Technologies, finally adding dedupe to their backup hardware product line. But in the dedupe wars (which you can bet are still raging), IBM has been relatively silent.
Instead, yesterday, they sent out a press release saying they’ve developed and tested SSDs at 1 million IOPS. The press release is chock-full of verbiage about how much more technical and expert IBM researchers are and what a wealth of knowledge IBM brings to the SSD table, none of which I doubt.
But the thing is, that’s it. They’ve tested these things as part of Project Quicksilver. IBM labs are the studliest and most advanced in the world. The end, except for an intriguing but vague passage about some future products —
IBM Research has developed breakthrough data center provisioning technology that automatically understands and balances the utilization of diverse storage components in the information infrastructure, including solid-state storage. Additionally, to get the most value from high performance system resources in storage, IBM Research patented key technologies that help maintain required quality-of-service for higher priority applications.
I asked an IBM spokesperson when we’ll see product come out based on what was tested for this press release, and got the following response. “To clarify, there is no timeline/commercialization plan to discuss at this time and we’re not announcing a specific product.” As for the management software (I’m assuming), “we’re not going into specifics at this point.”
To be fair, I’ve heard some criticism recently of other vendors coming out with product pre-announcements months before product availability. But everyone in the industry has by now either launched or announced they will launch solid-state support. IBM, with its server business and experience developing memory technology, ought to be ahead of this pack. Instead, despite the fact that it’s clear they wouldn’t be testing such a thing if there were no potential revenue stream attached, they aren’t saying much else about it.
Maybe the folks running IBM storage think they don’t have to say anything. They’re still an established behemoth with a large, loyal customer base. The phrase “no one ever got fired for buying from IBM” is still thrown around, and IBM officials have argued that customers are willing to wait for whatever technology is fashionable until they can get it in vetted form from IBM. Given its ginormous customer base, IBM says, its testing and QA processes are much more involved than other vendors’, and hence it takes longer for new technologies to hit the streets from IBM – but customers are willing to wait for the extra assurance.
Good points all, and storage buyers are a conservative lot. But IBM spent $300 million on a product it hasn’t yet promoted except to cast it as the new crown jewel of Big Blue storage. Meanwhile, people in the marketplace are beginning to tear it apart before anyone sees a PowerPoint slide. People are beginning to wonder whether it wasn’t really Moshe Yanai IBM was after, and whether it had to buy his startups to get him. People are starting to speculate about what’s going on internally at IBM – a battle between geeks and suits, or that IBM is ashamed of its storage products and is hiding them. Competitors are having a field day, and IBM is doing nothing to counteract any of it.
What is the deal?
Detroit police are investigating the Aug. 19 death of Cisco marketing executive Benjamin Goldman, 42, who was found fatally shot outside a strip club called the Penthouse on Detroit’s Eight Mile, according to reports. So far, no one is in custody.
According to San Jose Mercury News coverage of Goldman’s memorial service, he worked 16 years at Cisco in customer-facing marketing roles.
This was originally scheduled for May, but after some delays the CERN Large Hadron Collider, which some believe will create a black hole that will swallow the Earth (beginning with France), has been put through its paces on its first test runs. According to the latest reports, startup is now set for Sept. 10. As a great man once said, “hang on to your butts.”
Personally, though, I’m a little more concerned today with reports that an upgrade to the U.S. terrorism database is not going well. But perhaps we’ve gotten to the bottom of why so many random people are on the No-Fly List. Ain’t technology grand?
Symantec Corp. today released the results of its 2008 survey of 1,000 IT managers and decision makers about disaster recovery. Among its findings was a decrease in C-level executive involvement in DR planning compared with the results of the 2007 survey, which Symantec officials said they found alarming.
In the 2007 DR survey, 55 percent of respondents said that their DR committees involved the CIO/CTO/IT director. In 2008, that number dropped to 33 percent worldwide.
“Executive complacency could be attributed to the improvement in DR testing successes,” according to the company’s survey report. Delegation of tasks to lower-level managers once the C-suite sets overall DR goals could also be at play, conceded Symantec director of product marketing for Data Protection Marty Ward. However, the survey results remain a cause for concern at Symantec, Ward said. “It’s more likely that DR is still just not seen as a basic requirement for companies – there also haven’t been as many current events lately that spur people into thinking about disaster recovery.”
As for that last statement, let’s all just take a moment to knock on wood. Meanwhile, Symantec says other results of the survey, like the fact that only 14% of chief security officers are involved in DR, point to complacency rather than delegation.
Other key findings of the study:
- Although one third of organizations have had to execute a disaster recovery plan, just under half say they can get fully operational in a week.
- The number of applications that IT managers believe are business critical has increased 20 percentage points over the previous year’s data, and only about half of those applications are covered in DR plans.
- Virtualization is driving organizations to reevaluate their DR plans.
- Organizations report that DR testing impacts customers, sales and revenue because of the lack of tools that can address both virtual and physical environments.
On that last one, a recent customer case study we ran on the site attests to the issue. It’s tough enough for companies to classify all data and arrange for tiered recovery while maintaining accurate and realistic RTOs and RPOs. So tough, in fact, that very few companies I’ve come across have even reached the frontier Northeast Utilities came up against – keeping the DR plan current and in working order without the operational bandwidth to complete live tests.
The analogy I’d use for this situation is to another unpleasant task – dieting. If initial DR planning is like losing weight, continued monitoring and updating of the environment is like keeping it off – in other words, the really hard part. According to the 2008 Symantec survey results, only 30 percent of tests meet RTOs. Only 31 percent of respondents reported that they could achieve baseline operations within one day if a significant disaster obliterated their main data center. Only 3 percent believed they could have skeleton operations running within 12 hours.
Not all is doom and gloom, though. “Don’t get me wrong, there has been a tenfold increase in testing over the last decade, and one of the most encouraging things about the 2008 survey is that it showed that not only are people testing, but more people are testing successfully,” Ward said. Last year, 50 percent of DR tests failed. This year, that number was 30 percent. “But there are still ongoing issues.”
My Google Reader isn’t quite as busy as Robert Scoble’s, but it gets a decent workout each week. Between that, the wires and all the different pitches I get – not to mention the interesting stories I come across that are more general IT than storage-specific – I usually end up with a backlog. Every so often, I’ll clear out that backlog with a link dump. Here’s one for this week:
NetApp’s Simple Steve on how to recover corrupted photos. [Simple Steve: Photo Recovery]
The Storage Anarchist, who already broke the arrival of IBM’s XIV array, keeps pounding away at IBM. [The Storage Anarchist: How much does a free XIV array really cost?]
In case you haven’t heard, former Dell/Equallogic evangelist Marc Farley has signed on with 3PAR. One of his first vids for the 3PAR blog features mad props for the above mentioned Storage Anarchist, with low-tech farm animals in the background. [StorageRap: Props to Anarchist for Blogging Coup]
Back to storage (well, sort of). Another really enjoyable post from Steve Duplessie, with humorous anecdote about his “militaristic” attempts to recycle, how his town has thwarted them, and how it all ties in with green IT. [Steve’s IT Rants: Hybrid IT]
Okay, back to storage: Curtis Preston offers his advice for home data protection. [Backup Central: Friends & Family Computer Recommendations]
While EMC’s Anarchist keeps IBM busy, another EMC’er picks on NetApp’s VTL. [The Backup Blog: NetApp’s VTL is “Dangerous”]
Amazon adds more cloud storage, this time for its EC2 platform. [TechCrunchIT]
When I got to college, all I got was a POP email account and some spectacularly crappy dining hall food. Kids these days are getting iPhones and iPod Touches. Also, I just said “kids these days”, meaning I’m officially old. Thanks a lot, New York Times. [NYT Technology: Welcome, Freshmen. Have an iPod]
The San Jose Mercury News has an employee’s-eye look at the Agami shutdown. [Promising start-up abruptly shuts down]
Finally, if you only check out one item from this list, make it this one. A new blog, Where is Bob? Tales of an Absentee Manager, is one I recommend bookmarking for anyone who works in IT. It’s kind of like the IT blog equivalent of Office Space, and even involves storage-related hilarity (yes, you read that correctly):
I could see sweat forming on Marek’s forehead. I marveled at his self control, and wondered whether he was practicing zen meditation when he wasn’t hacking into the Pentagon.
“Bob.” He was speaking slowly, enunciating every syllable. “Do you know the meaning of words, back-up and eve-ry-thing?”
“What?” Bob was laughing, he was clearly in good spirits, and Marek’s accent often amused him.
“Backup. Everything.” Marek repeated even slower. I saw a few blood vessels rupture, and his left eye began to twitch violently. I knew that I had to intervene.
“Now look, Bob. What you are asking just doesn’t make sense,” I said. “You can’t have a backup of everything. You need a backup of a particular thing at a particular time.”
“I need a backup of all our servers for all time.” So, he knew that we had servers. I underestimated Bob. But he clearly didn’t understand the passage of time, so perhaps I still had an advantage.
“That’s impossible, Bob. Can’t be done.” It was one of those times when you begin regretting what you said before you even finish saying it.
“Can’t be done!” He didn’t say it like a question, and I knew what was coming. “You are one of those people who say NO all the time. No, we can’t write our own operating system! No, we can’t have a backup of everything! People hate that! You impede progress!”
“Ok, we’ll do it.” Marek gave me a classic crazy-girl-what-are-you-doing look. “Come back next Wednesday.”
When Bob returned to work on Thursday, he forgot about his outlandish backup request, and left us alone. Unfortunately, Bob forgot to mention that we were in violation of a university mandate to have redundant copies of our backups stored in an off-site location. He received the notice about our lack of compliance along with a detailed write-up of the policy. He compressed the forty page document into three incongruous words – backup of everything. So, when we learned about the violation, Marek and I had to postpone all our other projects and commitments, and scramble to make duplicates of critical backups to be sent off site along with other disaster recovery tools and documents. [Where is Bob? Welcome Party for Dave, Part I]