Carbonite made a big splash in the consumer space this week with the announcement that Sun Microsystems Inc. (soon to become part of Oracle Corp.) will offer a free 30-day trial of its online backup service to Sun customers who upgrade to the latest version of Java or download it for the first time.
Java’s about the most ubiquitous Web interface in the consumer world, so it’s a pretty major coup for Carbonite in its quest to compete with much bigger online backup companies such as Symantec and EMC’s Mozy. Carbonite’s press release puts Java’s reach at 800 million personal computers.
It’s unclear what proportion of that number represents Sun’s direct Java share, since Java is licensed by a number of third-party companies that develop their own custom code. But since Java prompts users for updates automatically, without them seeking out the service, it should be a pretty effective tool for putting Carbonite in front of users, whether or not they actually take the offer. (One aside here as a PC user myself – the constant “Java update available” reminders are annoying enough. If I have to click through multiple advertisements on my way to installing them, I can see getting very annoyed very quickly… so, of course, it all depends on the consumer response.)
“It’s an interesting distribution model for Carbonite,” said Forrester Research analyst Stephanie Balaouras. “It’s not clear how it benefits Sun technically, I’m sure there’s a monetary benefit.”
But this is where things get really interesting – in the course of my conversation with Balaouras I ran across a post on Jonathan Schwartz’s blog discussing a new plan to offer an “app store” in association with Java (if that name sounds familiar, it’s because of Apple’s already-popular App Store service for the iPhone and iPod Touch).
According to Schwartz:
…not all Java runtimes are the same. For most devices, from RIM’s Blackberry to Sony’s Blu-Ray DVD players, original equipment manufacturers (known as “OEM’s”) license core Java technology and brand from Sun, and build their own Java runtime. Although we’re moving to help OEM’s with more pre-built technology, the only runtimes currently that come direct from Sun are those running on Windows PC’s.
And oddly enough, that’s made the Windows Java runtime our most profitable Java platform…a few years ago, we called our friends at one of the world’s largest search companies (you can guess who), to talk about helping them with software distribution – because of Java’s ubiquity, we had a greater capacity than almost anyone to distribute software to the Windows installed base. We signed a contract through which we’d make their toolbar optionally available to our audience via the Java update mechanism. They paid us a much appreciated fee, which increased dramatically when we renegotiated the contract a year later.
The post, which is about two months old and written in anticipation of the JavaOne conference in June, goes on to announce a new business model being pursued by Sun:
The revenues to Sun were also getting big enough for us to think about building a more formal business around Java’s distribution power – to make it available to the entire Java community, not simply one or two search companies on yearly contracts.
And that’s what Project Vector is designed to deliver – Vector is a network service to connect companies of all sizes and types to the roughly one billion Java users all over the world. Vector (which we’ll likely rename the Java Store), has the potential to deliver the world’s largest audience to developers and businesses leveraging Java and JavaFX.
“Everyone,” Schwartz points out, “craves access to consumers.” Particularly in the storage and storage software-as-a-service (SaaS) markets, where consumers are the focus of growth.
Control over Java is widely considered the primary motivation behind Oracle’s intention to acquire Sun. It’s hard to imagine Sun doing any deal right now that didn’t meet with Oracle’s approval, but stranger things have happened… “This seems opportunistic, not a strategic alignment with one online backup vendor over another,” pointed out Balaouras. “It’s also a consumer, SMB play. I’m not sure how much Oracle will care at the moment.”
But if Carbonite can be distributed to consumers through Java, so could virtually any other online backup and storage service. And Schwartz’s post about Project Vector and the partnerships with search engines show Sun is willing to acquiesce to the highest bidder:
The year following [the initial search engine toolbar deal], the revenue increased dramatically again – when an aspiring search company (again, you can figure out who) outbid our first partner to place their toolbar in front of Java users (this time, limited to the US only). Toolbars, it turns out, are a significant driver of search traffic – and the billions of Java runtimes in the market were a clear means of driving value and opportunity.
It will be interesting to see if Carbonite’s competitors make any counter-moves. It will also be interesting to see how significant a channel to market Sun’s Java becomes for cloud storage vendors – could Sun have a last laugh in storage after all?
Fusion-io has taken a step towards bridging the gap between expensive single-level cell (SLC) and cheaper but slower and less reliable multi-level cell (MLC) NAND Flash.
The startup calls the new solid state drive (SSD) technology single mode level cell (SMLC) and expects to be shipping products in its ioDrive and ioDrive Duo PCI Express product lines this quarter.
Fusion-io says SMLC “combines a cost-effective MLC-based solid-state solution with the endurance and performance of SLC,” but it’s really a third option that falls between SLC and MLC in price and performance.
Fusion-io hasn’t released performance numbers, but CTO David Flynn says the SMLC drives close the gap in write speeds and endurance cycles between MLC and SLC. SMLC drives store two bits per cell and come in capacities of 160 GB and 320 GB just like MLC drives – although the SMLC drives require greater overprovisioning to reach those capacities. Generally, SLC drives write about 20% to 30% faster than MLC drives and have about 10 times the write cycles. For the most part, MLC’s shortcomings have kept it out of enterprise SSD products while SLC’s price still scares off a lot of people.
“(SMLC) is very close to SLC,” Flynn said. “I wouldn’t say it’s exactly SLC, but it’s sufficiently close for most use cases.”
Flynn says SMLC drives will roughly split the difference in cost between its performance SLC ($30 per GB) and capacity MLC ($15 per GB) drives.
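Flynn’s pricing comment lends itself to a quick back-of-the-envelope comparison. The per-GB prices below come from the article; the midpoint price for SMLC is my assumption based on “split the difference,” not a Fusion-io list price:

```python
# Rough per-drive cost comparison for the three NAND flash types,
# using the per-GB prices quoted in the article.
SLC_PER_GB = 30.0   # performance (SLC) line
MLC_PER_GB = 15.0   # capacity (MLC) line
SMLC_PER_GB = (SLC_PER_GB + MLC_PER_GB) / 2  # assumed midpoint: $22.50

def drive_cost(capacity_gb, price_per_gb):
    """Estimated cost of a drive at a given per-GB price."""
    return capacity_gb * price_per_gb

# Compare a 320 GB drive (the larger SMLC capacity) across the three types
for label, per_gb in [("SLC", SLC_PER_GB), ("SMLC", SMLC_PER_GB), ("MLC", MLC_PER_GB)]:
    print(f"320 GB {label}: ${drive_cost(320, per_gb):,.0f}")
```

At those prices, a 320 GB SMLC drive would land around $7,200, versus roughly $9,600 for SLC and $4,800 for MLC.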
Fusion-io already ships enterprise MLC drives that Hewlett-Packard sells as the HP StorageWorks 320GB IO Accelerator.
“This (SMLC) is subtly different,” Flynn says. “Now we can get endurance and performance characteristics of SLC.”
The difference is in the way the controller manages the NAND Flash, he says. “We don’t need special MLC Flash, that would defeat the purpose,” Flynn said. “The purpose is not to have special requirements.”
Dell also sells Fusion-io cards, and IBM has released test results and is committed to selling Fusion-io SSDs down the road.
None of Fusion-io’s partners have publicly signed on to the SMLC cards yet, but Enterprise Strategy Group analyst Mark Peters says SMLC will likely become a third category for NAND Flash alongside SLC and MLC, at least until NAND is replaced by better technology.
“More people will follow, because they have to,” Peters says. “It’s logical. Every piece of research we’ve done says the No. 1 reason people aren’t adopting solid state is price, and this is a move to get the price down.”
Flynn agrees that Fusion-io won’t be the only vendor with SMLC, even if others call it something different.
“We’re first, but we don’t think we’ll be the last,” he said. “It’s too compelling.”
Because the Snap Server acquisition prompted questions about its ability to protect application data, Overland Storage Inc. today introduced a new Business Continuity Appliance for server and application failover based in part on a partnership with InMage Systems Inc.
Overland acquired the Snap Server product line from Adaptec a year ago, and its move into the internal storage market with the NAS boxes’ direct-attached disks prompted customers to ask more frequently about server, operating system and application availability offerings from Overland.
“We saw deals going away from us,” Overland senior product director Kevin Wise said candidly. “BCA strengthens that part of our data protection story.”
The BCA is available in two form factors: the BCA100, a 1U pizza box, and the BCA200, a 2U chassis. The BCA100 contains enough software licenses to support up to five application servers; the BCA200 comes with support for up to 10 and can expand beyond that with additional license keys.
“The REO BCA is designed and priced for SMBs—starting at less than $24,000,” noted Enterprise Strategy Group (ESG) analyst Lauren Whitehouse in an email to Storage Soup. “The all-in-one appliance enables application-aware failover/failback to deliver near-zero recovery objectives … now SMBs have a cost-efficient alternative to tape-based backup for local operational recovery and remote disaster recovery.”
The boxes can protect application data, but they don’t perform bare-metal restores. That means customers need operating system licenses available at a secondary DR location to completely rebuild a server. Customers can also purchase application agents to support Microsoft Exchange, SQL and Windows file systems. Plans call for adding support for Oracle and Microsoft SharePoint down the road.
Wise was cagey when I asked for a complete list of the partners Overland is working with for this product, saying multiple pieces of software had been integrated into the box. Vice president of product marketing Ravi Pendekanti confirmed InMage software is at least one of the pieces of the software puzzle, contributing continuous data protection (CDP) with replication. Overland reps would not name any other partners.
“The real value of this product is in the integration and support,” Wise said.
Sure, but customers want to know what’s being integrated, I pressed. No dice. Long story short – if you’re evaluating this product, make sure to ask for all the details on what’s behind the curtain.
Happy 4th of July everybody!
Other than the extension of EMC’s bid for Data Domain last Friday, the NetApp / Data Domain / EMC drama has begun to simmer along at a more muted pitch than we saw during the initial bid and counter-bid process. For now, the storage industry is in a holding pattern, waiting to see who wins – and looking to place bets.
The prevailing wisdom so far is that, for all the seeming enmity between Data Domain’s management and EMC Corp., the ultimate decision lies with the shareholders, and it’s unlikely shareholders will choose NetApp’s mixed stock/cash deal over EMC’s all-cash bid. Some shareholders have already filed suit against the Data Domain board, saying the board failed in its responsibility to shareholders by agreeing to be acquired by NetApp.
Talk has also turned to anti-trust due diligence currently being carried out on the proposed deal by government regulators including the FTC. According to a Reuters report last week,
The U.S. government could hinder EMC Corp’s (EMC.N) $1.8 billion bid for Data Domain Inc (DDUP.O) as antitrust regulators are expected to scrutinize it more closely than a competing offer by NetApp Inc (NTAP.O).
While by far the bigger company, EMC is in a more precarious antitrust position than its smaller rival because EMC is the largest player in the market for so-called data reduction technology in which Data Domain specializes.
Both bids are being reviewed by the U.S. Federal Trade Commission, but antitrust experts and industry analysts say EMC’s offer could get delayed for weeks or months, while they expect NetApp’s to win quick approval.
However, storage industry analysts say it would be a stretch for antitrust laws to block an EMC acquisition. “It’s tough to unravel,” said Forrester Research analyst Stephanie Balaouras. “Given [that] dedupe will exist everywhere, [in both] hardware and software, I think there are plenty of options.”
In the meantime, the Motley Fool published an interesting post yesterday entitled “EMC’s Just Not That Into Data Domain Anymore”:
EMC’s (NYSE: EMC) tender offer for storage efficiency expert Data Domain (Nasdaq: DDUP) was set to expire today, so the company filed an extension until July 10. Data Domain will hold its annual shareholders’ meeting in the meantime. And none of it matters.
As of last Friday, with an already-extended deadline looming large, only 0.28% of Data Duplication’s shares had been tendered to EMC’s offer. That’s tantamount to a vote of “no confidence” in the deal…. it looks like Data Domain’s owners prefer to see the competing NetApp (Nasdaq: NTAP) offer coming to fruition…EMC would have to cough up more cash to win this battle. Even then, EMC might have to resort to downright hostilities if it really wants Data Domain…That’s just not a healthy way to get hitched, unless you want to start planning the divorce party already.
Acrimony is nothing new between NetApp and EMC, of course, but the lack of interest from Data Domain shareholders as pointed out here is quite interesting. After all this, might the original news we reported a month ago still wind up being the story, give or take a few hundred million dollars?
Curiouser and curiouser.
The saga of Broadcom and Emulex continues. Broadcom has upped the ante to $11.00 per share of Emulex ($9.25 per share was the previous offer), and dropped litigation against Emulex in a Delaware court, according to an Emulex press release this morning.
Emulex said it would review the revised offer, but I think it’s a long shot they’ll accept – it’s clear the Emulex board doesn’t want any part of merging with Broadcom. In the meantime, Emulex is advising its shareholders to take no action while it reviews the new bid.
Hitachi Data Systems’ (HDS) AMS 2000 series got a touch-up today with the announcement of some incremental updates to the midrange disk array.
HDS is making two updates available now – a new High-Density Storage Expansion tray and a NEBS-certified DC power option for the 2500 model.
The High-Density Storage Expansion Tray holds up to 48 one-terabyte SATA disk drives in 4U; existing AMS trays hold 15 SAS or SATA drives in 3U. The maximum number of drives supported in the 2500 (480) hasn’t changed, but the maximum configuration now takes up one less rack than with the 15-drive trays. Good news for users focusing on storage and energy efficiency. A fully loaded high-density tray is listed at $83,260.
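The space savings are easy to check with the figures above: 48 drives per 4U high-density tray versus 15 per 3U standard tray, against the 480-drive maximum. The arithmetic here is mine, not HDS’s:

```python
import math

def rack_units_needed(total_drives, drives_per_tray, u_per_tray):
    """Rack units of tray space required to house a given drive count."""
    trays = math.ceil(total_drives / drives_per_tray)
    return trays * u_per_tray

MAX_DRIVES = 480  # AMS 2500 maximum drive count

standard_u = rack_units_needed(MAX_DRIVES, 15, 3)      # 32 trays x 3U = 96U
high_density_u = rack_units_needed(MAX_DRIVES, 48, 4)  # 10 trays x 4U = 40U

print(f"Standard trays:     {standard_u} U")
print(f"High-density trays: {high_density_u} U")
```

A fully loaded configuration drops from 96U of tray space to 40U, which is where the smaller rack footprint comes from.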
The AMS 2000 series has had the option of running on battery power (DC) since the arrays were first announced last fall, but the new 2500DC model has been certified as compliant with the Network Equipment Building System (NEBS) standard for use in telecom and other “lights out” environments.
According to HDS senior product marketing manager Mark Adams, there’s little technical difference between the certified and non-certified versions, but the certified version “has been proven operational through intense earthquake activity” and certified by an independent lab. Another difference between the NEBS-certified and non-NEBS certified models is the price: the compliant list price is $102,870, while the non-compliant list price is $92,500.
Later this year, HDS will make 8 Gbps Fibre Channel host ports available for the AMS 2300 and AMS 2500 models (internal disks will remain SAS or SATA). Security features to become available in the second half of 2009 include support for external authentication, meaning the AMS array and authenticating server don’t have to reside on the same network. Finally, as announced last week, HDS is extending its Dynamic Provisioning (HDP) software to run on the AMS in addition to the high-end USP-V.
User Matt Stroh, SAP business administrator for Wisconsin-based Industrial Electric Wire and Cable (IEWC), said he’s looking forward to deploying thin provisioning for the AMS 2300 he bought to replace an EMC Clariion CX-300 and HDS AMS 500 at the beginning of the year. “I’d like to get my hands on that as soon as possible,” he said. “We have a lot of file systems just storing SAP and Oracle binaries, and I don’t need much storage for them, but I’ve been giving them a big chunk anyway.”
While dynamic provisioning is going to be available for AMS, the Zero-Page Reclaim feature recently announced for the USP-V version of HDP will not be available for the foreseeable future, according to HDS officials, who have not disclosed a technical reason why that’s the case.
(0:23) DataDirect Networks Web Object Scaler (WOS) challenges EMC’s Atmos in the cloud
(6:52) Emulex plans cloud HBA
A new private-cloud SaaS player launched this week, with plans to combine VMware and Data Domain products into an off-site disaster recovery service with a money-back recovery time service level guarantee.
Simply Continuous, based in San Francisco, is offering two services: Data Recovery Vault and AppAlive. Both involve the use of Data Domain’s DD series appliances at the customer site, which replicate to Data Domain appliances at the Simply Continuous data center. AppAlive adds bare-metal restore of servers from virtual hot standbys stored by Simply Continuous, which can also perform the conversion of physical servers to virtual ones using VMware’s vConverter tool.
Founder and CEO Tom Frangione said Simply Continuous will charge for capacity according to actual physical data stored, rather than by ‘virtual’ data – so if a user’s 20 TB are compressed to 1 TB by Data Domain’s dedupe algorithm, Simply Continuous will charge the user for 1 TB.
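In other words, billing tracks post-dedupe capacity. A minimal sketch of that pricing model (the function name and the 20:1 ratio are illustrative, not from Simply Continuous):

```python
def billed_capacity_tb(logical_tb, dedupe_ratio):
    """Capacity the customer pays for: logical data divided by
    the reduction ratio achieved by the dedupe appliance."""
    if dedupe_ratio <= 0:
        raise ValueError("dedupe ratio must be positive")
    return logical_tb / dedupe_ratio

# The article's example: 20 TB of logical data reduced 20:1
print(billed_capacity_tb(20, 20))  # -> 1.0 TB billed
```

The practical upshot is that highly redundant data sets (repeated full backups, for instance) cost far less to protect than their logical size suggests.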
Both services also come with a recovery time service-level agreement (SLA), based on the type and amount of data stored. The SLAs first guarantee that data will be recoverable on demand, and then set a maximum recovery window for data. According to a copy of the SLA provided to Storage Soup, the consequences for Simply Continuous are as follows:
- If Data Recovery Vault is not available at our expected 99.9% rate in any calendar month, we will give [the customer] a credit toward the next month’s service.
- If that happens three times in any 12 month period, [the customer] can terminate the contract.
- If [the customer] cannot recover data in the agreed upon time frame, we’ll give [the customer] 3 months service credit toward future services.
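Those terms translate into straightforward remedy logic. A hypothetical sketch of how the credits and termination right might be evaluated (the function names and structure are mine; the thresholds come from the SLA excerpt above):

```python
def monthly_credit_months(availability_pct, recovery_missed):
    """Service credit owed for one month under the SLA terms:
    1 month's credit if availability falls below 99.9%, plus
    3 months' credit if a recovery missed its agreed window."""
    credit = 0
    if availability_pct < 99.9:
        credit += 1
    if recovery_missed:
        credit += 3
    return credit

def may_terminate(monthly_availability_pcts):
    """Customer may terminate if availability misses 99.9%
    three or more times in any trailing 12-month window."""
    misses = sum(1 for pct in monthly_availability_pcts[-12:] if pct < 99.9)
    return misses >= 3
```

For example, a month at 99.5% availability with no missed recovery would earn one month of credit, and a third availability miss within a year would open the door to termination.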
Customers can also monitor their own storage capacity at Simply Continuous with tools the service provider makes available through its web portal, including SNMP trap reports and a Salesforce.com-based help-ticketing system. The company is targeting users with between 1 and 100 TB of data. Pricing depends on capacity. Frangione said the company, which received $10 million in a recent series A funding round, has signed up about 20 customers since last November.
The launch of this company comes after some discussion this spring about the use of service providers for the backup and offsite DR storage of business data, after a well-publicized lawsuit between backup service provider Carbonite and its former storage provider. Enterprise Strategy Group founder Steve Duplessie urged enterprise users to seek out service provider offerings that included service-level agreements. Backup SaaS provider SpiderOak said SLAs will soon be available, though both SpiderOak reps and Carbonite CEO David Friend have pointed out that offering SLAs, especially SLAs that include geographic redundancy, raises the cost of the service for customers. Either way, both say SLAs, when and if they arrive, will come not to public-cloud consumer-oriented services, but to separate business or enterprise offerings.
Are lithium-ion batteries running out of juice as a method to protect cache in storage arrays?
There’s probably still a lot of life left in batteries in arrays, but Adaptec today unveiled an alternate approach. The Adaptec Series 5Z RAID controllers use flash memory powered by super capacitors instead of batteries.
Capacitors store energy until it’s needed, providing enough power to destage cached data to Flash. Batteries, by contrast, are in constant use, require monitoring, and lose power over time. Adaptec director of marketing Scott Cleland says the super capacitors last longer, require less maintenance and cost less to operate than batteries. Adaptec expects to sell the 5Z controllers through integrators and resellers, mostly in entry-level and remote office systems.
“Having a battery has been a necessary evil,” Cleland said. “It goes against everything RAID stands for. RAID is about availability without touch.”
Cleland says the 5Z controller is “like having a USB stick on steroids integrated in a system.”
Adaptec isn’t the first storage vendor to use a capacitor in place of batteries. Dot Hill Systems introduced a storage controller with super capacitors two years ago, and recently was granted a patent for a “RAID controller using capacitor energy source to flush volatile cache data to non-volatile memory during main power outage,” according to a vendor press release issued today. Fujitsu also uses a capacitor to back up cache in its Eternus DX midrange storage systems.
“Today it’s available in SANs,” Cleland said. “We’re making it available for everyone else – in appliances, the departmental space, SMBs, not just the high-end Fibre Channel space.”
But Data Mobility Group analyst Joe Martins wonders if this is a solution in search of a problem, because battery life isn’t a big complaint among storage administrators. Still, Martins thinks capacitors can catch on if they work as advertised.
“I never knew it was a problem,” Martins said. “I suspect that this is one of those undercurrents where people don’t know they have the problem until you point it out. It’s like when using Windows you become accustomed to the screen freezing, and after awhile it’s just something you get used to. It’s not thought to be a problem until you encounter something else. A lot of folks may not like the situation as it is, and they may have lost data and travelled miles and miles to get to a data center and thought ‘this is the way it is, there’s no alternative.’ Maybe it will become a requirement as more vendors do it.”
Of course, larger vendors must embrace capacitors before they become a requirement.