If you think the hype about cloud computing has peaked, think again.
Vendors who build their hardware and software around cloud computing say it has taken firm hold in the service provider market and is getting ready to move into the enterprise.
Sajai Krishnan, CEO of cloud storage startup ParaScale, says service providers are heavily committed to the cloud and others are coming around as they grasp its value.
“The cloud is still a fuzzy concept, despite all this religion around it,” Krishnan said. “You need to spend a couple of hours talking about it, then the light bulb goes off. But for people who only read the occasional article, it still raises more questions than it answers.”
He thinks enterprises are ready to turn to clouds, initially with the help of service providers.
“It’s not either/or,” he said. “For certain large applications, we’ve seen folks roll out an internal cloud and for other things they go external. When you have an application that’s running inside an enterprise four or five years, take it out and put a VMware wrapper on it, test it and put it back into production, an IT shop doesn’t have the cycles to do that. If a service provider has the expertise to handle something like that, it drives up the value of cloud storage inside these companies very quickly.”
During his company’s earnings call with analysts Monday night, 3PAR CEO Dave Scott said the storage systems vendor is anticipating a shift toward the cloud. 3PAR has long billed itself as a utility storage company and counts service providers as large customers.
“We believe that we are in the midst of a major secular trend with cloud computing as an ultimate replacement of much of the information technology that is currently owned and operated by enterprises,” Scott said.
For now, much of that trend is driven by service providers.
Tier1 Research pegs the cloud service market at around $300 million this year with cloud storage making up about 40 percent to 60 percent of that. Tier1 analyst Antonio Piraino says he considers storage “low-hanging fruit” among cloud services.
Krishnan says the service provider market is starting to split into three areas. One is the mass-market cloud service providers such as Amazon S3, Google, and Rackspace. The second area is smaller providers who combine virtualization, multi-tenant storage clouds and private hosted clouds. The third segment consists of the large telcos such as AT&T, Verizon Business, and Deutsche Telekom.
“Service providers are beyond the confusion and in the fourth quarter you’ll see a whole slew of announcements around cloud services,” Krishnan said. “They know the technology. On the enterprise side, there’s still a lot of confusion. When you talk about cloud, Amazon comes to mind first.”
Alliance Storage Technologies Inc. (ASTI), the Colorado company that bought Plasmon’s assets last January, says it is selling the Plasmon product line whole and hopes to significantly expand its business based on the proprietary Ultra Density Optical (UDO) technology.
ASTI was a Plasmon reseller before the U.K.-based archiving vendor went under after years of financial problems. ASTI picked up Plasmon’s assets for an undisclosed sum, leased Plasmon’s Colorado Springs manufacturing plant, hired many of its employees and is now stepping up marketing of Plasmon products. ASTI will keep the Plasmon brand name and is selling its UDO appliances, drives, libraries and media after rebuilding its channel with new VARs and integrators.
“It’s an identical product lineup as Plasmon’s,” said Bill Gallagher, a former Plasmon exec who is now ASTI’s director of strategic accounts and regional sales director. “I don’t think Plasmon’s failure was a failure of technology. The company suffered for years with restructuring and trying to get its financials in order. Alliance is profitable, and we haven’t seen any change in demand. Customers are happy, they wanted to see what would happen.”
ASTI has sold optical storage for more than 10 years, carrying products from Hewlett-Packard, IBM, and Sony as well as Plasmon. ASTI CEO Chris Carr says the company is committed to the future of UDO. “Last year we caught wind of Plasmon’s financial difficulties and we saw an opportunity,” he said. “Specifically, we were looking for UDO technology.”
UDO discs hold up to 60 GB and are supposed to last for more than 50 years. Plasmon’s largest libraries have 638 slots and store 38.3 TB. ASTI execs claim Plasmon shipped over 17,000 libraries. ASTI will not honor service contracts for Plasmon customers but is offering discounts on new contracts, Carr said.
Cisco is adding performance and security features to its MDS 9000 Fibre Channel SAN director platform to make it more palatable to mainframe shops.
The enhancements will speed replication on mainframes over the WAN and add encryption and management capabilities along with 8 Gbps FICON support. The idea is to improve Cisco’s FICON performance as IBM begins phasing out older ESCON mainframe connectivity devices, which will force customers to swap out mainframe switches and HBAs.
“All those ESCON directors are going to be taken out and need to be replaced,” Enterprise Strategy Group analyst Bob Laliberte said. “It will be a phased type of thing, and it presents a great opportunity for Cisco to get penetration on mainframes.”
Cisco’s enhancements include:
• Cisco XRC Acceleration, which speeds the performance of IBM z/OS Global Mirror – formerly known as Extended Remote Copy (XRC). Cisco XRC Acceleration caches data to reduce latency over the WAN and speed replication.
• Cisco TrustSec Fibre Channel Link Encryption works on data that goes over any native Fibre link through an upgraded 8 Gbps linecard on either end of Cisco Inter-Switch Links (ISLs). The encryption works on FICON and open systems.
• Cisco MDS 9000 I/O Accelerator, a SAN-based fabric application that speeds replication to disk or tape for disaster recovery.
Cisco MDS directors have been more popular in open systems environments than mainframes. When Cisco entered the Fibre Channel switch market in 2003, rivals McData and InRange had a lock on the mainframe space. Both companies are now gone, and Brocade holds the IP from each through acquisitions. Brocade, Cisco’s storage competitor, still sells McData directors and will likely fold that mainframe IP into its DCX and other new-generation directors, but Cisco is looking to persuade organizations with ESCON to switch to MDS.
Laliberte says the XRC Acceleration and Cisco’s ability to re-map FC ports from old directors to MDS 9000 directors by using VSANs should prove especially helpful for mainframe customers.
Bob Nusbaum, Cisco’s software product line manager for the MDS, says, “IBM’s phase-out of ESCON is a strong signal that ESCON users should transition to FICON.”
Nusbaum estimates there are millions of ESCON devices still in use, although the migration to FICON has been going on for years. “If it was easy for customers to get off of it, they’d have done it already,” he says.
LSI Corp. acquired NAS vendor ONStor today, extending a wave of storage acquisitions that will likely continue for at least a few more months.
LSI got a good price. It paid $25 million for a company that had close to $140 million in VC funding. But for now LSI isn’t talking about its plans for ONStor because it is in its “quiet period” ahead of its earnings report next Wednesday. An LSI spokesman said the company will talk about the acquisition on its earnings call.
But one thing is obvious. “Now LSI is in the NAS business,” StorageIO Group analyst Greg Schulz says. “LSI already sells storage to Dell, IBM, Sun, SGI and others. This is a golden opportunity to go in and provide a turnkey box to go in front of the boxes they already sell.”
ONStor was among the vendors talking IPO at the start of 2008, only to fall on hard times when the economy tanked. It completed a funding round of less than $10 million in December, with only existing VCs kicking in – apparently a move to keep it going long enough to get acquired.
ONStor also began a technology change this year, adopting the Zettabyte File System (ZFS) developed by Sun as its primary architecture and bringing out the ZFS-based Pantera LS2100 in April. The LS2100’s iSCSI support also brought ONStor into the multiprotocol storage market.
Because its NAS gateway is compatible with other vendors’ storage, ONStor has frequently partnered with SAN companies over the years – including Fujitsu Computer Systems, Nexsan, 3PAR, Pillar and LSI.
“That’s the appealing thing for LSI,” Schulz says. “They could put ONStor in front of any arrays.”
LSI sells its SAN systems exclusively through OEMs – mainly IBM – while ONStor has its own set of partners and sells everything under its own brand. That raises an interesting set of questions:
Will LSI sell NAS only through OEMs, or will it sell NAS through the LSI or ONStor brand?
Will LSI compete with its partner IBM on the NAS front, will it try to replace NetApp as IBM’s NAS partner, or will it offer IBM an alternative NAS platform?
With ONStor’s ZFS support and its own background as Sun’s midrange SAN supplier, will LSI go after the Sun midrange storage market if Oracle changes Sun’s storage strategy?
Will LSI use ONStor’s file virtualization capabilities as part of the SVM (Storage Virtualization Manager) platform it picked up in its acquisition of StoreAge in 2006?
Hopefully LSI will begin to shed light on some of these issues next week.
An interesting little tidbit crossed my inbox yesterday – an announcement from Kroll OnTrack, which specializes in recovering damaged or unreadable hard disks (we covered some of their recovery efforts after Hurricane Katrina). According to the company’s press release, it “can now offer NetApp users a trusted and viable option to address data loss for the Data OnTap platform.”
The press release referred specifically to snapshots:
As NetApp OnTap provides users with Snapshots (automated, point-in-time backup), this new technology is critical as sometimes the snapshots are purged before the creation of a more permanent backup is created [sic] (i.e. when there are gaps between snapshots and backups) – as such, the data is lost and no longer available to the NetApp storage system. The new technology allows for the recovery of these snapshots by essentially ‘turning back the clock’ on a NetApp FAS system enabling Ontrack Data Recovery engineers to restore the data to its last Snapshot state.
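The failure mode the release describes, a snapshot being purged before any permanent backup has captured the same data, is easy to model. Here is a toy Python sketch of that window; the hourly snapshot schedule, four-hour retention and daily backup cadence are invented for illustration and are not NetApp defaults:

```python
# Toy model of the "gap between snapshots and backups" that Kroll's
# release describes. All schedule numbers are hypothetical.

SNAPSHOT_INTERVAL = 1    # snapshot taken every hour
RETENTION = 4            # snapshots purged after 4 hours
BACKUP_INTERVAL = 24     # a permanent backup runs once a day (hour 0, 24, ...)

def recoverable(write_hour: int, delete_hour: int, now: int) -> bool:
    """Can a file written at write_hour and deleted at delete_hour
    still be restored at time `now` from a snapshot or a backup?"""
    # Snapshots that captured the file: taken after the write, before the delete.
    snaps = [t for t in range(0, now + 1, SNAPSHOT_INTERVAL)
             if write_hour <= t < delete_hour]
    # A snapshot helps only if it has not been purged yet.
    if any(now - t < RETENTION for t in snaps):
        return True
    # A backup helps only if one ran while the file still existed.
    backups = [t for t in range(0, now + 1, BACKUP_INTERVAL)
               if write_hour <= t < delete_hour]
    return bool(backups)

# File created at hour 9, deleted at hour 11, checked at hour 16:
# both covering snapshots are purged and no daily backup ran in between.
print(recoverable(9, 11, 16))   # False: fell into the snapshot/backup gap
print(recoverable(9, 11, 12))   # True: the hour-10 snapshot still exists
```

Files that land in that `False` case are exactly the ones Kroll says it can now bring back by “turning back the clock” on the FAS system.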
I followed up with Kroll yesterday to find out if this is the first in a series of offerings for major storage vendors. After all, they all offer snapshots.
This was the response I got from a spokesperson:
The NetApp solution was actually developed in response to customers’ requests – a “just in time” solution. They may develop solutions for other storage vendors, but they have not had many requests at this time.
Another line that jumped out at me in the press release:
the company also offers a hardware solution beyond NetApp’s RAID-DP safeguard. While RAID-DP allows for the failure of two disks in a system, Kroll Ontrack provides an additional layer of protection when more than two drives fail and before a rebuild occurs.
I also followed up on this, to clarify whether Kroll was releasing a data protection product or if they meant something else. The response:
This may be misleading in the press release. It’s not that [Kroll has] an additional product/solution. What they are saying is that in addition to the ability to recovery from software failure (the snapshots), they are also able to recovery from Hardware failures (the RAID-DP). So when the RAID-DP fails, they can still recovery from the system as well.
When the RAID-DP fails?
If you read between the lines here, it seems like the case of purged snapshots is what drove the initial recovery from a specific customer or customers, and Kroll is now trying to advertise it as a generally available service. The snapshot issue could arguably have been caused by user error, and there’s no indication the RAID-DP service has actually been used “in anger.” But I can only imagine that for NetApp, seeing this press release must’ve been like a landlord reading about Orkin offering a special for its pest control services on one of his buildings. The implications are not explicit, but they’re there.
I didn’t realize before today that there is such a rich niche community of bloggers focused solely on watching every move Google makes.
Color me more educated after running across some detective work by two bloggers in my Google Reader today (make of that what you will) that makes a case that Google is preparing to launch a long-rumored cloud file storage service known as GDrive.
Tony Ruscoe at Google Blogoscoped described GDrive this way: “the most eagerly anticipated Google product ever, with rumors literally going back years.” In January, he pointed out a reference to a “Google Web Drive” option in a beta release of Google’s photo-sharing software, Picasa, for Mac. Ruscoe also published a post that month in which several more tantalizing hints toward a possible Google Web Drive were uncovered in cached copies of Google documents in search engines (irony, anyone?).
Fast forward to this morning, when another blog, Google Operating System (tagline: “An unofficial blog that watches Google’s attempts to move your operating system online”), posted more possible clues spotted in the latest update to the Google Docs interface:
The new interface of Google Docs, which is slowly rolled out to all users, brings the service one step closer to an online storage service. The “items by type” menu replaced “PDFs” with “Files”, suggesting that Google Docs will allow users to upload any type of files.
On the one hand, GDrive has been rumored and “impending” for years. On the other hand, with competitors like Amazon and Microsoft launching cloud storage services, it seems like a no-brainer for Google to want to compete in trendy cloud storage. But will this be the year? Stay tuned…
The maker of software that connects Mac workstations with Windows servers is launching a new product that it claims will prevent “bad Mac behavior” with data archive stub files.
Group Logic’s main product is ExtremeZ-IP, software used to connect Mac clients with Windows servers. According to CEO Reid Lewis, a problem can arise when Mac clients are attached to Windows file servers where a file archiving program is leaving stubs.
Apple’s Mac OS X operating system includes a feature for end users called Quick Look, which shows users a preview of documents in the OS X file system. According to Lewis, the call that Quick Look makes to the primary file share can make archiving software think the files are being called back from the stub location. “When the Mac tries to render a preview, the archive sees that as a read and bumps the file back up to primary storage.” It’s easy to imagine a scenario from there where a quick flip through all the contents of a folder could clog up the primary file server, Lewis added.
Group Logic’s new ArchiveConnect software, when installed on the Mac client, can provide a translation that allows for Quick Look while preventing stub files in the archive from being restored during a preview operation. Group Logic is charging $1.60 per GB of archive data addressed by Mac clients, and contemplating a per-client licensing scheme as well.
It’s a niche issue, said Brian Babineau, senior analyst with the Enterprise Strategy Group (ESG), and it would be easier for users if this kind of integration came directly from an archiving vendor rather than a third party.
However, he added, non-Windows applications remain an area that has largely been ignored in the enterprise archiving world to date. “We are all aware of the benefits file archiving can bring. However, Mac environments that need archiving need more than just HSM because the type of data that they store is usually different than your traditional Windows or Linux environment,” Babineau said. “Solutions that can support the applications which generate more content types and archive the data right from the application are more compelling from my standpoint.”
SunGard’s technical officer for cloud computing, Don Norbeck, talked with Storage Soup this afternoon about the service provider’s participation in the Distributed Management Task Force (DMTF) Open Cloud Standards Incubator and the “physics problem” that currently stands between IT and true application portability.
Storage Soup: Tell me about the standards body you joined and why…
Norbeck: DMTF has a good track record with previous initiatives. They brought VMware, Microsoft and Citrix to the table and got them to agree to include metadata to allow a base level of interoperability between them for the Virtualization Management Initiative (VMAN). The Open Virtualization Format (OVF) is similarly impressive to us.
SS: Did you just join the group this week? Is it a new initiative?
Norbeck: It’s relatively new – the group formed this April, and SunGard was part of that initial discussion. The news today is that we petitioned to be included in the leadership board and were just approved.
SS: Who else is participating in this standards effort?
Norbeck: Other members of the initiative include Cisco, EMC, VMware, Microsoft, HP, AMD, Rackspace, Savvis and Sun. Right now it’s an incubator discussion to define basic components of the cloud and how they should be administered. We don’t often participate in standards efforts, but we see extreme value as a service provider in being involved in this conversation early on.
SS: What kinds of things will the incubator be defining? What does it have to do with SunGard’s disaster recovery business?
Norbeck: Our first hypothesis is that there are going to be hundreds of different clouds out there with different characteristics – some optimized for speed, some for cost and some for availability. The cloud will serve two purposes: avoiding downtime and the expansion of infrastructure for peak demand. How much capacity you can spin up and how quickly you can fail over to a cloud data center depends on an up-front information exchange between the end user and the provider to tell how much and what to spin up for true application portability.
SS: I always thought cloud standards had more to do with interoperability between service providers – I thought the way users send data to service providers is already relatively well understood.
Norbeck: Before you can float workloads between service provider infrastructures, you have to figure out first how users move the workload beyond their firewall. That’s the first step. If we can all agree on application portability standards within that framework, we may be able to set something up where you can follow the sun from an electrical power perspective.
SS: Will the standard address how to move data over distance? Seems like that’s a hurdle VMware is trying to overcome right now, for example.
Norbeck: We’re still limited by distance. Network data transmission capacity is still a scarce resource. I’m not sure what the solution is – maybe enabling content delivery networks for branch offices so there are small bits of critical data everywhere, or leveraging some WAN acceleration technology in between. Storage is going to be the final domino to fall for the model of computing platform cloud aspires to be.
SS: How would you answer those who say it’s too early at this stage of the cloud to start imposing standards?
Norbeck: With any standards effort, the proof is in the actual utilization of the standard. This effort is more at the discussion stage, in which we’re looking to agree to language that will enable our customers to utilize us better. It’s too early to impose World Wide Web (WWW) type standards on cloud computing, but it’s not too early for the conversation.