Microsoft often has been seen as opposed to any operating system that isn’t Windows — particularly Linux. However, Redmond has been changing its attitude, in some cases even going out of its way to make room at the table for the open source OS.
In fact, if recent rumors are borne out, the company will soon add Linux to the list of operating systems that the Windows Azure public cloud platform supports.
According to reports from Microsoft watcher Mary Jo Foley, Microsoft is adding support for Linux, alongside Windows Server, in Windows Azure’s so-called Virtual Machine (VM) role, among other upcoming changes to its public cloud offering.
It will do that in part to meet the demands of larger customers who have apparently been leaning on the company over the fact that heterogeneous data centers are the rule, not the exception. Linux is a fact of life, not something to be ignored, even in the cloud.
Additionally, and perhaps a little ironically, Azure does not support several key Microsoft applications, including SharePoint Server, SQL Server, Small Business Server and Terminal Server.
The VM role has been in beta for months. It provides an easy and quick way to move an application onto Azure by simply loading it as a Virtual Hard Disk (VHD) image into a VM role. Microsoft points to the VM role as a way to run legacy applications on Azure.
However, the VM role doesn’t currently persist application states nor does it support Linux.
Microsoft architects had apparently expected customers to build their applications on Azure’s Platform as a Service (PaaS) APIs. Writing apps from scratch is more work than running them in VMs.
“If Microsoft makes VMs stateless and even lets Linux VMs load, it would address some of [its] issues with Amazon [and other PaaS purveyors],” said Rob Sanfilippo, research vice president at analyst firm Directions on Microsoft.
If this is true, the move could help Microsoft’s public cloud story with enterprise IT.
“It’s the first non-Windows server supported by Azure [and] it broadens their offering …. If you really want to get the most out of Azure, a lot of organizations really just want to move their applications to the cloud,” Sanfilippo added.
The updated VM role capability with support for Linux and preserving application state is set to go into community technology preview, or CTP, in late March, said Foley.
Microsoft declined to comment on pending Azure futures and has not made any announcements regarding hosting Linux on Azure.
Get a bunch of cloud proponents in a room together to talk service-level agreements and things can get heated and even a little emotional. Or at least that’s how it seemed at the Cloud Standards Customer Council meeting this week in Santa Clara, Calif.
Discussion centered on the lack of (and desire for) a single cloud standard against which all cloud vendors can be compared. Think Underwriters Laboratories Inc. for the cloud. Sure, it would create a starting point for consumers to weed through the budding cloud vendor market, but the industry is still a long way from seeing a UL-like certification stamp of approval on the cloud.
So what’s an enterprise to do? Trust. And read the SLA. “You need a strong rooting of trust in the cloud to begin with, then comes the SLAs,” said CSCC member Larry Carvalho, an independent cloud consultant located in Ohio.
One problem is that there’s no single answer on what a company should look for in an SLA. Different vertical markets will need varying levels of commitment, security and uptime. And some small to medium-sized businesses may not even need to read the SLA.
“We just trusted the cloud. It was better than the alternative,” said Eric Edelson, vice president and co-owner of Fireclay Tile, a recycled tile manufacturer. Edelson helped the SMB move its outdated system, running on FileMaker Pro, to Salesforce.com. And he hasn’t looked back. Salesforce.com gave the then-struggling tile company a sense of order and visibility into the entire production process. And for a flat fee of about $12,000 for eight licenses (plus additional maintenance costs), with promises from Salesforce that it won’t raise fees, Edelson said he didn’t pay very close attention to the SLA.
Of course, large enterprises don’t have it that easy. They need assurance and they need to study contract terms. So with nothing but trust as the first “stamp of approval,” where’s an enterprise to turn? When asked which vendors are the most trusted, the big guys prevail: Amazon, Google, Rackspace and Salesforce.com. But we all knew that already.
Makes me wonder … How do small or new cloud vendors prove themselves in the market? And what can they do to gain your trust?
For U.S. cloud users looking for a provider with a global footprint, Tata Communications is easing its way into the Infrastructure as a Service (IaaS) market, one continent at a time.
“We have no plans to go up against Amazon or Rackspace in the U.S.,” said John Landau, senior vice president of technology at Tata. The company has data centers on the east and west coasts of North America and will stand up cloud instances in the U.S. for customers in India or Singapore who want a presence here. But that’s it for now.
“AWS is formidable,” Landau said of Amazon’s cloud business. Tata’s Instacompute looks similar to it, as do all the IaaS offerings that have popped up in Amazon’s wake; it’s multi-tenant, Xen-based and pay-by-the-hour. Roughly a thousand companies are running Instacompute in trial installations, and almost 300 are up and running in production.
The usual suspects are jumping on board with Tata’s product: users building scalable Web apps, games and portals, and those running development and test operations. It’s much the same makeup as the early adopters of virtualization.
Tata’s competition in India is pretty slim for now, according to Landau. NetMagic has an IaaS facility locally and companies in India are using Amazon Web Services, although its nearest hub is Singapore. “[AWS] will eventually get facilities here and they do a good job of education,” Landau said. But he doesn’t expect Tata’s enterprise customers to go for Amazon’s cloud.
For these customers, the bulk of Tata’s business, the firm is building a VMware enterprise cloud service. “The enterprise guys need VMware,” Landau said. Initially, Tata’s VMware and Xen clouds will be separate offerings, but eventually users will be able to see their VMs across both clouds through the same portal.
Landau expects the VMware cloud service to be up and running and available to U.S. customers in 2012. This will be the basis of Tata’s hybrid cloud offering, connecting private VMware infrastructures to Tata’s public VMware-based cloud. Tata has no plans to offer the Xen cloud as a hybrid solution.
Key to the “enterprise cloud” will be the payment system, according to Landau. “Credit cards are a painful way to manage finances and spend control.” Tata’s enterprise cloud will let administrators set budgets for each project, place a purchase order and consume services off of that. It’s an enterprise governance model, but it’s still on-demand cloud, he said.
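The governance model Landau describes, budgets and purchase orders instead of credit cards, reduces to a simple rule: debit every charge against a pre-approved amount and refuse charges that would overrun it. Here is a minimal sketch with hypothetical names, not Tata's actual system:

```python
# A minimal sketch (hypothetical names) of purchase-order style spend
# control: each project gets a budget, and service consumption is
# debited against it rather than billed to a credit card.

class ProjectBudget:
    """Tracks cloud spend against a pre-approved purchase order."""

    def __init__(self, project: str, purchase_order: float):
        self.project = project
        self.budget = purchase_order
        self.spent = 0.0

    def consume(self, cost: float) -> bool:
        """Record a charge; refuse it if it would exceed the budget."""
        if self.spent + cost > self.budget:
            return False  # an admin must raise the purchase order first
        self.spent += cost
        return True

    def remaining(self) -> float:
        return self.budget - self.spent
```

The point of the design is that consumption stays on-demand, but the ceiling is set up front by the enterprise, not discovered on a monthly statement.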
Tata’s cloud technology choices
In evaluating the multitude of cloud platform products, Tata chose Cloud.com’s CloudStack OS (now owned by Citrix) to run Instacompute. Landau is on board with Citrix’s goal of absorbing OpenStack into CloudStack as the open source platform matures.
“OpenStack is still some distance from that for public cloud,” Landau said. He believes it’s at least 12 months from the functionality available from other cloud platform software like CloudStack, Eucalyptus or Abiquo, among others. Tata originally went with Cloud.com and CloudStack for its multi-hypervisor support, among other things.
Landau’s not convinced OpenStack has the muscle behind it to really succeed now. He compared it to the early days of the Linux market, before IBM and Intel got involved. “Once those companies doubled down on Linux, it became what it is today.” Dell and HP have stated their commitment to OpenStack, but it will be a struggle until these vendors really put resources behind it, he added.
Cloud test-and-dev service provider Skytap plans to expand to other colocation facilities beyond its Savvis data center in Tukwila, Washington, next year, according to Brett Goodwin, vice president of marketing and business development at Skytap.
Savvis always seemed like an odd choice for Skytap, which received funding early on from Amazon founder Jeff Bezos.
“It’s part of our 2012 plan to expand and explore our options,” Goodwin said, referring to the company’s existing colocation agreement with Savvis. He declined to elaborate. Savvis was acquired by CenturyLink earlier this year.
Goodwin said Skytap has approximately 150 enterprise customers, mostly in the mid-market, using its cloud service, and their requirements, especially for storage, keep growing.
Skytap announced three new features for its cloud service this week. The first is a set of advanced notification features that alert administrators when users approach a threshold, rather than forcing admins to manually check how close users are to their storage or CPU quotas.
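As a rough illustration of this kind of threshold alerting (the 80% threshold and the resource names below are assumptions for the sketch, not Skytap's actual implementation):

```python
# A minimal sketch of quota-threshold alerting: warn when a user's
# storage or CPU usage approaches their quota. The 80% threshold and
# resource names are illustrative assumptions.

ALERT_THRESHOLD = 0.80  # notify at 80% of quota

def check_quotas(usage: dict, quotas: dict) -> list:
    """Return alert messages for any resource nearing its quota."""
    alerts = []
    for resource, used in usage.items():
        quota = quotas.get(resource)
        if quota and used / quota >= ALERT_THRESHOLD:
            alerts.append(
                f"{resource}: {used}/{quota} ({used / quota:.0%} of quota)"
            )
    return alerts
```

A provider-side job would run a check like this periodically and push the resulting alerts to admins, instead of waiting for users to hit a hard limit.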
The second feature is a self-healing capability that automatically detects when a VPN connection has broken and reconnects it. This is important for hybrid cloud users who need to maintain a connection between their private and public cloud environments.
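The self-healing idea reduces to a watchdog loop: probe the tunnel on a schedule and reconnect whenever the probe fails. A minimal sketch, with the probe and reconnect callables as stand-ins for real VPN plumbing:

```python
# A minimal sketch of a self-healing connection watchdog. `is_up` and
# `reconnect` are hypothetical stand-ins for real VPN health-check and
# tunnel-restart logic.

import time

def watchdog(is_up, reconnect, probes: int, interval: float = 0.0) -> int:
    """Run `probes` health checks; call reconnect() whenever the link
    is down. Returns the number of reconnect attempts made."""
    attempts = 0
    for _ in range(probes):
        if not is_up():
            reconnect()
            attempts += 1
        if interval:
            time.sleep(interval)
    return attempts
```

In production the loop would run indefinitely with a real probing interval; the fixed probe count here just makes the sketch testable.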
Lastly, Skytap now supports the Open Virtualization Format (OVF) for users that run multiple hypervisors beyond just VMware. This is helpful for users exporting workloads from Skytap back into a private infrastructure that runs on Xen, KVM or Microsoft Hyper-V, for example.
“Over time we’ll see ever more commoditization in the hypervisors,” said Goodwin.
I got a glimpse of the future of driving this week and it made me feel a bit queasy.
I sat in the new Tesla Motors electric car at the GigaOm Roadmap conference in San Francisco, and it’s a beautiful vehicle, but it’s as much an entertainment system as it is a car. It is designed to be constantly connected to the cloud and has two iPad-sized screens, one above the other, to the right of the steering wheel. To call these screens distracting is the biggest understatement of all time.
But my experience of this new car comes at an interesting moment, just one day after a Mercedes C230 plowed into the back of me on Route 101 southbound. Traffic slowed quickly, and the driver behind me, distracted by texting, didn’t stop in time.
Here’s what his car looked like. It’s totaled. Mine is at the shop but the back was smashed up pretty bad.
Too much online access and in-dash entertainment is one more thing to distract drivers, and can only increase the risk of accidents, I think. Will motorists be allowed to drive while shopping on Amazon, reading an article or watching a video on YouTube?
Many states have banned texting while driving and I can see more regulations that prevent drivers from fully utilizing these dashboard touch screen computers while in motion.
As the bumps and bruises from the crash I was in yesterday start to fade, I hope I will remember that safety is the number one requirement in a car, not how well connected it might be to the Internet or my cloud services.
Infrastructure as a Service (IaaS) providers OpSource and Rackspace threw their hats into the cloud software ring this week, disrupting the traditional enterprise software market as well as other upstarts in the cloud market.
OpSource announced Cloud Software, a way for companies to buy enterprise software such as Microsoft SQL Server, SharePoint and Oracle database products on a pay-per-use basis. So far, cloud users have been able to pay pennies per hour for servers, but still had to pay the full perpetual license fee for whatever software they ran on those machines.
With this offering, users can rent Oracle or Microsoft products for a fraction of the price to buy them, as long as they are an OpSource customer.
Microsoft is moving in this direction itself with its Enterprise Agreements (EAs) and Open Value licensing, but so far we haven’t seen this kind of licensing model from Oracle. The database giant is expected to announce changes to its licensing model for Oracle Public Cloud soon.
Keao Caindec, chief marketing officer at OpSource, said he doesn’t expect these new pricing schemes for enterprise software in the cloud to cannibalize the perpetual license business. “It satisfies a different need,” he said. OpSource’s market is developers testing software in the cloud, where they only need to turn on a machine for the duration of the test. He said pay-per-use would be more expensive on an annual basis than a perpetual license if you kept the machines running: a perpetual license from Oracle costs from $7,000 to $20,000 per processor, while OpSource’s cloud software runs approximately $350 per month, flat rate, for a fully fledged Oracle machine.
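Caindec's point is easy to sanity-check with back-of-the-envelope arithmetic, using the list prices quoted above (the break-even framing itself is our illustration, not OpSource's):

```python
# Back-of-the-envelope comparison of OpSource's $350/month flat rate
# against a $7,000-$20,000 perpetual Oracle per-processor license.
# Prices are from the article; the break-even framing is illustrative.

MONTHLY_RENTAL = 350    # USD, OpSource flat rate
PERPETUAL_LOW = 7_000   # USD, low end of Oracle's license price
PERPETUAL_HIGH = 20_000 # USD, high end

# Months of continuous rental before renting costs more than buying:
breakeven_low = PERPETUAL_LOW / MONTHLY_RENTAL    # 20 months
breakeven_high = PERPETUAL_HIGH / MONTHLY_RENTAL  # ~57 months

print(f"Break-even: {breakeven_low:.0f} to {breakeven_high:.0f} months")
```

In other words, a test machine would have to run continuously for roughly two to five years before the perpetual license wins, which is why short-lived dev-and-test workloads come out well ahead on pay-per-use.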
There are many cloud database-only providers out there and also cloud providers that offer a database as part of their service, but OpSource said its goal is to be multi-vendor and multi-product.
Meanwhile, OpSource isn’t the only company upsetting the apple cart in the cloud software market. Rackspace, the hosting provider that morphed into a cloud provider, is now offering software to enterprises: Rackspace Cloud: Private Edition, its distribution of the OpenStack IaaS software for enterprises to run their own private clouds.
The OpenStack OS is free, but Rackspace will charge users for implementation support, OpenStack updates and upgrades, performance tuning, system analysis, security patching and fixing, escalation support for engineering questions and OpenStack training for developers. Think Red Hat Linux, but for the OpenStack OS.
For Rackspace, the goal is to push wider adoption of the OpenStack platform. As companies deploy it in house and then need more resources, it becomes easier to move their OpenStack environments into Rackspace’s facilities, or into one of its partners’ data centers, like Equinix, which just announced support for Rackspace Cloud: Private Edition.
It’s a similar strategy to VMware’s vCloud business: VMware is seeding service providers with vCloud data center software that can connect to the VMware infrastructure that companies run in-house.
The big question now is how long the standalone IaaS platform providers (Nimbula, Abiquo, Eucalyptus, et al.) can last. It’s a bit of a bet to be sure, but I’m guessing OpenStack is most apt to nail the “80” while all these small cloud plays are nailing the “20.”
Universities, cultural heritage organizations and libraries around the world: there’s a cloud service for you now too. It’s an open source offering called DuraCloud, developed by the not-for-profit organization DuraSpace and focused on preserving important documents.
The service runs on top of cloud storage providers Amazon S3 and Rackspace Cloud Files and, eventually, Microsoft Azure. Users can store documents, images, video and just about any other content, in as many copies as they like, across these providers, all accessible from a single portal. Try moving content across different cloud providers today without this kind of service; it’s a royal pain. DuraCloud automatically synchronizes your copies across providers and offers a health check service to verify the integrity of your files.
There are no requirements on how your content must be structured for ingest into DuraCloud. In terms of content, DuraCloud is essentially a blob store: you can upload any bitstream, in any format. DuraCloud can also store any type of package (e.g., AIP, ZIP, TAR). And since there are no structural requirements, you can easily transfer data to DuraCloud yourself. There are three options for uploading content: the web interface, the client-side synchronization utility, or the REST API.
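The health check mentioned above boils down to comparing a locally computed checksum against the checksum each provider reports for its copy. A minimal sketch, where the per-provider checksums stand in for what a real provider API would return:

```python
# A minimal sketch of a cross-provider integrity check: compare a
# locally computed MD5 against the checksum each storage provider
# reports for its copy. The provider_checksums dict is a hypothetical
# stand-in for real provider API responses.

import hashlib

def local_md5(data: bytes) -> str:
    """Checksum of the reference copy."""
    return hashlib.md5(data).hexdigest()

def verify_copies(data: bytes, provider_checksums: dict) -> dict:
    """Return {provider: True/False} for each stored copy's integrity."""
    expected = local_md5(data)
    return {name: md5 == expected
            for name, md5 in provider_checksums.items()}
```

A copy whose checksum disagrees can then be re-synchronized from a provider whose copy still verifies, which is the whole point of keeping replicas with more than one vendor.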
DuraSpace started the project in 2009 and initially built it on EMC’s Atmos Online and Sun’s Cloud storage services, both of which went poof in 2010. It was a good test of the software, according to Michele Kimpton, CEO of DuraSpace, who said they were easily able to move DuraCloud to Amazon and Rackspace.
“It proves the model; you can’t rely on just one provider … Users need flexibility of providers and their data in multiple geographies,” Kimpton said.
The service is geared to the 1,200 or so academic institutions and cultural heritage organizations already using DuraSpace’s Fedora framework for building an archive and DSpace, a repository application. These hook directly into DuraCloud, although neither is required to use it. The service doesn’t offer any security capability today, such as encryption, which is a definite downside for anyone thinking of using it for sensitive information.
And it’s not especially cheap: DuraSpace charges a subscription fee of $375 per month for running the service, which includes 500 GB of storage and access to all services on the platform. Additional storage is charged at the underlying cloud provider’s rate.
There are other preservation services out there, but so far none have taken advantage of the cloud. Chronopolis is a digital preservation service developed by the San Diego Supercomputer Center (SDSC) at UC San Diego. It takes a copy of your content and stores it offline, so you can’t see it or easily access it, but they will keep it “forever” for you. Stanford University has a service called LOCKSS (Lots of Copies Keep Stuff Safe), but you have to be a member and run a server called a LOCKSS box in your IT environment. Your box joins others in a peer-to-peer network, and if any one box goes down, you can pull your content from another LOCKSS box. Kimpton claims LOCKSS doesn’t scale well and requires specialist skills to use.
Eventually DuraCloud will offer data mining and analytics services for the content in its stores, and Kimpton expects someone will probably want to license it at some point for commercial purposes. “We’ll decide if we want to do that down the line,” she said. “We’re not trying to make a profit; that’s why there is trust within our community.”
Just this week, two different cloud marketplaces went live: Equinix’s cloud marketplace and Synnex’s CloudSolv. Both claim to bring buyers and sellers of cloud services together under one roof, reducing network costs and improving performance of the services available.
Equinix says companies already housed in one of its International Business Exchanges can acquire services from each other, with Equinix facilitating the connection in the middle. Instead of conducting an extensive search for a provider of a particular service, only to discover it is right next door in an Equinix data center, users can now search the marketplace and instantly see what’s available. Equinix claims 4,000 companies are housed in its data centers.
A key advantage of finding a provider inside your own data center is lower bandwidth costs and better network performance. There’s no need to shell out money for fat pipes to a provider that is right next door, and you can expect much better performance from your neighbor than from someone a million miles away.
Similarly, Synnex, the third largest distributor of IT products in the US, launched a cloud application marketplace for its resellers. It runs on a new product from FullArmor Corp. called AppPortal Marketplace.
One of the reasons these cloud marketplaces, sometimes called cloud brokerage services, are taking off is the sheer number of companies launching cloud-based services. There are literally thousands of them, from industry-specific vertical clouds that meet regulatory requirements, to cloud-based IT services (security being a key one), to business functions delivered as a service (e.g., CRM from Salesforce.com), to IaaS and PaaS offerings.
Making sense of who does what and then figuring out which provider to go with is a huge undertaking. These marketplaces go some way toward simplifying that process and hopefully providing a better service as the providers are in a trusted community.
The downside to marketplaces is the fees the marketplace owner might charge to be in its club, so to speak (e.g., eBay), and the power it hands the marketplace operator. Amazon, for example, monitors all the sales data on its site and uses that information to cherry-pick popular products in categories it doesn’t normally stock, sometimes undercutting other sellers in its marketplace. It will be interesting to see whether the cloud marketplaces evolve this way too.
The OpenStack project’s release of Diablo a few weeks ago invited comparisons to adolescence, but after attending the OpenStack Conference in Boston last week, that analogy strikes me as premature. OpenStack is more like a precocious first-born toddler from whom the family expects great things, but who still has a long way to go.
No doubt about it, OpenStackers have incredibly high hopes for their open source cloud software stack. Take Chris Kemp, founder and CEO of Nebula, which is building an OpenStack-based private cloud deployment package. “OpenStack is more than just a platform, it’s turning into an economy,” Kemp said, “… that will power the next generation of computing.” If successful, “I really think we have an opportunity to change the world.”
At just one year, OpenStack’s achievements are impressive. At the show, Alejandro Comisario, infrastructure senior engineer at MercadoLibre, an e-commerce provider focused on Latin America, described how his firm runs 6,000 VMs in a production cloud on top of OpenStack. Meanwhile, researchers from the University of Melbourne told me they are developing a national OpenStack cloud for use by Australian research universities. Clearly, OpenStack has gained a lot of traction in a very short time.
But OpenStack is far from a done deal. The newly formed OpenStack Foundation, which took the reins from Rackspace, is still grappling with fundamental questions about OpenStack’s identity and modus operandi. In a panel session entitled ‘Winning OpenStack’s Second Year,’ panelists from companies including Citrix, HP, Rackspace, Nebula and Cisco discussed issues like whether it should publish a roadmap; whether to stick with Infrastructure as a Service or extend to Platform as a Service; how to ensure code quality; and how to engage end users.
These are all foundational questions which commercial cloud platform providers have, by and large, already answered for themselves.
“The troublesome twos are a difficult time for parents,” said Tim Hill, group leader of the IT/OIS group at CERN, which has experimented with the platform. “Hopefully the OpenStack Foundation will have an easier time.”
IBM has acquired Platform Computing, a score for the commodity private cloud champions over those pushing expensive, proprietary cloud-in-a-box systems.
Historically a strong player in the high performance computing market, Platform switched its focus from grid management software to private cloud management in 2009. Its software enables IT shops to create Infrastructure as a Service in-house from multiple hypervisors, provisioning tools and commodity hardware.
With the acquisition of Platform, Big Blue is hedging its bets on which way users will go to build private clouds. One approach is to lash together x86 servers with some virtualization, automation and management software; the alternative is to buy an expensive cloud-in-a-box system, like IBM’s Workload Deployer hardware appliance, where the software and hardware are pre-integrated. Oracle, EMC, Cisco, VMware, NetApp and HP all have cloud-in-a-box systems.
Platform’s approach has won it over 2,000 customers including 23 of the top 30 largest global enterprises. CERN, Citigroup, Infineon, Pratt & Whitney, Red Bull Racing, Sanger Institute, Statoil and the University of Tokyo all use the software to manage commodity clusters.
Other vendors offering cloud platform management software include Embotics, Eucalyptus, Abiquo, Gale Technologies and VMware among others.
Platform Computing has approximately 500 employees worldwide, who will join IBM’s Systems and Technology Group. Platform was privately held and thought to be profitable, thanks to its range of products and its market leadership in HPC, not just Platform ISF, its cloud management application.