There’s been some handwringing since late January when the open source cloud platform OpenStack suggested cutting “dead wood” from the pending next release — and that includes Microsoft’s Hyper-V.
Sure, vendors have shown excitement over the future of OpenStack and Hyper-V, but having Microsoft’s virtualization hypervisor in OpenStack doesn’t seem to matter to enterprise IT today, according to one cloud analyst.
“It’s a completely minor deal … the hypervisor support isn’t a big deal, functionally. Most users for the foreseeable future are [going to] stick with OVF [Open Virtualization Format] or similar,” he added.
Additionally, users can still run Windows as a guest operating system with OpenStack, said Chad Keck, senior director of sales at cloud-based hosting provider AppFog, who worked on OpenStack. “I don’t know anyone who is using OpenStack that is also leveraging Hyper-V,” Keck added.
That didn’t stop the discussion from getting a little shrill.
In a post to the OpenStack team mailing list, release manager Thierry Carrez described the project’s Hyper-V support as “known broken and unmaintained.”
“It sounds like a good moment to consider removing deprecated, known-buggy-and-unmaintained or useless feature code from the Essex tree,” Carrez’s post continued.
Microsoft said, however, it’s not giving up on support for the project and stressed its commitment to resolve current issues with Hyper-V and OpenStack.
Even if Microsoft drops the ball, there is little reason to worry, Brooks noted.
“By the time OpenStack is ready for prime time, it’ll probably support Hyper-V again,” he added. “If not, it will happen in a twinkling of an eye as soon as someone finds a good reason.”
Beth Pariseau also contributed to this article.
Laying out its Microsoft Product Roadmap for 2012 this week, market researcher Directions on Microsoft said it believes the company will bring the System Center management suite and Windows Azure closer together over the next few years, to the point where the two will likely merge into a single platform.
Evidence of this tighter relationship can be seen in the upcoming System Center 2012 suite, due in early spring, which has new features supporting a number of capabilities in Azure. System Center and Azure won’t be the only two getting cozier. Microsoft will also enrich Windows Server to work more hand-in-glove with Azure, said Rob Helm, managing vice president of Directions on Microsoft.
“System Center will continue its reach toward Windows Azure with Virtual Machine Manager (contained in System Center 2012) already gaining the ability to manage some Azure resources. I think Windows Server will also gain the ability to run Azure’s unique services for things like storage and authentication. This way if something deployed (on Azure) is not working out or there are security concerns, users can bring them over to Windows Server,” Helm said.
Continuing on what he sees for Azure in 2012, Helm said the cloud platform will receive two important updates this year – updates he originally expected in 2011 – that will make it more compatible with Windows Server and let users deploy applications with significantly lower upfront costs. The first will be the VM role feature, which will allow the platform to run Hyper-V virtual machines.
The second will be the delivery of Application Virtualization, better known as Server App-V, which will allow Azure to run Windows Server components it can’t today, making it easier to spin up server applications, Helm said. He added that in the second half of this year, Microsoft itself would be putting server-based apps up on Azure, namely some of its Dynamics applications, such as Dynamics NAV.
As Azure gains the ability to host virtual machines, Helm predicts it will generally function as an Infrastructure as a Service (IaaS) offering, not just as a Platform as a Service (PaaS). This evolution will bring it more directly into competition with Amazon Web Services.
“I think you will gradually see Amazon Web Services and Azure converge in terms of their capabilities,” Helm said.
Let us know what you think about this story; email Ed Scannell at firstname.lastname@example.org.
VMware and virtualization changed the face of enterprise IT. And cloud computing — in some form or another — promises to do the same.
What shape will the cloud take? It’s still too early to say for sure, but my gut tells me the cloud will be inextricably linked with Linux-inspired tools, applications and operational philosophies.
The Web 2.0 and cloud set is dominated by mainstays of the Linux ecosystem: programming languages (Ruby and Python), operating system provisioning tools (Cobbler and Foreman), configuration management and automation frameworks (Puppet and Chef) and monitoring suites (Nagios and Zabbix). Linux folks, who lament Windows’ cost, security and lack of programmability, also dominate the emerging DevOps movement.
In a roundabout way, a new Linux Foundation survey confirms my suspicions: New instances of Linux — and that has to describe anything remotely cloud-like — are overwhelmingly going toward new applications. In the past two years, the survey found, 71.6% of new Linux deployments went to brand new applications and greenfield deployments, versus 38.5% and 34.5% of new Linux instances that were derived from Windows and Unix migrations, respectively. It’s hard to change horses midstream, but less so when you’re still on the riverbank.
What kinds of new workloads are IT shops deploying on Linux? Big data, for one. Organizations that plan to add servers to support big data workloads will use Linux over Windows by a two-to-one margin (71.8% vs. 35.9%). Given big data’s open source and Linux heritage, that’s not entirely surprising, but it’s still quite telling.
Meanwhile, in the short term, the big names in cloud are hedging their bets.
Amazon, for example, recently extended its Amazon Web Services Free Usage Tier to Windows Server 2003 R2, 2008 and 2008 R2, providing developers up to 750 hours of testing time per month, for up to one year. The service was previously limited to Linux Amazon machine images, and it should be a boon to enterprise developers testing multi-tier apps that run on mixed platforms.
But at the same time, Microsoft itself is set to begin offering Linux instances on Azure, making it possible to move existing Linux apps to Redmond’s Platform as a Service (PaaS) rather than building them from scratch. I would have loved to have been a fly on the wall in that meeting.
Of course, Windows still dominates the data center. In the third quarter of 2011, Windows servers represented 49.7% of all factory revenue, compared to 18.6% for Linux servers, according to the IDC Worldwide Quarterly Server Tracker. But Linux server growth outpaced that of Windows by a healthy margin, 12.3% compared to 5.3% for Windows. Linux won’t overtake Windows anytime soon, but with cloud on the horizon, the wind is at its back.
Microsoft often has been seen as opposed to any operating system that isn’t Windows — particularly Linux. However, Redmond has been changing its attitude, in some cases even going out of its way to make room at the table for the open source OS.
In fact, if recent rumors are borne out, the company will soon add Linux to the list of OSes that the Windows Azure public cloud platform supports.
According to reports from Microsoft watcher Mary Jo Foley, Microsoft is adding support for Linux in addition to Windows Server in Windows Azure’s so-called Virtual Machine (VM) role along with other upcoming changes to its Windows Azure public cloud offering.
It will do that in part to meet the demands of larger customers who have apparently been leaning on the company over the fact that heterogeneous data centers are the rule, not the exception. Linux is a fact of life, not something to be ignored, even in the cloud.
Additionally, and perhaps a little ironically, Azure does not support several key Microsoft applications, including SharePoint Server, SQL Server, Small Business Server and Terminal Server.
The VM role has been in beta for months. It provides an easy and quick way to move an application onto Azure by simply loading it as a Virtual Hard Disk (VHD) image into a VM role. Microsoft points to the VM role as a way to run legacy applications on Azure.
However, the VM role doesn’t currently persist application state, nor does it support Linux.
Microsoft architects had apparently expected customers to build their applications on Azure’s Platform as a Service (PaaS) APIs. Writing apps from scratch is more work than running them in VMs.
“If Microsoft makes VMs stateless and even lets Linux VMs load, it would address some of [its] issues with Amazon [and other PaaS purveyors],” said Rob Sanfilippo, research vice president at analyst firm Directions on Microsoft.
If this is true, the move could help Microsoft’s public cloud story with enterprise IT.
“It’s the first non-Windows server supported by Azure [and] it broadens their offering …. If you really want to get the most out of Azure, a lot of organizations really just want to move their applications to the cloud,” Sanfilippo added.
The updated VM role capability with support for Linux and preserving application state is set to go into community technology preview, or CTP, in late March, said Foley.
Microsoft declined to comment on pending Azure futures and has not made any announcements regarding hosting Linux on Azure.
Get a bunch of cloud proponents in a room together to talk service-level agreements and things can get heated, even a little emotional. Or at least that’s how it seemed at the Cloud Standards Customer Council meeting this week in Santa Clara, Calif.
Discussion centered on the lack of (and desire for) a single cloud standard against which all cloud vendors could be compared. Think Underwriters Laboratories Inc. for the cloud. Sure, it would create a starting point for consumers to weed through the budding cloud vendor market, but the industry is still a long way from seeing a UL-like certification stamp of approval on the cloud.
So what’s an enterprise to do? Trust. And read the SLA. “You need a strong rooting of trust in the cloud to begin with, then comes the SLAs,” said CSCC member Larry Carvalho, an independent cloud consultant located in Ohio.
One problem is that there’s no single answer on what a company should look for in an SLA. Different vertical markets will need varying levels of commitment, security and uptime. And some small to medium-sized businesses may not even need to read the SLA.
“We just trusted the cloud. It was better than the alternative,” said Eric Edelson, vice president and co-owner of Fireclay Tile, a recycled tile manufacturer. Edelson helped the SMB move its outdated system, which ran on FileMaker Pro, to Salesforce.com. And he hasn’t looked back. Salesforce.com gave the then-struggling tile company a sense of order and visibility into the entire production process. And with a flat fee of about $12,000 for eight licenses (plus additional maintenance costs) and promises from Salesforce that it won’t raise fees, Edelson said he didn’t pay very close attention to the SLA.
Of course, large enterprises don’t have it that easy. They need assurance and they need to study contract terms. So with nothing but trust as the first “stamp of approval,” where’s an enterprise to turn? When asked which vendors are the most trusted, the big guys prevail: Amazon, Google, Rackspace and Salesforce.com. But we all knew that already.
Makes me wonder … How do small or new cloud vendors prove themselves in the market? And what can they do to gain your trust?
For U.S. cloud users looking for a provider with a global footprint, Tata Communications is easing its way into the Infrastructure as a Service (IaaS) market, one continent at a time.
“We have no plans to go up against Amazon or Rackspace in the U.S.,” said John Landau, senior vice president of technology at Tata. The company has data centers on the east and west coasts of North America and will stand up cloud instances in the U.S. for customers in India or Singapore who want a presence here. But that’s it for now.
“AWS is formidable,” Landau said of Amazon’s cloud business. Tata’s Instacompute looks similar to it, as do all the IaaS offerings that have popped up in Amazon’s wake; it’s multi-tenant, Xen-based and pay-by-the-hour. Roughly a thousand companies are running Instacompute in trial installations, and almost 300 are up and running in production.
The usual suspects are jumping on board with Tata’s product — users building scalable Web apps, games and portals, and those running development and test operations. It’s much the same makeup as the early adopters of virtualization.
Tata’s competition in India is pretty slim for now, according to Landau. NetMagic has an IaaS facility locally and companies in India are using Amazon Web Services, although its nearest hub is Singapore. “[AWS] will eventually get facilities here and they do a good job of education,” Landau said. But he doesn’t expect Tata’s enterprise customers to go for Amazon’s cloud.
For these customers — the bulk of Tata’s business — the firm is building a VMware enterprise cloud service. “The enterprise guys need VMware,” Landau said. Initially, Tata’s VMware and Xen clouds will be separate offerings, but eventually users will be able to see their VMs across both clouds through the same portal.
Landau expects the VMware cloud service to be up and running and available to U.S. customers in 2012. This will be the basis of Tata’s hybrid cloud offering, connecting private VMware infrastructures to Tata’s public VMware-based cloud. Tata has no plans to offer the Xen cloud as a hybrid solution.
Key to the “enterprise cloud” will be the payment system, according to Landau. “Credit cards are a painful way to manage finances and spend control.” Tata’s enterprise cloud will let administrators set budgets for each project, place a purchase order and consume services off of that. It’s an enterprise governance model, but it’s still on-demand cloud, he said.
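Landau’s description maps to a simple governance model: each project gets a purchase-order budget, and on-demand consumption draws it down. A minimal sketch of that idea; all class and method names here are my own invention, not anything from Tata’s actual service:

```python
# Hypothetical sketch of the per-project spend-control model Landau
# describes: admins set a budget per project, and on-demand usage draws
# down against the purchase order. Names are illustrative, not Tata's API.

class ProjectBudget:
    def __init__(self, project, budget):
        self.project = project
        self.budget = budget      # approved purchase-order amount, in dollars
        self.spent = 0.0

    def charge(self, hours, hourly_rate):
        """Record on-demand usage; refuse charges that would exceed the PO."""
        cost = hours * hourly_rate
        if self.spent + cost > self.budget:
            raise RuntimeError(f"{self.project}: charge of ${cost:.2f} exceeds budget")
        self.spent += cost
        return self.budget - self.spent   # remaining headroom on the PO

po = ProjectBudget("web-portal", budget=5000.00)
remaining = po.charge(hours=100, hourly_rate=0.12)   # 100 VM-hours at $0.12/hr
print(f"${remaining:.2f} left on the PO")
```

The point of the model is the refusal path: unlike a credit card, consumption simply stops when the approved amount runs out.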
Tata’s cloud technology choices
In evaluating the multitude of cloud platform products, Tata chose Cloud.com’s CloudStack OS (now owned by Citrix) to run Instacompute. Landau is on board with Citrix’s goal of absorbing OpenStack into CloudStack as the open source platform matures.
“OpenStack is still some distance from that for public cloud,” Landau said. He believes it’s at least 12 months from the functionality available from other cloud platform software like CloudStack, Eucalyptus or Abiquo, among others. Tata originally went with Cloud.com and CloudStack for its multi-hypervisor support, among other things.
Landau’s not convinced OpenStack has the muscle behind it to really succeed now. He compared it to the early days of the Linux market, before IBM and Intel got involved. “Once those companies doubled down on Linux, it became what it is today.” Dell and HP have stated their commitment to OpenStack, but it will be a struggle until these vendors really put resources behind it, he added.
Cloud test and development service provider Skytap plans to expand to other colocation facilities beyond its Savvis data center in Tukwila, Wash., next year, according to Brett Goodwin, vice president of marketing and business development at Skytap.
Savvis always seemed like an odd choice for Skytap, which received early funding from Amazon founder Jeff Bezos.
“It’s part of our 2012 plan to expand and explore our options,” Goodwin said, referring to the company’s existing colocation agreement with Savvis. He declined to discuss the matter further. Savvis was acquired by CenturyLink earlier this year.
Goodwin said Skytap has approximately 150 enterprise customers, mostly in the mid-market, using its cloud service, and their requirements, especially for storage, keep growing.
Skytap announced three new features for its cloud service this week. The first is a set of advanced notification features that alert administrators when users approach a threshold limit, rather than requiring admins to manually check how close those users are to their storage or CPU quotas.
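The alerting logic described here amounts to comparing usage against a fraction of each quota. A rough sketch of the idea; the function and resource names are illustrative, not Skytap’s API:

```python
# Minimal sketch of threshold-based usage alerts like the ones Skytap
# describes: flag any user resource that has crossed a set fraction of
# its quota, instead of requiring admins to poll usage by hand.

def check_quotas(usage, quotas, threshold=0.8):
    """Return alert strings for each resource at or above `threshold` of quota."""
    alerts = []
    for resource, used in usage.items():
        limit = quotas.get(resource)
        if limit and used / limit >= threshold:
            alerts.append(f"{resource}: {used}/{limit} ({used / limit:.0%} of quota)")
    return alerts

usage = {"storage_gb": 450, "cpu_hours": 300}
quotas = {"storage_gb": 500, "cpu_hours": 1000}
print(check_quotas(usage, quotas))   # storage is at 90%; CPU is well under
```

In a real service the alert list would feed an email or dashboard notification rather than a print statement.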
The second feature is a self-healing capability that automatically detects when a VPN connection has broken and reconnects it. This is important for users running hybrid clouds who need to maintain a connection between their private and public cloud environments.
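Self-healing of this kind reduces to a probe-and-reconnect loop. A minimal sketch, assuming placeholder probe and reconnect callables rather than Skytap’s actual implementation:

```python
# Sketch of the self-healing behavior Skytap describes: detect a dropped
# VPN tunnel and re-establish it automatically. The probe and reconnect
# callables stand in for real network checks, not Skytap's code.

def heal(probe, reconnect, max_attempts=3):
    """Return True if the tunnel is (or ends up) healthy, reconnecting as needed."""
    if probe():
        return True          # tunnel healthy; nothing to do
    for _ in range(max_attempts):
        reconnect()          # e.g. restart the tunnel daemon
        if probe():
            return True      # reconnect succeeded
    return False             # still down; escalate to a human

# Simulated flaky tunnel: down until reconnect() is called.
state = {"up": False}
print(heal(lambda: state["up"], lambda: state.update(up=True)))
```

A production watchdog would call something like `heal()` on a timer (say, every 30 seconds) and raise an alert on a False result.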
And lastly, Skytap now supports the Open Virtualization Format (OVF) for users that run multiple hypervisors in their infrastructure, beyond just VMware. This is helpful for users exporting workloads from Skytap back into a private infrastructure that runs on Xen, KVM or Microsoft Hyper-V, for example.
“Over time we’ll see ever more commoditization in the hypervisors,” said Goodwin.
I got a glimpse of the future of driving this week and it made me feel a bit queasy.
I sat in the new Tesla Motors electric car at the GigaOm Roadmap conference in San Francisco and it’s a beautiful vehicle, but it’s as much an entertainment system as it is a car. It is designed to be constantly connected to the cloud and has two iPad-sized screens, one above the other, to the right of the steering wheel. To call these screens distracting is the biggest understatement of all time.
But my experience of this new car comes at an interesting moment, just one day after a Mercedes C230 plowed into the back of me on Route 101 southbound. Traffic slowed quickly, and the driver, who was texting, was too distracted to stop in time.
Here’s what his car looked like. It’s totaled. Mine is at the shop, but the back was smashed up pretty badly.
Having too much online access and in-dash entertainment is something else to distract drivers and can only increase the risk of accidents, I think. Will motorists be allowed to drive while shopping on Amazon, reading an article or watching a video on YouTube?
Many states have banned texting while driving and I can see more regulations that prevent drivers from fully utilizing these dashboard touch screen computers while in motion.
As the bumps and bruises from the crash I was in yesterday start to fade, I hope I will remember that safety is the number one requirement in a car, not how well connected it might be to the Internet or my cloud services.
Infrastructure as a Service (IaaS) providers OpSource and Rackspace threw their hats into the cloud software ring this week, disrupting the traditional enterprise software market and other upstarts in the cloud market.
OpSource announced Cloud Software, a way for companies to buy enterprise software such as Microsoft SQL Server, SharePoint and Oracle database products on a pay-per-use basis. So far, cloud users have been able to pay pennies per hour for servers, but still had to pay the full perpetual license fee for whatever software they ran on those machines.
With this offering, users can rent Oracle or Microsoft products for a fraction of the price to buy them, as long as they are an OpSource customer.
Microsoft is moving in this direction itself with its Enterprise Agreements (EAs) and Open Value licensing, but so far we haven’t seen this kind of licensing model from Oracle. The database giant is expected to announce changes to its licensing model for Oracle Public Cloud soon.
Keao Caindec, chief marketing officer at OpSource, said he doesn’t expect these new pricing schemes for enterprise software in the cloud to cannibalize the perpetual license business. “It satisfies a different need,” he said. OpSource’s market is developers testing software in the cloud, where they only need to turn on a machine for the period of the test. He said pay-per-use would be more expensive on an annual basis than a perpetual license if you kept the machines running. A perpetual license from Oracle costs from $7,000 to $20,000 per processor, while OpSource’s cloud software runs approximately $350 per month, flat rate, for a fully fledged Oracle machine.
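Using only the figures quoted above, a quick back-of-the-envelope calculation shows where the flat monthly rate stops being cheaper than a perpetual license (support and maintenance costs ignored for simplicity):

```python
# Break-even math using the figures quoted in the article: a flat
# $350/month rental vs. a perpetual Oracle license at $7,000-$20,000
# per processor. Support costs are deliberately excluded.

def months_to_break_even(monthly_rate, license_cost):
    """First month at which cumulative rental matches or exceeds the license."""
    months = 0
    while months * monthly_rate < license_cost:
        months += 1
    return months

print(months_to_break_even(350, 7_000))    # 20 months at the low end
print(months_to_break_even(350, 20_000))   # 58 months at the high end
```

This backs up Caindec’s point: run the machine continuously for a couple of years and the perpetual license wins, but for a test that runs a few weeks, pay-per-use is far cheaper.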
There are many cloud database-only providers out there and also cloud providers that offer a database as part of their service, but OpSource said its goal is to be multi-vendor and multi-product.
Meanwhile, OpSource isn’t the only company upsetting the apple cart in the cloud software market. Rackspace, the hosting provider that morphed into a cloud provider, is now offering software to enterprises. Its Rackspace Cloud: Private Edition is a distribution of the OpenStack IaaS software that lets enterprises run their own private clouds.
The OpenStack OS is free, but Rackspace will charge users for implementation support, OpenStack updates and upgrades, performance tuning, system analysis, security patching and fixing, escalation support for engineering questions and OpenStack training for developers. Think Red Hat Linux, but for the OpenStack OS.
For Rackspace, the goal is to push wider adoption of the OpenStack platform. As companies deploy it in house and then need more resources, it becomes easier to move their OpenStack environments into Rackspace’s facilities, or into one of its partners’ data centers, like Equinix, which just announced support for Rackspace Cloud: Private Edition.
It’s a similar strategy to VMware’s vCloud business. VMware is seeding service providers with vCloud data center software that can connect to VMware infrastructure companies run in-house.
The big question now is how long all the standalone IaaS platform providers (Nimbula, Abiquo, Eucalyptus, et al.) can last. It’s a bit of a bet to be sure, but I’m guessing OpenStack is most apt to nail the ’80’ while all these small cloud plays are nailing the ’20’.
Universities, cultural heritage organizations and libraries around the world – there’s a cloud service for you now too. It’s an open source offering called DuraCloud, developed by the not-for-profit organization DuraSpace, and it’s focused on preserving important documents.
The service runs on top of the cloud storage providers Amazon S3 and Rackspace Cloud Files and, eventually, Microsoft Azure. Users can store documents, images, video, just about any content you like, and as many copies as you like, across these providers, and it’s all accessible from a single portal. Try moving content across different cloud providers today without this kind of service. It’s a royal pain. DuraCloud automatically synchronizes your copies across providers and offers a health check service to verify the integrity of your files.
There are no requirements on how your content must be structured for ingest into DuraCloud. In terms of content, DuraCloud is essentially a blob store. You can upload any bitstream, in any format, and DuraCloud can store any type of package (AIP, ZIP, TAR, etc.). And since there are no structural requirements, you can easily transfer data to DuraCloud yourself. There are three options for uploading content: the web interface, the client-side synchronization utility or the REST API.
DuraSpace started the project in 2009 and initially built it on EMC’s Atmos Online and Sun’s Cloud storage services, both of which went poof in 2010. It was a good test of the software, according to Michele Kimpton, CEO of DuraSpace, who said they were easily able to move DuraCloud to Amazon and Rackspace.
“It proves the model, you can’t rely on just one provider …Users need flexibility of providers and their data in multiple geographies,” Kimpton said.
The service is geared to the 1,200 or so academic institutions and cultural heritage organizations already using DuraSpace’s Fedora framework for building an archive and DSpace, a repository application. These hook directly into DuraCloud, although you don’t need them to use it. The service doesn’t offer any kind of security capability today, such as encryption, which is a definite downside for anyone thinking of using it for sensitive information.
And it’s not especially cheap. DuraSpace charges a subscription fee of $375 per month for running the service, which includes 500 GB of storage and access to all services on the platform. Additional storage is charged at the rate of the underlying cloud provider.
There are other preservation services out there, but so far none have taken advantage of the cloud. Chronopolis is a digital preservation service developed by the San Diego Supercomputer Center (SDSC) at UC San Diego. It takes a copy of your content and stores it offline, so you can’t see it or easily access it, but they will keep it “forever” for you. Stanford University has a service called LOCKSS (Lots of Copies Keep Stuff Safe), but you have to be a member and run a server called a LOCKSS box in your IT environment. Your box joins others in a peer-to-peer network, and if any one box goes down, you can pull your content from another LOCKSS box. Kimpton claims it doesn’t scale well and that you need specialist skills to use it.
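The LOCKSS repair idea boils down to replica failover: if one box is unreachable, read the copy from the next. A hedged sketch of that pattern, where the fetch callable stands in for a real network request:

```python
# Sketch of the peer-to-peer repair idea behind LOCKSS: content lives on
# many boxes, and when one is down a reader pulls the copy from another
# replica. The fetch callable is a placeholder for an actual network call.

def fetch_with_failover(item, replicas, fetch):
    """Try each replica in turn; return content from the first that answers."""
    failures = []
    for replica in replicas:
        try:
            return fetch(replica, item)
        except ConnectionError:
            failures.append(replica)     # box is down; try the next one
    raise RuntimeError(f"no replica could serve {item!r}; tried {failures}")
```

With enough independent replicas, the odds of every box being down at once become vanishingly small, which is the whole premise of "lots of copies keep stuff safe."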
Eventually DuraCloud will offer data mining and data analytics services for the content in its stores, and Kimpton expects someone will probably want to license it at some point for commercial purposes. “We’ll decide if we want to do that down the line,” she said. “We’re not trying to make a profit; that’s why there is trust within our community.”