Infosys believes Microsoft is staking out the cloud as the inevitable future of IT, and designing Azure to be a seamless bridge between on-premise and cloud so that enterprise customers can make the transition in small steps.
According to Jitendra Pal Thethi, principal architect for Microsoft business intelligence at the Indian IT giant, Microsoft's aim with Azure is to hopscotch right over the Infrastructure-as-a-Service layer of the cloud and sell what it already has (software) in the approved cloud fashion: on-demand, scalable and transparent at the hardware level. Why should the baker sell wheat, after all?
Thethi has been involved with Azure since development began, some three years ago. He said that Azure is designed to let developers carve off sections of their projects and put them in Microsoft's cloud without having to re-learn or revamp anything. Databases already developed in Microsoft SQL Server, for example, can go right into Azure's SQL Data Services, storage, processing and all, without a hitch.
“This concept [is that] not everything will be in the cloud; not everything will be on-premise—it will be a hybrid world,” he said. Thethi said businesses already using Microsoft for development “can pick off the low hanging fruit” without having to leave their comfy Microsoft environment or design an interface to a non-Microsoft cloud.
“Azure today gives you an on-premise experience… It’s something none of the other cloud providers provide,” he said. Thethi said that cloud computing will fundamentally change development and design, but it’s years away and Microsoft is well aware of that.
“The fact of the matter is that they want to get the ball rolling,” he said, and get developers comfortable with using online services in small ways before thinking bigger. “The entire architecture and development [model] is going to change,” he said, but Microsoft is betting businesses will want to move into the cloud in safe, familiar steps.
Microsoft plans to make Azure as compatible and useful as it can, reasoning that the less developers have to do, the easier it will be for them to make the switch. Some people already call Azure “on-demand Server 2008.”
Furthermore, it should be noted that Microsoft has no real advantages in delivering computing power itself; it neither makes computers nor helps people run them. Hosting companies and data centers do that, and they are already cutting a broad swath in the public cloud market.
So Redmond, by virtue of ubiquity, has the opportunity to carve out the Platform-as-a-Service territory very neatly. It already makes the software that (mostly) everyone is using; it has plenty of spare cash and plenty of big iron on the ranch for users; and it can scoop up subscribers just by being that little bit easier to use, a few cents cheaper than the competition, and willing to let enterprises come in at their own pace.
After all, Microsoft is nothing if not patient. With cloud computing, it has everything to gain here, very little to lose and an audience it doesn’t have to chase. All it has to do is make Azure run, and wait.
In a demonstration of cloud computing’s increasing stature in the real world, freshman Washington state representative Reuven Carlyle last week called for scrapping a $300 million data center in favor of cloud computing.
“We are deeply troubled by the weakness of the technical and financial support behind this decision, and fear the state is potentially making a $300 million mistake,” Carlyle said in a letter to Governor Christine Gregoire published on Carlyle’s website. Co-written with Representative Hans Dunshee, the letter was first picked up by Pacific Northwest regional news site Crosscut.com.
In a nutshell, the letter calls for a halt to a bond sale to fund the project and a review of existing cloud services, like “Google, Microsoft, Amazon or others as many companies and governments are doing today.” Further, it argues that the trend in outsourcing data and services is a fait accompli and a better use of taxpayer dollars.
Unfortunately, Carlyle’s letter sometimes reads like it was written by a jargon-happy IT vendor. To wit: “How best to efficiently and effectively move away from hardware-centric, expensive, proprietary, silos of data trapped in old databases to open, transparent, flexible, accessible, customer-oriented applications available via the Internet?” he asks.
(I think we’ve all snoozed through that PowerPoint talk, no?)
This is understandable. Carlyle comes fresh from the communications industry, where silos are not filled with grain and budgets are fine-tuned with an axe, as opposed to government, where silos are more than likely filled with grain and budgets are fed like foie gras geese.
Dunshee appears to be a more traditional politician; interestingly, he lists many unions as backers, groups likely to want state construction dollars.
It’s unclear why Carlyle and Dunshee believe the new IT infrastructure would go to waste. What’s notable, however, is that cloud is now commonplace enough that a politician will throw it out there and hold traditional IT up as the poorer model. That’s a long step in discourse from “cutting edge.”
Rackspace released the API specification under the Creative Commons license. Source for the software used by the APIs is under the MIT X11 free software license. Find it at http://github.com/rackspace and start your own cloud.
Speaking from OSCON, Rackspace’s Erik Carlin said the company would maintain the code in traditional style.
“The intention was to open it up — we’d love to get to the point where we have external committers,” said Carlin. Currently, Rackspace is the only committer (an entity that can make final changes to an open source project) for the code that’s been released. Carlin said Rackspace wanted to steer a “canonical set of bindings” on top of the project but looked forward to seeing what developers would do with it.
“I hate to create our own interface and add to the [plethora of cloud APIs], but there was nothing we could embrace,” Carlin said. As it stands, the proliferation of both open and closed cloud interfaces has been an impediment to cloud computing adoption, he said.
Going forward, Carlin said he hoped to see standards emerge that will prune out the thicket of cloud technologies and specifications, and said Rackspace will jump all over an open standard when it emerges.
Asked why Rackspace built its interface around webby REST instead of XML-y SOAP, like Amazon, Carlin said there was a trend toward web interfaces on the front end. Furthermore, there is more to a cloud than just the user-facing interface, he said. For example, issues like competing virtual machine formats and management specs still need to be hammered out.
“APIs are only half the battle,” he said.
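To give a flavor of the REST style Carlin describes, here is a minimal sketch of what a call against a Cloud Servers-style API looks like. The host, account path, and token below are invented for illustration; in the real Rackspace API, the management URL and token come back from a separate authentication request.

```python
import urllib.request

# Hypothetical endpoint and token -- illustrative only, not the literal
# Rackspace URLs, which are returned by a prior authentication call.
ENDPOINT = "https://servers.api.example.com/v1.0/123456"
TOKEN = "00000000-aaaa-bbbb-cccc-000000000000"

# A RESTful call is just an HTTP verb applied to a resource URL:
# listing servers is a plain GET on the /servers resource, with the
# auth token carried in a header. No SOAP envelope required.
req = urllib.request.Request(
    ENDPOINT + "/servers",
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
)

print(req.get_method())   # GET
print(req.full_url)
```

The contrast with SOAP is the whole point: the resource and the operation are expressed in the URL and the HTTP verb, so any HTTP client can drive the API without a WSDL or an XML envelope.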
Most weeks are pretty cloudy for me these days. However, this one was chock-full of exciting stuff. In case you missed any of it, here goes…
Rackspace Cloud API
Rackspace has three cloud offerings: Cloud Files, Cloud Sites, and Cloud Servers. Cloud Sites is its PaaS offering, which used to be called Mosso. Cloud Files is, of course, its cloud storage offering. The big question mark over Rackspace’s IaaS offering, Cloud Servers, has been its lack of an API; some people believe you can’t really be called an IaaS unless you have an API to manage the infrastructure. This week Rackspace answered that question.
Microsoft Announces Azure Pricing
This week Microsoft announced the long-awaited pricing for its new PaaS offering, Azure. A bare-bones Windows service running on Azure will cost $0.12 per hour. The big debate this week has focused on comparing that with Amazon’s EC2 Windows pricing of $0.125 per hour. The answer is, you really can’t compare. First off, Azure is a PaaS that doesn’t offer OS-level access, while Amazon is an IaaS that gives you Administrator (root) level access. Secondly, Azure applications can only run as .NET or Win32 based applications. Azure works much the way Google’s PaaS does: you can deploy your own application code, but you can’t install an already-packaged application. For example, you can’t install something like Drupal on Azure, at least not easily. One last point: Amazon EC2 Windows instances run as Windows Server 2003 only. In the end, the primary deciding factor will most likely not be price but the target application.
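As a back-of-the-envelope illustration of how small the headline price gap really is, here is the raw compute cost over one month. This assumes a 31-day (744-hour) month and ignores bandwidth, storage and transaction charges, which differ between the two services.

```python
# Back-of-envelope compute-cost comparison using the hourly rates above.
# Assumptions: a 744-hour (31-day) month; bandwidth, storage, and
# transaction charges are ignored.
AZURE_RATE = 0.12    # $/hour, Azure Windows compute
EC2_RATE = 0.125     # $/hour, EC2 Windows small instance

HOURS = 24 * 31      # one 31-day month

azure_monthly = round(AZURE_RATE * HOURS, 2)
ec2_monthly = round(EC2_RATE * HOURS, 2)

print(f"Azure: ${azure_monthly:.2f}/month")   # Azure: $89.28/month
print(f"EC2:   ${ec2_monthly:.2f}/month")     # EC2:   $93.00/month
print(f"Difference: ${round(ec2_monthly - azure_monthly, 2):.2f}")
```

A difference of under four dollars a month on raw compute underlines the point in the text: the choice between the two will hinge on the programming model, not the price.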
GSA To Build A Store Front To The Clouds
The General Services Administration is planning to launch an online storefront to enable agencies to purchase cloud computing services like Amazon Web Services. The federal CIO, Vivek Kundra, announced the plan on Wednesday.
BMC Offers A Deployment Solution For Amazon Web Services
BMC Software announced this week that it is leveraging Amazon Web Services to manage hybrid cloud environments by managing deployments to Amazon’s EC2. BMC has had a solid story for behind-the-firewall management ever since its acquisitions of BladeLogic and Remedy. Combining service management solutions with strong provisioning in a cloud environment could make this an exciting move.
Microsoft released its pricing for Azure today. It’s tough to do an apples-to-apples comparison with Amazon AWS because they are different technical models, but the CPU service seems like it will be cheaper. Keep in mind that you have to program explicitly for Azure to use the CPU service.
A few days ago I had a call with Ellen Rubin, one of the co-founders of a new cloud startup called Cloudswitch. Cloudswitch recently closed an $8M Series B funding round from Commonwealth Capital Ventures. The interesting thing is that they are still in stealth mode and have not yet released a product, yet they have generated an enormous amount of buzz and attracted a lot of money. Is the cloud really this hot, or is there more to this story? I decided to tell their story in pure David Fincher style: I will tell this curious case of Cloudswitch backwards.
- I am given the green light to talk about Cloudswitch, a new kind of cloud service that is described as a cloud broker service.
- After almost a year of ongoing discussions with Ellen, I finally get why they call it a switch. They see themselves moving workloads back and forth within the enterprise, as opposed to the concept of a cloudburst which may imply a unidirectional flow.
- Cloudswitch acquires new office space in Burlington, MA. They now have a good team of developers and managers, plus good funding, to focus on getting the product ready, and are spending time with early customers and partners.
- June 2009: they closed an $8M Series B led by Commonwealth Capital Ventures, with existing investors Matrix Partners and Atlas Venture also participating.
- They spend a lot of time working with enterprise customers and have successfully completed the pilot phase of development. They are now gearing up for a beta later this year.
- The arrival of the new CEO, John, prompted a number of venture firms that know him to express interest in doing a preemptive Series B. Although they were not planning to look for additional funding until 2010, they decided that this was a great opportunity.
- They build a core team and are fortunate to be able to bring in John McEleney as their CEO. John was formerly the CEO at SolidWorks and ComputerVision. He grew SolidWorks to over $350M in revenue and into a market leader in the CAD space. He has a great track record of scaling companies.
- Ellen pings me again in February 2009 to get me up to speed on what they are doing. I am very excited about what they are doing.
- They raised $7.4M in a Series A – first part in July 2008, second part added Atlas Venture in December 2008.
- They tried to focus on solving some of the main issues that will enable enterprises to use cloud computing: security, control and integration with the enterprise data center. Their product will be delivered as a software appliance.
- Cloudswitch is founded by Ellen Rubin and John Considine in spring of 2008, and they incubate the company at Matrix Partners. They do a ton of research asking what people think about their idea.
- I am contacted by Ellen Rubin, formerly head of marketing at Netezza, in May of 2008. Ellen asks me what I think about a Cloud Broker appliance startup idea. I am under no restriction to discuss this idea, other than my word. I decide not to divulge anything until Ellen gives me the green light.
We recently came upon some photos displaying, in fancy picture form, the growth of Amazon Web Services in terms of bandwidth usage and objects stored in Amazon S3. The results are impressive, as you’ll see below:
With S3 storage almost tripling in a year, not to mention AWS usage equally skyrocketing, the future of cloud computing at Amazon seems, as assumed, to be very bright indeed.
If anyone out there wants to challenge, confirm or comment on these numbers, we’d love to hear from you.
A few weeks ago I attended Velocity ’09 in San Jose, CA. One of the sessions used a phrase I had never heard before, and it stung me like a bee. In fact, in my opinion, this new phrase described one of the dominant themes of the conference. These sometimes-called “Internet 10” companies had figured out something the enterprise has not been able to figure out in over 30 years: managing your infrastructure is as important as managing your applications. Fortune 5000 enterprises have always paid lip service to this concept. However, they purchase tools first, thinking that is all they need in order to say their infrastructure is important. They use monitors and event managers that give them a warm and fuzzy feeling that they are doing all the right stuff. On the configuration and provisioning side, they use large monolithic distribution systems to provide software distribution and, sometimes, configuration. In the enterprise, they also raise their swords, called ITIL and COBIT, to protect their “as-important-as” peace of mind. This false sense of confidence always reminds me of the CEO who is confident that all employees are treated equally because silly motivational posters hang on all the walls. Meanwhile, his parking spot is the closest one to the door.
At Velocity, companies like Flickr, Twitter, Google, and MySpace were making the subtle point that their gods were not found in the tools they used but in the processes they used. They understood something the enterprise has never quite grasped: if the infrastructure is important to the business, then why not treat it as such? Put your process where your mouth is, not the money you spend on your tools. These companies at Velocity understood that the code that manages your infrastructure is as important as the code that runs your applications. In fact, the Velocity presenters made the point time after time that the infrastructure “code” and the application “code” need to be treated as equals. To that end, some of these companies keep application code and infrastructure code under version control in the same tool. You don’t see that in the enterprise, folks! To quote my good friend @littleidea, “WTF?”
What is infrastructure code, and how would you put it in a version control system? Yeah, yeah, sure, sure: infrastructure is all those pesky objects that “Bob” the sysadmin understands but no one else does. Yes, Virginia, these objects are the glue that makes our infrastructures work. It is this metadata that is needed to deploy, manage, and configure our infrastructures. In the enterprise they have infrastructure code too. NOT! What they have is scripts, and tons of them: Perl scripts, shell scripts, proprietary macro configuration languages that look like scripts. All of this “wannabe” metadata is scattered across all sorts of buildings and geographies. Some items are embedded in products from Tivoli, Microsoft, BMC, and HP. Others are hidden in workload manager tools. Some are managed by the operating system scheduler (e.g., cron). And last but not least, some live in Bob’s special directories on a not-so-well-documented file server. Oh yeah, and guess what: virtualized environments are usually managed by another, completely different team.
Infrastructure as Code means managing all the things that appear just after the server comes up. It also means putting a process in place that lets you better manage all these items. The funny thing is that Luke Kanies of Reductive Labs has been preparing me for the acceptance of this concept of “Infrastructure as Code” for over two years now. When I heard the phrase at Velocity, I knew exactly what they were talking about. Luke has preached this concept in a couple of my Cloud Cafe podcasts, as well as with Michael Coté and me in our IT Management podcast series. Defining the various system configuration items (user IDs, mount points, services, etc.) as objects allows an organization to better manage its infrastructure resources. Reusable objects that can be referenced as code give an enterprise an object-oriented model for managing its infrastructure. There is a beautiful analogy here: in the early 1980s we switched our applications from a functional paradigm to an object-oriented model. We still haven’t done this with our infrastructure. Are you starting to get my point?
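To give a flavor of what “configuration as objects” looks like in practice, here is a minimal sketch in Puppet’s declarative language. The resource names and values are invented for illustration; the point is that a user, a mount point, and a service become typed, reusable objects that can live in version control next to application code.

```puppet
# Hypothetical manifest: a user, a mount point, and a service declared
# as resources. Puppet converges the machine to this desired state, and
# the manifest itself is versioned like any other source file.
user { 'deploy':
  ensure => present,
  shell  => '/bin/bash',
}

mount { '/data':
  ensure  => mounted,
  device  => '/dev/sdb1',
  fstype  => 'ext3',
  options => 'defaults',
}

service { 'httpd':
  ensure => running,
  enable => true,
}
```

Contrast this with the pile of Perl and shell scripts described above: the manifest says what the infrastructure should look like, not how to get there, which is exactly the object-oriented shift the analogy points at.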
At the Velocity conference, presentation after presentation pointed out the tools that companies are using to implement this “process-first” model. Puppet and Chef are clearly dominant figures in this new IT renaissance. However, they are just the tools, not the process. You would hear things like, “Oh yeah, we use Puppet along with things like Capistrano and Nanite.” In fact, one of the vendors at the conference, ControlTier, had a nice poster describing the whole new stack that comes with the concept of Infrastructure as Code. They see the stack as a three-layer model: the lowest layer is the virtual/cloud image or the bare metal; the second is the systems configuration layer, with tools like Puppet, Chef, and Cfengine; the third is the application service deployment layer. Coincidentally, that last one is their specialty: ControlTier manages the application lifecycle for large enterprise Java applications.
Describing a new concept is always difficult and lends itself to confusion (try Googling “What is a cloud?”). Infrastructure as Code might not be the best label for this new concept. Quoting the brilliant Andrew Shafer of Reductive Labs, from an ongoing argument I am having with him on this subject:
Care less about the labels, and more about what it enables. We are moving towards enabling what we don’t have words to describe, so I expect some communication to be clumsy…
Debate or no debate, this is a very exciting time for infrastructure and I look forward to working with some of the key players in this new area.
Veteran virtual collaboration software vendor ContactOffice has given up its prime perch on the office.com domain, possibly in favor of Microsoft, which is rumored to be announcing its Google Docs killer, Office Online, next week.
According to WHOIS, a brand-protection and “IP investigator” firm called Marksmen now holds the rights to the soon-to-be eponymous domain. Marksmen, in turn, is known to purchase domains on behalf of Microsoft; it was the unwitting dupe in this little 2007 payola scandal. It doesn’t take a Wile E. Coyote to make the leap and assume that come August 1, http://office.com will sport a new coat of pale blue Web 2.0, courtesy of Clippy. (Yes, I know he’s retired, but the blood still boils at the very mention of the name, no?)
Reached by phone, ContactOffice spokesman Tom Graham would not comment on the move in any way, so it is unknown how Microsoft convinced the small firm to shuffle out of the limelight. The $60 billion Redmond firm has been known to move aggressively to crush competitors, but it has also made millionaires by buying up technologies and intellectual property as it saw fit.
Google Docs, meanwhile, with its tiny market share, hardly seems to be a competitor to Microsoft Office, but Redmond is clearly planting stakes in every cloud market it can; from reports that it will undercut Amazon Web Services (AWS) with Azure, to promoting its Dynamic Data Center Toolkit. (Surely, that should convince legions of data center operators to switch to Hyper-V!)
Cloud-based IT management provider Paglo has busted out with an interesting twist on managing network log files: Google-style browser search, cross-referenced over time. Paglo says it’s the first of its kind. It’s definitely unique. Here’s a sample of the dashboard:
Anyone who’s ever had to slog through truckloads of log files will see Paglo’s utility instantly. With its intuitive search interface and comprehensive set of data analytics, this screenshot will make admins’ mouths water.
Paglo is an example of a strange new beast we’ll probably be seeing a lot more of: a pure cloud-based management tool. To get started, simply download the open source Paglo Crawler to your network; it will start gathering data from WMI or syslog and feed it back to an individual index on Paglo’s infrastructure.
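The core idea (search over timestamped log records, filtered to a time window) can be sketched in a few lines of toy Python. Nothing here reflects Paglo’s actual index format or query language; it only illustrates why “search, cross-referenced over time” beats grepping raw files.

```python
from datetime import datetime

# Toy log store: (timestamp, line) pairs standing in for records that a
# crawler would have collected and indexed. Entries are invented.
logs = [
    (datetime(2009, 7, 20, 9, 15), "sshd[201]: Failed password for root"),
    (datetime(2009, 7, 20, 9, 16), "sshd[201]: Accepted password for admin"),
    (datetime(2009, 7, 21, 3, 2),  "kernel: disk /dev/sda1 90% full"),
]

def search(term, start, end):
    """Return log lines containing `term` within the [start, end] window."""
    return [line for ts, line in logs
            if start <= ts <= end and term.lower() in line.lower()]

# Find all sshd activity during July 20th.
hits = search("sshd", datetime(2009, 7, 20), datetime(2009, 7, 21))
print(hits)
```

A real system would replace the linear scan with an inverted index so queries stay fast across billions of records, but the user-facing contract is the same: a term plus a time range in, matching lines out.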
Paglo CTO Chris Waters said the idea for Paglo sprang from the ubiquity of search (everyone knows how to use a search engine, right?) applied to the user demand the company saw for log management.
“We know there’s latent demand in the market for logs,” he said. Since Paglo was already collecting, massaging and delivering complex data in real time, adding searchable logs to customers’ data was a natural fit. (Click here to read about folks running new kinds of databases in the cloud.)
Aside from its bravura log search tool, Paglo also has a fairly standard set of MSP features that will be familiar to any IT pro, including performance and network monitoring and patch management for Windows. It’s thin on other standard features, though, like remote desktop access or remote control. Clearly, its biggest strength is the way it aggregates network information.
Most importantly, though, Paglo requires no initial investment; it’s pure pay-as-you-go. More traditional MSPs require onsite hardware and a hefty licensing fee to get started. Waters is banking on making it cheap and easy to get started, and on scaling out Paglo’s virtualized, hosted infrastructure to keep growing. All of which, naturally, is only practical for a small business “in the cloud.”