IBM introduced its SmartCloud Orchestrator tool on Monday amid a lot of verbiage about supporting open standards that we’ve all heard before. But if you scratch the surface of the product it put into beta, it’s actually quite interesting.
SmartCloud Orchestrator is software based on OpenStack APIs including Nova for compute, Quantum for networking and Cinder for block storage, which also throws in what IBM calls patterns (sort of like templates) for application deployment.
But the most interesting feature may be an adapter IBM engineers have written that translates between OpenStack and Amazon cloud APIs. With the adapter, users need to know only OpenStack, a kind of “lingua franca” of the cloud, according to Mohamed Abdula, director of SmartCloud foundation strategy and portfolio management for IBM.
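IBM hasn’t published the adapter’s internals, but the idea of translating between the two APIs is easy to sketch. The mapping below is a hypothetical illustration in Python; the function name, flavor table and field names are assumptions, not IBM’s actual code:

```python
# Hypothetical sketch: translating an OpenStack-style "create server"
# request into the equivalent EC2 RunInstances parameters.
# Names and mappings are illustrative, not IBM's actual adapter.

# Rough equivalence between OpenStack flavors and EC2 instance types
FLAVOR_TO_EC2_TYPE = {
    "m1.small": "m1.small",
    "m1.medium": "m1.medium",
}

def openstack_to_ec2(create_server_body):
    """Map a Nova-style 'create server' body onto EC2 RunInstances parameters."""
    server = create_server_body["server"]
    return {
        "ImageId": server["imageRef"],                        # image to boot
        "InstanceType": FLAVOR_TO_EC2_TYPE[server["flavorRef"]],
        "MinCount": 1,
        "MaxCount": server.get("max_count", 1),               # default: one VM
    }
```

A translation layer like this is why users could keep speaking OpenStack while the adapter handles the Amazon side.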
Another user-friendly feature is compatibility with Opscode’s Chef, such that clients would not need to throw away or rewrite Chef recipes; SmartCloud Orchestrator can reuse them from within its UI, Abdula said.
Amazon and OpenStack are often put at odds with one another, and it is OpenStack’s stated mission to compete with AWS. At the same time, however, compatibility with AWS APIs is a foundational concept within OpenStack, as is automation and portability between clouds.
IBM looks to put its money where its mouth is when it comes to those concepts.
Rackspace is jockeying for position aboard the advanced infrastructure services bandwagon with its acquisition today of Object Rocket, a MongoDB Database as a Service provider.
Rackspace is hardly the first to offer a Database as a Service (DBaaS) or even MongoDB as a service — other vendors like Amazon Web Services (AWS) and SoftLayer have beaten it to the punch. But, the newly merging companies claim, the Object Rocket service is faster and offers more predictable performance than competitors.
This notion is further detailed in a blog post by Rackspace’s DevOps team that features performance benchmarks; the post claims that Object Rocket’s service showed more consistent throughput and consistently low latency compared with two other MongoDB services running on — you guessed it — Amazon.
“It’s basically become a standard see-saw,” is how Carl Brooks, an analyst at 451 Group, summed things up today. “AWS launches a crazy, interesting new service, customers try it out, find the performance is non-deterministic, another provider says, ‘Hey, I can do that a jillion times better on my iron!’ and the world continues to revolve.”
For those not in the know, MongoDB is among the most rapidly growing NoSQL databases, which are mainly used in large Web applications that require rapid access to objects or documents that are delivered whole, rather than assembled out of pieces located in the rows and columns of a traditional relational database. It’s used by eBay, Disney, Foursquare and other household names, though none of those blue chips are Object Rocket customers, specifically.
It’s unclear so far how Rackspace’s customer base will respond. A few calls I put in today to IT pros who run on Rackspace produced long pauses and a disinclination to comment on the new offering until more is known about it.
Could this acquisition be filling in a corner case, rather than appealing to the meat-and-potatoes enterprises among Rackspace’s customers? Time, of course, will tell…
The idea of migrating existing workloads to public clouds got a bit of a boost earlier this month when Racemi disclosed its migration software would support SoftLayer’s popular CloudLayer platform. Racemi sweetened the deal with a time-limited offer of $99 per migration.
Rather than cost savings, the more important aspect of the deal is SoftLayer’s “bare-metal cloud” approach, which lets customers customize their hardware infrastructure, from processors and storage to high-speed networking. What also enhances the company’s offering is that CloudLayer doesn’t require any lengthy contractual commitment from customers, who pay only for the resources they need.
IT shops commonly voice objections about moving workloads to the public cloud for several reasons — from security risks to unknown ROI factors to the paranoia of moving their mission-critical data outside their four walls. These fears often turn into inertia, delaying decisions to move to the public cloud indefinitely.
SoftLayer’s approach could give some consumers more confidence to move forward by providing them more control over what hardware infrastructure to choose and just how much they want to pay for it.
The bare-metal cloud approach takes the hypervisor out of the mix, which can increase the raw processing power of customers’ hardware infrastructure. This approach may help not only those suffering from inertia about moving to public clouds; it may also give incentive to the growing number of shops with existing cloud implementations that are thinking about handling “big data” and large databases there.
CloudLayer is made up of virtual servers, remote storage and a content delivery network. Each CloudLayer service can work in standalone mode or be integrated with a number of dedicated servers and automated services using one private network and management system.
“We think it can be this easy to create and control a hybrid computing environment, for instance, that is interoperable,” said Marc Jones, vice president of product innovation at SoftLayer.
What might also boost user confidence in this approach is that SoftLayer is one of the largest privately held cloud infrastructure providers, with 13 data centers, and that it occasionally picks off a few Amazon Web Services customers.
“We don’t aggressively try to steal Amazon customers, but we do pick some up, particularly those that have performance issues where they need high disk I/O and higher network speeds,” Jones said.
For its part, Racemi updated its Cloud Path Software as a Service (SaaS) offering and DynaCenter on-premises software to support physical and virtual server migrations to CloudLayer. Customers can also automatically migrate cloud instances from other cloud providers to CloudLayer.
It turns out that a headline I wrote for SearchCloudComputing.com last week about the new HP Cloud Compute service was only half right.
The headline calls out that HP has undercut Rackspace with its per-hour pricing of 4 cents (Rackspace’s on-demand offering is priced at 6 cents per hour). But as pointed out today by @stackgeek on Twitter, that headline, as well as these paragraphs in the story:
Prices for Hewlett-Packard’s (HP) Cloud Compute service start at four cents per hour for a small instance with 1 GB of RAM – 2 cents lower than the price for Rackspace’s 1 GB instance. Amazon Web Services’ (AWS) Reserved Small Instance, which comes with 1.7 GB of memory, costs 3.9 cents per hour.
While AWS still offers the best deal, HP’s pricing for the fledgling service might attract Rackspace customers.
…are not totally accurate.
I re-checked the Amazon pricing page, and @stackgeek is correct in pointing out the $69 one-time upfront cost for a small, lightly utilized one-year Reserved Instance. While the 3.9-cent hourly cost without this fee would make Amazon cheapest on a yearly basis, at $341.64, the fee makes Amazon’s cost per year $410.64. HP’s cost per year at 4 cents an hour is $350.40 and Rackspace’s price per year at 6 cents an hour is $525.60.
@stackgeek is also correct in pointing out that the original comparison was between Amazon’s Reserved Instance and HP’s and Rackspace’s on-demand instances. The pricing for an On-Demand Instance on Amazon is 6.5 cents per hour, half a cent more expensive than Rackspace. That makes the pricing for an Amazon On-Demand Instance $47.45 a month, or $569.40 per year.
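For readers who want to check the math, here is the arithmetic behind those figures, assuming a 365-day (8,760-hour) year and the hourly rates quoted above:

```python
# Yearly cost comparison from the published hourly rates (USD).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

aws_reserved_hourly = 0.039   # 1-year light-utilization Reserved Instance rate
aws_reserved_fee = 69.00      # one-time upfront fee for that Reserved Instance
aws_on_demand_hourly = 0.065
hp_hourly = 0.04
rackspace_hourly = 0.06

aws_reserved_yearly = aws_reserved_hourly * HOURS_PER_YEAR + aws_reserved_fee  # $410.64
hp_yearly = hp_hourly * HOURS_PER_YEAR                                         # $350.40
rackspace_yearly = rackspace_hourly * HOURS_PER_YEAR                           # $525.60
aws_on_demand_yearly = aws_on_demand_hourly * HOURS_PER_YEAR                   # $569.40
aws_on_demand_monthly = aws_on_demand_yearly / 12                              # $47.45
```

Run straight through, the numbers line up with the corrected story: HP’s $350.40 beats Amazon’s $410.64 reserved total, and Amazon on demand is the most expensive of the lot.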
So, HP’s new service actually undercuts both Rackspace and Amazon on price. And on demand, Amazon Web Services is more expensive than Rackspace’s Cloud Servers as well.
I regret the errors.
The notion of using cloud-based services still terrifies enterprise IT pros, even though such services have advanced in both quality and variety for years. IT pros remain frozen by the specter of losing control of data, security breaches and random service outages. Some of these reasons may be losing validity.
In stark contrast to that, however, was the smashing success Amazon Web Services (AWS) had last week with re:Invent, its first conference. In the six years since AWS launched, cloud services have been increasingly embraced by startups and media delivery companies, along with a slew of forward-thinking developers.
Amazon executives, such as CTO Werner Vogels, Senior Vice President Andy Jassy and founder Jeff Bezos, brought their value proposition to the masses in person. We’ve heard their case for cloud services before: You can convert capital expenses into variable expenses, pay lower variable costs, stop guessing your capacity needs and have apps set up in minutes.
They hammered on traditional IT vendors, claiming the economics of AWS is disruptive to the HPs, Dells, Microsofts and Oracles of the world. Amazon is a high-volume, low-margin business, and that’s not a game the old guard can play, they said. They may have a point. With the possible exception of IBM, most of the longtime stalwarts of the industry — many so reliant on hardware and more traditional services — have yet to present a compelling cloud services strategy that would make their largest customers disregard AWS.
For instance, you want security? AWS has all the standard security certifications, and the company can help implement them with a bigger and better team than you have.
But still, many IT shops have the same reasons for not moving forward. They are risk-averse, they don’t throw things out, legacy apps are hard to move and there are few examples of traditional enterprises that have made the leap. Amazon trots out its old standby, Netflix, along with more recent enterprise customers, including NASDAQ, NASA, the United States Tennis Association, McGraw-Hill and Novartis. Comcast, for instance, has been reinventing parts of its business using AWS, spending two years quietly refashioning its media delivery network.
Real enterprises still look for accounts they can relate to. At the conference, Amazon touted its prized new public customer Pinterest — not exactly a revenue-generating machine that supports a legacy back end.
And try to dig up a customer reference on the exhibit floor. One energetic marketing manager brightened when I asked for a name. He offered up Grindr.com. If you don’t know who they are, look them up. Hint: It’s not a competitor to the Subway chain of sandwich shops.
Not too far into the future, enterprises will absorb the cultural changes associated with cloud adoption. Amazon with AWS will be a winner, and some of its competitors will win as well. In the 1990s, we had the likes of Gates and Ballmer, Ellison and others, armed with “kill the competitor” strategies that generated high margins. This is a different game. For example, at re:Invent, AWS followed Google when it dropped the price of its cloud storage by 25%. The next morning, someone from Hitachi told me his customers asked, “Why does storage have to cost so much?” Is it getting hot in here or what?
In data-driven 21st century architectures, apps and processes must be automated as business shifts, because IT shops are determined not to get stuck with hardware — and software — limitations. Security is integrated from the ground up. There are new attitudes. Failure is not an option? Forget that. AWS CTO Vogels says to regard failure as normal. Failure is always around the corner. Embrace it. It’s the new black.
The main thing is that if you are not constraining yourself up front, you will build more successful architectures. It will take more time, which is fine because this team, like another team from Seattle that forged new ground 30 years ago, has the long view. Amazon, and its competitors, look increasingly inevitable.
Margie Semilof is editorial director of the Data Center & Virtualization media group.
The modern office no longer looks like Office Space, with a staff of office drones tied to their cubicle desks, working from an office-provided desktop every day. In a world of iPads, BlackBerrys, Androids, iPhones and laptops, employees are accessing information from everywhere, giving cloud-based collaboration a clear cue to make its entrance.
While some enterprises say they’re still preparing for the bring-your-own-device (BYOD) era to hit, the truth is it’s already here, whether they’re prepared or not. According to a report by Juniper Research, 150 million people use personal mobile devices for work. That number is set to more than double by 2014.
The rise of the global worker is complemented by a shift toward a services economy, said TJ Keitt, senior analyst at Forrester, a global research and advisory firm based in Cambridge, Mass. Automation that comes from new technologies, such as cloud computing, opens the doors for not only global workers but for the introduction of more creative jobs, such as consulting. And these creative jobs require more communication, collaboration and flexibility in working hours.
“Cloud collaboration is not just about being a different delivery mechanism, it’s about what you’re enabling in your workforce,” said Keitt in a Webinar last week.
A 2012 Forrester survey showed that agility — not cost-savings — was the primary reason companies gave for adopting Software as a Service (SaaS).
TechTarget’s 2012 cloud adoption survey echoed this finding, with 60% of survey respondents using public cloud because it offered increased availability.
Businesses have used collaboration tools primarily for two reasons: reduce overhead costs and improve communication among the workforce. Collaboration software means that there could be fewer in-house employees who are able to communicate without needing to travel, which cuts a company’s overhead costs. Cutting costs plus the ability to more easily dispense and share information make collaboration tools a boon to many businesses.
And companies can better capitalize on these benefits by moving collaboration to the cloud, Keitt argues.
“Cloud is a natural home for collaboration technology because of the confluence of employee mobility, globalization and innovation networks, which are changing the nature of business,” said Keitt.
But will enterprises’ hesitance to adopt cloud undermine the benefits of collaboration software?
Despite lingering concerns about security, compliance and vendor lock-in, TechTarget’s survey shows a growing comfort with cloud services: 61% of the 1,500 IT pros surveyed reported they currently use cloud services.
This growing ease with cloud could be good news for enterprises. The rise of the global worker may mean increased access to information for employees, but it could also mean consumers are empowered by information.
In an era when a company’s mistake or a disappointing product could spread through social media like a social disease, the ability to quickly and efficiently communicate with customers could be a solid differentiator. Cloud-based collaboration software could match the changing tides in business, but cloud vendors have to work to overcome persistent qualms about cloud services if they are to make major advances in the enterprise.
Caitlin White is associate site editor for SearchCloudComputing.com. Contact her at email@example.com.
VMware will work on a buffed-up compute driver for OpenStack’s Nova project which will allow OpenStack to manage advanced features of vSphere, according to VMware CTO Steve Herrod.
This means that despite the direct competition between OpenStack and VMware’s vCloud Director, VMware will allow OpenStack management tools to more easily manage vSphere virtual machines.
It’s a new olive branch extended to a suspicious OpenStack community by newcomer VMware, which has previously made clear that its proprietary cloud management tools will be able to wrap themselves around OpenStack clouds; this is the first time VMware has actively participated in allowing its hypervisor to be subject to management by another cloud platform.
Citing VMware properties including Spring, RabbitMQ, Linux, Hyperic, and Cloud Foundry, and bearing gifts in the form of hundreds of free copies of VMware Fusion, Herrod played up VMware’s open source street cred in a presentation to a skeptical but standing-room-only crowd at OpenStack Summit on Wednesday.
“We are not strictly a closed source company, we’re not strictly an open source company, we’re a blend of both,” he said.
There’s currently a compute driver within Nova, but it’s “pretty dumb,” Herrod said – essentially it allows users to create vSphere VMs and run them.
With a new driver written by VMware will come support for VMware HA and live migration, Herrod said.
According to a later presentation by VMware staff engineer Sean Chen, the new driver will also include the ability to launch OVF disk images, use a VNC console to manage VMs, attach and detach iSCSI volumes, get guest information, conduct host operations, assign VLANs, link VMware with Quantum, and create custom VMware image properties for OpenStack’s Glance image management utility.
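VMware hadn’t published the driver’s source at that point, but the feature list suggests the shape of the interface. The skeleton below is a hypothetical sketch; the class and method names are illustrative, not VMware’s or Nova’s actual code:

```python
# Hypothetical skeleton of the operations the enhanced vSphere driver
# was described as adding. Method names are illustrative, not the
# actual Nova driver interface.

class VSphereDriverSketch:
    """Sketch of the richer vSphere operations described at the summit."""

    def spawn_from_ovf(self, ovf_image, instance):
        """Launch a VM from an OVF disk image (e.g., pulled from Glance)."""
        raise NotImplementedError("sketch only")

    def get_vnc_console(self, instance):
        """Return VNC connection details for managing the VM."""
        raise NotImplementedError("sketch only")

    def attach_volume(self, iscsi_target, instance):
        """Attach an iSCSI volume (e.g., provisioned via Cinder) to a VM."""
        raise NotImplementedError("sketch only")

    def live_migrate(self, instance, dest_host):
        """Live-migrate the VM between vSphere hosts."""
        raise NotImplementedError("sketch only")
```

The point of a driver like this is that OpenStack tools call these generic operations, and the driver translates them into vSphere-specific API calls underneath.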
Herrod also hinted that VMware is exploring ways to integrate the Open vSwitch, used by network virtualization subsidiary Nicira, into the vSphere platform, possibly as a replacement for the existing VMware virtual switch.
“We are looking quite seriously at what aspect of the Open vSwitch to merge and have interoperating in vSphere environments,” he said.
Attendees at the conference weren’t necessarily about to fall into VMware’s outstretched arms, though Herrod’s presentation piqued their interest somewhat.
One VMware user from a communications company in Texas said he has yet to decide whether to use a vCloud or OpenStack environment for giving developers access to virtual machines.
“There’s more than one way to skin this cat,” he said.
Another attendee working for a major service provider mused that OpenStack, with its Quantum network virtualization features, might allow for better portability of vSphere VMs between private and public clouds.
Millions of viewers tuned in to NASA’s website to watch live streamed coverage of its ‘Curiosity’ rover landing on the surface of Mars earlier this month. Though it all went off without a hitch, a server outage or a website blip could have done serious damage to NASA’s reputation.
It was an ambitious project to say the least, and NASA knew its site would be hit with possibly its highest website traffic ever for those seven nail-biting minutes. So how did it ensure everything ran smoothly with so much at stake? The space program turned to SOASTA‘s cloud testing software.
The NASA and SOASTA collaboration came about as a referral, of sorts, from folks at Amazon Web Services (AWS), a SOASTA technology partner. And with an already hefty bill of $25 million riding on the project, NASA wanted an audience and wanted to guarantee that audience saw an uninterrupted stream of the landing.
Often, a company’s reputation and the contents of its wallet are at stake.
“When Knight Capital crashed, it caused them to lose $16 million per minute just because they were down,” said Tom Lounibos, CEO of SOASTA. “If Twitter is down, it costs advertisers $25 million per minute.”
It really is about anticipating failure — imagining worst-case scenarios — so that when the actual moment comes, companies are ready to face adversity and deal with it. SOASTA used its predictive analysis software, GlobalTest, to imitate traffic conditions on NASA’s website three days before the Curiosity rover landing.
Predictive analysis helps you understand when something could fail and why. “We are in the business of adding more intelligence to the process,” Lounibos said. “We go through a lot of what-if situations with predictive analysis.”
Some what-if situations in the NASA project consisted of load testing to help understand what might happen if there is an unexpected spike in traffic, or when back-end services require more capacity. By doing simulations and observing data, SOASTA can predict the effects on infrastructure, a Web application and the database, so that companies can optimize a website or applications to accommodate these changes.
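SOASTA’s tooling is proprietary, but the core of such a what-if simulation, which is firing many concurrent requests and summarizing the observed latencies, can be sketched in a few lines of Python. The function names and percentile choices here are illustrative assumptions, not GlobalTest’s actual design:

```python
# Illustrative load-test sketch (not SOASTA's product): fire N concurrent
# requests at a URL and summarize the observed latencies, the raw
# ingredients of a "what if traffic spikes" simulation.
import time
from concurrent.futures import ThreadPoolExecutor

def timed_request(fetch, url):
    """Return the latency of one request made via the supplied fetch()."""
    start = time.monotonic()
    fetch(url)
    return time.monotonic() - start

def run_load_test(fetch, url, concurrency=50, total_requests=500):
    """Simulate a traffic spike and summarize observed latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: timed_request(fetch, url),
                                  range(total_requests)))
    latencies.sort()
    return {
        "median": latencies[len(latencies) // 2],
        "p95": latencies[int(len(latencies) * 0.95)],
        "max": latencies[-1],
    }
```

Raise `concurrency` and `total_requests` across runs and you have a crude version of the spike testing described above: watch where the p95 and max latencies start to climb, and you have found the capacity cliff.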
NASA’s biggest issue was it could not predict how many people were going to watch the landing, Lounibos said. “We were able to help predict how much server capacity NASA would need,” he added.
SOASTA also helped NASA prepare for a failure scenario by simulating an outage on a portion of Web servers and proving that failover plans were indeed effective.
“When you’re streaming for millions of people you can’t afford to have failure because there is only one first,” Lounibos concluded.
Fernanda Laspe is the editorial assistant for SearchCloudComputing.com.
Windows Azure customers anxious to learn what Microsoft has been hiding behind its back can finally exhale later this week in San Francisco.
One key piece of the Azure update is support for what Microsoft calls “Persistent Virtual Machine (VM) Roles,” which will let Windows Azure customers run legacy applications in VMs. That includes running Linux, sources said.
Another capability is a Web hosting framework codenamed “Antares” that will provide a fine-grained Web app-hosting service aimed at customers who don’t see Azure as an economical platform for webpage hosting.
But will Microsoft be able to deliver those features sooner rather than later? Not in a single iteration, one source said. Instead of pulling off the all-singing, all-dancing vision Microsoft would like to promise, it’s more likely the company will need at least two iterations to achieve the basics.
Of course, now that the Windows 8 Release Preview is available there is sure to be a Windows Azure demo on tablets and mobile devices at the event.
Another key trend to watch for, sources said, is an increased focus on hybrid clouds.
Over the short to mid-term, Microsoft aims to achieve “write once, run anywhere” capabilities for Windows Azure, if I can borrow the Java slogan. Customers want to be able to run their applications in the data center, in the cloud, or as a hybrid of the two, interchangeably. And they want to be able to do so without rewriting any code or worrying about vendor lock-in.
The best way to do that seems simple enough — run applications on the same API on both platforms — Windows Azure and Windows Server 2012. That might not be as easy as it sounds, though.
Windows Azure numbers lower than Amazon
Just as important as what Microsoft says, however, is what Microsoft doesn’t say. That may be telling when it comes to judging the relative veracity and importance of plans and promises at the Meet Windows Azure event, which will be streamed.
Microsoft has been notably quiet about Windows Azure’s status for more than a year. That may be because sales of Windows Azure have been disappointing to date. Windows Azure has garnered fewer than 100,000 customers so far, according to the research firm Directions On Microsoft based in Kirkland, Wash.
That’s well below industry estimates for market leader Amazon Web Services.
In some respects, it’s the same struggle Microsoft has gone through before. How can the company and its products remain relevant in a computing universe that is constantly changing?
The event will likely resemble many previous Microsoft marketing splashes, with system integrators, application developers, resellers and other partners lined up to show solidarity for the company’s strategy du jour.
Again, when Thursday rolls around, remember to listen closely for what doesn’t get said as well as what does.
Stuart J. Johnston is Senior News Writer for SearchCloudComputing.com. Contact him at firstname.lastname@example.org.
“If it is true, it’s pants-on-head retarded.”
That’s how Tier 1 analyst Carl Brooks described reports this week that Microsoft will drop “Azure” from the branding of its public cloud offering.
“Azure is a dynamite brand — it’s almost a byword, like Amazon is, for a certain kind of cloud infrastructure, and in a very positive way,” Brooks said. “They’d be nuts to drop it and I’m hard pressed to understand any potential benefit.”
As it turns out, Brooks was right; Microsoft isn’t that irrational — although sometimes it might seem that way. The confusion began when a popular tech blog got wind that the software titan had sent out an email to Azure subscribers advising them that it’s cutting “Azure” from the names of a bunch of Azure services.
“In the coming weeks, we will update the Windows Azure Service names,” the message said. “These are only name changes: Your prices for Windows Azure are not impacted,” according to the email quoted in the blog post.
What had occurred, however, was less than meets the eye. The changes are to Azure’s “billing portal,” another tech blog revealed, and don’t affect the overall naming of Azure services.
After several hours of silence, Microsoft did finally issue an official clarification. “Microsoft continues to invest in the Windows Azure brand and we are committed to delivering an open and flexible cloud platform that enables customers to take advantage of the cloud. The brand is not going away.”
That’s a good thing. “It would be like dropping ‘Exchange’ in favor of ‘Microsoft Email Server’,” Brooks added, calling the excitement “a tempest in a teapot.”