The community cloud is quickly becoming a more efficient way for enterprises to implement business-to-business (B2B) connections. In the past, enterprises would create a VPN connection to every one of their business partners, which required working with many partner IT shops of varying ability. When I was working at a large financial company, setting up more than 500 B2B connections meant dealing with some small IT shops that did not have a clue about security practices or VPN connections. Often, when we had outages, smaller companies were unable to provide after-hours support to help restore the connection. But individual VPN connections were the best way to set up an isolated network connection to interface with business partners, suppliers and other supply chain partners.
Now many IT shops find it more practical to set up a community cloud when connecting to more than five business partners. A community cloud gives everyone a common meeting place to exchange required information, and you no longer need untrusted partners connecting to your network — even if it was only to a DMZ.
I have set up two such community clouds: one for an insurance company and the other for a pharmaceutical company. We set up Lightweight Directory Access Protocol (LDAP) and Security Assertion Markup Language (SAML) access, then created a virtual private cloud (VPC) for each client to connect into and connected each of those VPCs to the community cloud. With this design, if a client's connection to the community cloud goes down, it is no longer the host company's problem.
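The topology shift is easy to see in a few lines of Python: in the point-to-point model the host company maintains a tunnel per partner, while in the hub model it maintains exactly one link (a toy illustration; the names and counts here are hypothetical, not from either deployment):

```python
def vpn_links(partners):
    """Point-to-point model: the host maintains one VPN tunnel per partner."""
    return [("host", p) for p in partners]

def community_cloud_links(partners):
    """Hub model: host and every partner each maintain a single link to the hub.
    A partner's dropped link is that partner's problem, not the host's."""
    return [("host", "hub")] + [(p, "hub") for p in partners]

partners = [f"partner-{i}" for i in range(500)]
print(len(vpn_links(partners)))                                          # 500 host-managed tunnels
print(len([link for link in community_cloud_links(partners) if "host" in link]))  # 1
```

The point is not the code but the operational math: the host's support burden drops from hundreds of tunnels to one.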
But community cloud implementations may not be for every enterprise. Many enterprises are still working to build out their private clouds before they move to a hybrid cloud or connect to a community cloud. And when working with enterprises trying to implement a community cloud, the bulk of the effort often involves getting stakeholders to agree to the shared connection, rather than building out to a public cloud and connecting to a SaaS provider.
What are your thoughts on using community clouds for B2B connections? Share your comments below or tweet us @TTintheCloud.
The differences between the last OpenStack Summit in San Diego and the one held this week in Portland are sizable. San Diego saw about 1,300 attendees; here, the number has roughly doubled, to an estimated 2,600 to 2,800 in all. Instead of function rooms in a hotel, the conference has expanded to fill a convention center. Instead of a gathering of close-knit propellerheads, this Summit has seen new faces with a distinctive corporate air about them.
None of the above has gone unremarked upon by conference organizers and presenters, of course, not by a long shot. This is “The Year of the User,” according to a keynote presentation by OpenStack Foundation executive director Jonathan Bryce. Tuesday’s sessions were a coming-out party for several household-name companies that run OpenStack, including BestBuy.com, PayPal, Samsung and Comcast. An HP session Wednesday was titled “OpenStack to Enterprise: Boldly go…”
But while the growth of the Summit, as well as the buzz around OpenStack, has been undeniable, the industry remains in an early adopter phase with this technology. The companies presenting Tuesday were impressive and well-known, but note that it was BestBuy.com architects who showed up to present, not the Best Buy enterprise itself. PayPal, too, is a Web company; Samsung, a technology purveyor; Comcast, a service provider.
In other words, OpenStack appears to be in a boat very similar to the one Amazon Web Services (AWS) currently finds itself in, albeit a small fishing vessel compared to AWS’ hundred-foot yacht. In either case, the messaging is all about the enterprise, but scratch the surface, and the product is still all about the cutting-edge Web developer.
Sure, security is an important concern for any company moving to the cloud. But enterprise worries run much deeper than that.
As large enterprises try cloud computing — either by moving specific workloads to public cloud or by adding a fully automated private cloud — factors such as federation, automation, common management policies and transparency will surface.
When BMW embarked on its cloud project in 2008, its primary goal was to standardize technology across multiple data centers and business units, plus get better quality at a lower cost. That is the golden egg for most enterprises.
“We are nearly at the end of traditional infrastructure,” said Mario Mueller, vice president of IT infrastructure at BMW. “We had clear targets: zero downtime; and with the solution we had, that wasn’t possible.”
But even a long-established company such as BMW, with skilled IT teams in locations throughout the world, had questions about where to start with cloud. “How do you do all the automation?” Mueller said. “How do we implement security? How do we do the identity management?”
Mueller and his team at BMW looked to the Open Data Center Alliance (ODCA) for guidance on building a private cloud to tackle those questions and, ultimately, get from the technology the agility, speed and uptime it had hoped for.
Mueller also happens to be chairman of the ODCA, which was established in 2010 and aims to create a unified voice for cloud customers. More than 300 companies are members and look to the group for examples of cloud applications that help show the way.
Private cloud: Just the beginning
It was clear from the start that private cloud wasn’t the end-game for BMW, said Mueller, nor should it be.
“The real target for most enterprises is the hybrid [cloud] model,” he said. “We have use of a new data center in Iceland where we do high-performance computing; we will get into the hybrid cloud model there.”
Benefits of cloud computing may not be immediate; it takes some time to get things right. Enterprises need to establish a successful private cloud first — and get all the benefits they can there — before moving workloads out of the company, Mueller emphasized.
But in the end, it doesn’t matter which technology you’re using. “It’s all about cost, quality, compliance and security in the infrastructure,” he added.
Rackspace hopes its second acquisition in as many months will increase its appeal to application developers.
The new buy, announced Thursday, will see the employees and assets of Exceptional Cloud Services, based in San Francisco, Calif., join Rackspace as a wholly owned subsidiary. This deal follows a similar acquisition of Object Rocket last month.
Exceptional has three sub-properties that got Rackspace interested:
- Exceptional.io – tracks errors in over 6,000 web applications.
- Airbrake.io – collects errors generated by applications and aggregates the results for review.
- Redis To Go – hosts the open-source Redis key-value store for customers.
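The kind of aggregation Airbrake.io performs can be sketched as grouping raw error reports by type and message, so reviewers see one line per distinct failure instead of thousands of raw events (a simplified model for illustration, not Airbrake's actual API):

```python
from collections import Counter

def aggregate_errors(reports):
    """Group raw error reports by (error type, message) and count occurrences,
    most frequent first, the way an aggregation dashboard would list them."""
    counts = Counter((r["type"], r["message"]) for r in reports)
    return counts.most_common()

reports = [
    {"type": "TimeoutError", "message": "upstream timed out"},
    {"type": "KeyError", "message": "user_id"},
    {"type": "TimeoutError", "message": "upstream timed out"},
]
print(aggregate_errors(reports))
# [(('TimeoutError', 'upstream timed out'), 2), (('KeyError', 'user_id'), 1)]
```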
Exceptional’s CEO, Jonathan Siegel, said yesterday that his company’s ideal customer is one whose end users will have major issues if the customer’s application doesn’t function properly. The products deliver the most value to customers with multiple clients, such as browsers and phone operating systems, that need to access their applications simultaneously.
In other words, Web developers.
The next logical question, then, is whether Rackspace plans to integrate Exceptional’s IP with its Cloud Sites Platform as a Service (PaaS).
Cloud Sites currently supports programming using the PHP and .NET frameworks, but Rackspace now has multiple properties that could theoretically expand its underpinnings: MySQL as a service, the MongoDB NoSQL database and now an in-memory data store in Redis To Go. Meanwhile, Amazon, as usual, is the elephant in the room; Rackspace is looking to edge in on territory – the next generation of Web developers — to which Amazon Web Services has already staked a firm claim.
I asked whether these acquisitions will give Cloud Sites a shot in the arm, and essentially got elevator music in response. It’s a fair bet, though, that Rackspace will look to boost its integrated offerings to reach developers to the extent Amazon has.
Earlier this week, I reported on some tools that help shave considerable sums of money off of companies’ Amazon Web Services bills. No sooner had that report been filed than I came across an Amazon announcement of new features for its own Trusted Advisor tool on its official blog that had been posted the day before.
Trusted Advisor identifies cost inefficiencies, but also advises users of Amazon Web Services (AWS) on security gaps, high-availability misconfigurations and performance bottlenecks in their deployments. It’s available to users with a Business or Enterprise level of premium support from Amazon.
One of the users of third-party cost efficiency tools I interviewed for the previous article, Andres Silva of Inmar Inc., said he’ll probably use both Trusted Advisor and software from Cloudyn, especially since AWS is offering a free trial this month.
“Trusted Advisor now has things that Cloudyn doesn’t have yet, like security reports,” Silva said.
Two days later, AWS made the biggest cost-cutting move of all by reducing its prices for Reserved Instances with a Linux OS, in some cases by almost 28%.
Naturally, Silva was also happy as a clam about this, given his company is about to purchase some new Reserved Instances. However, those who have already purchased Reserved Instances won’t be so lucky — they’re locked in to previous pricing. Furthermore, commenters on Amazon’s post were also a little peeved about the lack of love for Windows.
This news came out less than a week after a price reduction by rival Rackspace for its cloud bandwidth, Cloud Files and content delivery network (CDN) services, continuing an ongoing pricing war in the public cloud that also includes HP.
IBM introduced its SmartCloud Orchestrator tool on Monday amid a lot of verbiage about supporting open standards that we’ve all heard before. But if you scratch the surface of the product it put into beta, it’s actually quite interesting.
SmartCloud Orchestrator is software based on OpenStack APIs including Nova for compute, Quantum for networking and Cinder for block storage, which also throws in what IBM calls patterns (sort of like templates) for application deployment.
But the most interesting feature may be an adapter IBM engineers have written that translates between OpenStack and Amazon cloud APIs. It only requires users to know OpenStack as a kind of “lingua franca” of the cloud, according to Mohamed Abdula, director of SmartCloud foundation strategy and portfolio management for IBM.
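IBM has not published the adapter's internals, but the idea of translating between the two API dialects can be illustrated with a toy mapping from OpenStack Nova-style server-creation fields to EC2-style parameters (the field table here is a drastically simplified sketch, not IBM's actual implementation):

```python
# Hypothetical mapping from a few OpenStack Nova "create server" fields to
# EC2 RunInstances-style parameters; the real field sets are far larger.
NOVA_TO_EC2 = {
    "imageRef": "ImageId",
    "flavorRef": "InstanceType",
    "min_count": "MinCount",
    "max_count": "MaxCount",
}

def translate_nova_request(nova_request):
    """Translate a Nova-style server-creation dict into an EC2-style dict,
    so a caller who only knows the OpenStack 'lingua franca' can target AWS."""
    return {NOVA_TO_EC2[k]: v for k, v in nova_request.items() if k in NOVA_TO_EC2}

req = {"imageRef": "ami-123", "flavorRef": "m1.small", "min_count": 1, "max_count": 1}
print(translate_nova_request(req))
# {'ImageId': 'ami-123', 'InstanceType': 'm1.small', 'MinCount': 1, 'MaxCount': 1}
```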
Another user-friendly feature is compatibility with Opscode’s Chef, such that clients would not need to throw away or rewrite Chef recipes; SmartCloud Orchestrator can reuse them from within its UI, Abdula said.
Amazon and OpenStack are often put at odds with one another, and it is OpenStack’s stated mission to compete with AWS. At the same time, however, compatibility with AWS APIs is a foundational concept within OpenStack, as is automation and portability between clouds.
IBM looks to put its money where its mouth is when it comes to those concepts.
Rackspace is jockeying for position aboard the advanced infrastructure services bandwagon with its acquisition today of Object Rocket, a MongoDB Database as a Service provider.
Rackspace is hardly the first to offer a Database as a Service (DBaaS) or even MongoDB as a service — other vendors, such as Amazon Web Services (AWS) and SoftLayer, have beaten it to the punch. But, the newly merging companies claim, the Object Rocket service is faster and offers more predictable performance than competitors.
This notion is further detailed in a blog post by Rackspace’s DevOps team that features performance benchmarks; the post claims that Object Rocket’s service showed more consistent throughput and consistently low latency compared with two other MongoDB services running on — you guessed it — Amazon.
“It’s basically become a standard see-saw,” is how Carl Brooks, an analyst at 451 Group, summed things up today. “AWS launches a crazy, interesting new service, customers try it out, find the performance is non-deterministic, another provider says, ‘Hey, I can do that a jillion times better on my iron!’ and the world continues to revolve.”
For those not in the know, MongoDB is among the most rapidly growing NoSQL databases, which are mainly used in large Web applications that require rapid access to objects or documents that are delivered whole, rather than assembled out of pieces located in the rows and columns of a traditional relational database. It’s used by eBay, Disney, FourSquare and other household names, though none of those blue chips are Object Rocket customers, specifically.
It’s unclear so far how Rackspace’s customer base will respond. A few calls I put in today to IT pros who run on Rackspace produced long pauses and a disinclination to comment on the new offering until more is known about it.
Could this acquisition be filling in a corner case, rather than appealing to the meat-and-potatoes enterprises among Rackspace’s customers? Time, of course, will tell…
The idea of migrating existing workloads to public clouds got a bit of a boost earlier this month when Racemi disclosed its migration software would support SoftLayer’s popular CloudLayer platform. Racemi sweetened the deal with a time-limited offer of $99 per migration.
Rather than cost savings, the more important aspect of the deal is SoftLayer’s “bare-metal cloud” approach, which allows customers to customize their hardware infrastructure, from processors to storage to high-speed networking. What also enhances the company’s offering is that CloudLayer doesn’t require any lengthy contractual commitment from customers, who pay only for the resources they need.
IT shops commonly voice objections about moving workloads to the public cloud for several reasons — from security risks to unknown ROI factors to the paranoia of moving their mission-critical data outside their four walls. These fears often turn into inertia, delaying decisions to move to the public cloud indefinitely.
SoftLayer’s approach could give some consumers more confidence to move forward by providing them more control over what hardware infrastructure to choose and just how much they want to pay for it.
The bare-metal cloud approach takes the hypervisor out of the mix, which can increase the raw processing power of a customer’s hardware infrastructure. This approach may help not only those suffering from inertia about moving to public clouds; it may also give incentive to the growing number of shops with existing cloud implementations that are thinking about handling “big data” and large databases there.
CloudLayer is made up of virtual servers, remote storage and a content delivery network. Each CloudLayer service can work in standalone mode or be integrated with a number of dedicated servers and automated services using one private network and management system.
“We think it can be this easy to create and control a hybrid computing environment, for instance, that is interoperable,” said Marc Jones, vice president of product innovation at SoftLayer.
What might also boost user confidence in this approach is that SoftLayer, one of the largest privately held cloud infrastructure providers with 13 data centers, every once in a while picks off a few Amazon Web Services customers.
“We don’t aggressively try to steal Amazon customers, but we do pick some up, particularly those that have performance issues where they need high disk I/O and higher network speeds,” Jones said.
For its part, Racemi updated its Cloud Path Software as a Service (SaaS) offering and DynaCenter on-premises software to support physical and virtual server migrations to CloudLayer. Customers can also automatically migrate cloud instances from other cloud providers to CloudLayer.
It turns out that a headline I wrote for SearchCloudComputing.com last week about the new HP Cloud Compute service was only half right.
The headline calls out that HP has undercut Rackspace with its per-hour pricing of 4 cents (Rackspace’s on-demand offering is priced at 6 cents per hour). But as pointed out today by @stackgeek on Twitter, that headline, as well as these paragraphs in the story:
Prices for Hewlett-Packard’s (HP) Cloud Compute service start at four cents per hour for a small instance with 1 GB of RAM – 2 cents lower than the price for Rackspace’s 1 GB instance. Amazon Web Services’ (AWS) Reserved Small Instance, which comes with 1.7 GB of memory, costs 3.9 cents per hour.
While AWS still offers the best deal, HP’s pricing for the fledgling service might attract Rackspace customers.
…are not totally accurate.
I re-checked the Amazon pricing page, and @stackgeek is correct in pointing out the $69 one-time upfront cost for a small, lightly utilized one-year Reserved Instance. While the 3.9-cent hourly cost without this fee would make Amazon cheapest on a yearly basis, at $341.64, the fee makes Amazon’s cost per year $410.64. HP’s cost per year at 4 cents an hour is $350.40 and Rackspace’s price per year at 6 cents an hour is $525.60.
@stackgeek is also correct in pointing out that the original comparison was between Amazon’s Reserved Instance and HP and Rackspace’s on-demand instances. The pricing for an On-Demand Instance on Amazon is 6.5 cents per hour, which is half a cent more expensive than Rackspace. That makes the pricing for an Amazon On-Demand instance $47.45 a month, or $569.40 per year.
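The arithmetic behind these figures is easy to reproduce, assuming an instance running around the clock for a 365-day year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def yearly_cost(hourly_rate, upfront=0.0):
    """Yearly cost of an always-on instance: any one-time upfront fee
    plus the hourly rate applied to every hour of the year."""
    return round(upfront + hourly_rate * HOURS_PER_YEAR, 2)

print(yearly_cost(0.04))         # HP on-demand: 350.4
print(yearly_cost(0.06))         # Rackspace on-demand: 525.6
print(yearly_cost(0.065))        # AWS On-Demand: 569.4
print(yearly_cost(0.039, 69.0))  # AWS one-year Reserved Instance with $69 upfront: 410.64
```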
So, HP’s new service actually undercuts both Rackspace and Amazon on price. And on an on-demand basis, Amazon Web Services is more expensive than Rackspace’s Cloud Servers as well.
I regret the errors.
The notion of using cloud-based services still terrifies enterprise IT pros, even though such services have advanced in both quality and variety for years. IT pros remain frozen by the specter of losing control of data, security breaches and random service outages. Some of these reasons may be losing validity.
In stark contrast to that fact, however, was the smashing success Amazon Web Services (AWS) had last week with re:Invent, its first conference. In the six years since AWS launched, cloud services have been increasingly embraced by start-ups and media delivery companies, along with a slew of forward-thinking developers.
Amazon executives, such as CTO Werner Vogels, Senior Vice President Andy Jassy and founder Jeff Bezos, brought their value proposition to the masses in person. We’ve heard their case for cloud services before: You can trade capital expense for variable expense, pay lower variable costs, stop guessing at capacity needs and have apps set up in minutes.
They hammered on traditional IT vendors, claiming the economics of AWS are disruptive to the HPs, Dells, Microsofts and Oracles of the world. Amazon runs a high-volume, low-margin business, and that’s not a game the old guard can play, they said. They may have a point. With the possible exception of IBM, most of the longtime stalwarts of the industry — many so reliant on hardware and more traditional services — have yet to present a compelling cloud services strategy that would make their largest customers disregard AWS.
For instance, you want security? AWS has all the standard security certifications, and the company can help implement them with a bigger and better team than you have.
But still, many IT shops have the same reasons for not moving forward. They are risk averse, they don’t throw things out, legacy apps are hard to move and there are few examples of traditional enterprises that have made the leap. Amazon trots out its old standby, Netflix, along with more recent enterprises, including NASDAQ, NASA, the United States Tennis Association, McGraw Hill and Novartis to name a few. Comcast, for instance, has been reinventing parts of its business using AWS, spending two years quietly refashioning its media delivery network.
Real enterprises still look for accounts they can relate to. At the conference, Amazon touted its prized new public customer Pinterest — not exactly a revenue-generating machine that supports a legacy back end.
And try to dig up a customer reference on the exhibit floor. One energetic marketing manager brightened when I asked for a name. He offered up Grindr.com. If you don’t know who they are, look them up. Hint: It’s not a competitor to the Subway chain of sandwich shops.
Not too far into the future, enterprises will absorb the cultural changes associated with cloud adoption. Amazon with AWS will be a winner, and some of its competitors will win as well. In the 1990s, we had the likes of Gates and Ballmer, Ellison and others, armed with “kill the competitor” strategies that generated high margins. This is a different game. For example, at re:Invent, AWS followed Google when it dropped the price of its cloud storage by 25%. The next morning, someone from Hitachi told me his customers asked, “Why does storage have to cost so much?” Is it getting hot in here or what?
In data-driven 21st century architectures, apps and processes must be automated as the business shifts, because IT shops are determined not to get stuck with hardware — and software — limitations. Security is integrated from the ground up. There are new attitudes. Failure is not an option? Forget that. AWS CTO Vogels says to regard failure as normal. Failure is always around the corner. Embrace it. It’s the new black.
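In code, embracing failure often shows up as simple patterns such as retry with exponential backoff: assume the call in front of you can fail, and design for the next attempt rather than for a world where nothing breaks. A minimal sketch, assuming a transiently failing operation:

```python
import time

def call_with_retries(operation, attempts=3, base_delay=0.01):
    """Run an operation that may fail transiently, retrying with exponential
    backoff instead of treating the first failure as fatal."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A flaky operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # "ok" on the third attempt
```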
The main thing is that if you are not constraining yourself up front, you will build more successful architectures. It will take more time, which is fine because this team, like another team from Seattle that forged new ground 30 years ago, has the long view. Amazon, and its competitors, look increasingly inevitable.
Margie Semilof is editorial director of the Data Center & Virtualization media group.