The Troposphere


September 13, 2013  7:14 PM

Developers bandy about AWS API portability at OpenStack meetup

Mark Szynaka

I attended the NYC OpenStack Meetup this week, which focused on understanding what place Amazon EC2 APIs should hold for OpenStack design and implementation. It was billed as the third round of the AWS API debate, with the first two rounds held on the West Coast. And this event did not disappoint.

The audience seemed more focused on enterprise applicability versus a theoretical discussion of AWS APIs. I suppose this was because these enterprise clients and IT bosses want to know if they can make OpenStack work with some of the rogue AWS implementations their companies already own.

Randy Bias, CTO and co-founder of Cloudscaling, was quick to point out that the OpenStack community already uses an OpenStack version of the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) API.

Attendee concerns centered on whether developers could depend on AWS to keep its APIs intact, so that an OpenStack private cloud developer making a call to the API can be sure it will work. IT pros know that how much you rely on each public cloud's APIs affects the portability of clouds. So it was encouraging to hear that AWS APIs are not being deprecated and are therefore reliable for multi-cloud architectures for the foreseeable future.

Randy Bias; Nati Shalom, CTO and founder of GigaSpaces; and Alex Freedland, CFO of Mirantis, all offered a healthy divergence of views on how they see OpenStack evolving and what is needed to strengthen the industry.

Freedland postulated that Moore's Law applies to cloud computing: the acceleration of innovation and the financial impact on companies will drive cloud adoption, he added. Bias and Shalom took a more technology-focused, 'If you build it, they will come' view.

But all three speakers agreed that two things are driving the adoption of, or at least investigation into, OpenStack in a private cloud: an IT manager's desire to stay independent of any single cloud provider, and the ability to connect a private cloud to outside resources they don't want to build in-house.

August 6, 2013  1:26 PM

The battle for enterprise cloud takes shape among AWS, OpenStack, VMware

Mark Szynaka

Most enterprise IT managers have been watching the public cloud race as a proof of concept and a way to shake out the contenders. Amazon Web Services has clearly been the winner with majority market share — and at the rate the company is spending money, it would be difficult for any one company to catch it, including IBM. The industry's answer to combat AWS in the public cloud has been the OpenStack alliance of IBM, HP, Rackspace and others. But VMware's recent announcement could put it in the running, too.

In May, Dell dropped out of the public cloud race, VMware entered the private cloud race with VMware vCloud Hybrid Service and IBM purchased SoftLayer. These announcements have created what I believe are the three main choices for enterprise cloud: AWS, OpenStack or VMware’s vCloud Hybrid Service.

Most enterprise IT managers look at their infrastructure through VMware-colored glasses; everything must be built on an existing VMware infrastructure. Because of this, they don't need to worry about hypervisor choices to move forward. But that may not always be the case.

From a strategic point of view, it does not make sense to choose a public Infrastructure as a Service (IaaS) provider until you understand what your enterprise private cloud design will be. Most enterprises have not implemented a private cloud and are wrestling with ways to implement a service-oriented architecture (SOA) to provide a more agile and responsive business to their clients, customers, staff and partners, while maintaining a firm risk-management discipline.

Take action; use your enterprise IaaS as a strategic differentiator, and leverage the public cloud for commoditized services, community/B2B, DevOps or global distribution needs.

While in the past, enterprise IT has shunned open source cloud, OpenStack is emerging as an unlikely leader in the long-term race. You can look at past IT leaders, such as IBM, Oracle, Microsoft, DEC, Novell, Cisco or Sun, as precedents of this kind of turnaround.

The growing number of OpenStack adopters, especially IBM, has made this path more appealing to improve enterprise portability options. Surveys show that many organizations expect to use multiple clouds in the future, and OpenStack offers the widest portability choice today. This is something your auditors will like, and we as enterprise IT managers know how important that is.

AWS is easily the leader in public cloud, with more than 55% market share and a long list of features and functions. Many enterprise IT managers let their developers play there, test their ideas there and, when done, bring the app in-house to build for production. For shops that find the AWS service offerings too appealing to turn their back on, consider installing Eucalyptus alongside your OpenStack private cloud to retain AWS API compatibility.

VMware’s vCloud Hybrid Service is the third enterprise cloud option, but it’s the least mature of the three. VMware owns the enterprise virtualization marketplace, and with that installed base and trained skillset already in the enterprise, vCloud could be a path of least resistance for many organizations.

Cisco, EMC and VMware have teamed up to form VCE, which has products that allow enterprise IT managers to move toward a more automated private cloud. With this path, you can connect easily with CSC, AT&T and Bluelock public clouds. VMware also supports Cloud Foundry as its open source offering, but with the recent hybrid announcement and the departure of CTO Steve Herrod, who was its leading OpenStack advocate, VMware may be splitting off to defend its own ecosystem.


May 30, 2013  1:51 PM

Community clouds offer enterprises a B2B link

Mark Szynaka

The community cloud is quickly becoming the more efficient way for enterprises to implement business-to-business connections. In the past, enterprises would create a VPN connection to each and every one of their business partners, which required working with many different partner IT shops with varying abilities. When I was working at a large financial company, setting up more than 500 B2B connections meant dealing with some small IT shops that did not have a clue about security practices or VPN connections. Often when we had outages, smaller companies were unable to provide support after hours to help restore the connection. But individual VPN connections were the best way to set up an isolated network connection to interface with business partners, suppliers and other supply chain partners.

Now many IT shops find it more practical to set up a community cloud to connect to more than five business partners. A community cloud allows you to have a common meeting place to exchange required information, and you no longer need to have untrusted partners connecting to your network — even if it was only to a DMZ.
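The scaling argument behind the hub model can be put in rough numbers. This is a back-of-the-envelope sketch, not from the article itself: it compares a hypothetical full mesh, where every party keeps a direct VPN tunnel to every other, against a community cloud hub where each party connects once.

```python
def point_to_point_links(n_partners: int) -> int:
    """VPN tunnels needed if every party (host enterprise plus
    partners) keeps a direct link to every other: n*(n-1)/2."""
    n = n_partners + 1  # include the host enterprise itself
    return n * (n - 1) // 2

def hub_links(n_partners: int) -> int:
    """Connections needed when each party, host included,
    connects once to a shared community cloud hub."""
    return n_partners + 1

# The 500-partner ecosystem mentioned above:
print(point_to_point_links(500))  # full mesh of tunnels
print(hub_links(500))             # one hub connection per party
```

Even at the five-partner threshold mentioned above, the mesh already needs 15 tunnels against 6 hub connections, and the gap grows quadratically from there.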

I have set up two such community clouds: one for an insurance company and the other for a pharmaceutical company. We were able to set up Lightweight Directory Access Protocol (LDAP) and Security Assertion Markup Language (SAML) access, and then we created a virtual private cloud (VPC) for each client to connect into. We then connected each of those VPCs to the community cloud. In this arrangement, if a client's connection to the community cloud goes down, it is no longer the host company's problem.

But community cloud implementations may not be for every enterprise. Many enterprises are still working to build out their private clouds before they move to a hybrid cloud or connect to a community cloud. And when working with enterprises trying to implement a community cloud, the bulk of the effort often involves getting stakeholders to agree to the shared connection, rather than building out a public cloud and connecting to a SaaS provider.

What are your thoughts on using community clouds for B2B connections? Share your comments below or tweet us @TTintheCloud.


April 17, 2013  8:04 PM

Waiting for the OpenStack “explosion”

Beth Pariseau

The differences between the last OpenStack Summit in San Diego and the one held this week in Portland are sizable. San Diego saw about 1,300 attendees; here, the number has been an estimated 2,600 to 2,800. Instead of function rooms in a hotel, the conference has expanded to fill a convention center. Instead of a gathering of close-knit propellerheads, this Summit has seen new faces with a distinctive corporate air about them.

None of the above has gone unremarked-upon by conference organizers and presenters, of course, not by a long shot. This is "The Year of the User," according to a keynote presentation by OpenStack Foundation executive director Jonathan Bryce. Tuesday's sessions were a coming-out party for several household-name companies that run OpenStack, including BestBuy.com, PayPal, Samsung and Comcast. An HP session Wednesday was titled, "OpenStack to Enterprise: Boldly go…"

But while the growth of the Summit, as well as the buzz around OpenStack, has been undeniable, the industry remains in an early adopter phase with this technology. The companies presenting Tuesday were impressive and well-known, but note that it was BestBuy.com architects who showed up to present, not the Best Buy enterprise itself. PayPal, too, is a Web company; Samsung, a technology purveyor; Comcast, a service provider.

In other words, OpenStack appears to be in a boat very similar to the one Amazon Web Services (AWS) currently finds itself in, albeit a small fishing vessel compared to AWS' hundred-foot yacht. In either case, the messaging is all about the enterprise, but scratch the surface, and the product is still all about the cutting-edge Web developer.



April 3, 2013  9:37 PM

BMW accelerates private to hybrid cloud

Michelle Boisvert

Sure, security is an important concern for any company moving to the cloud. But enterprise worries run much deeper than that.

As large enterprises try cloud computing — either by moving specific workloads to public cloud or by adding a fully automated private cloud — factors such as federation, automation, common management policies and transparency will surface.

When BMW embarked on its cloud project in 2008, its primary goal was to standardize technology across multiple data centers and business units, plus get better quality at a lower cost: the "Golden Egg" for most enterprises.

“We are nearly at the end of traditional infrastructure,” said Mario Mueller, vice president of IT infrastructure at BMW. “We had clear targets: zero downtime; and with the solution we had, that wasn’t possible.”

But even a long-established company such as BMW, with skilled IT teams in locations throughout the world, had questions about where to start with cloud. “How do you do all the automation?” Mueller said. “How do we implement security? How do we do the identity management?”

Mueller and his team at BMW looked to the Open Data Center Alliance (ODCA) for guidance on building a private cloud to tackle those questions and, ultimately, to get the agility, speed and uptime they had hoped for from the technology.

Mueller also happens to be chairman of the ODCA, which was established in 2010 and aims to create a unified voice for cloud customers. More than 300 companies are members and look to the group for examples of cloud applications that help show the way.

Private cloud: Just the beginning

It was clear from the start that private cloud wasn’t the end-game for BMW, said Mueller, nor should it be.

“The real target for most enterprises is the hybrid [cloud] model,” he said. “We have use of a new data center in Iceland where we do high-performance computing; we will get into the hybrid cloud model there.”

Benefits of cloud computing may not be immediate; it takes some time to get things right. Enterprises need to establish a successful private cloud first — and get all the benefits they can there — before moving workloads out of the company, Mueller emphasized.

But in the end, it doesn’t matter which technology you’re using. “It’s all about cost, quality, compliance and security in the infrastructure,” he added.


March 28, 2013  6:27 PM

Rackspace woos developers with Exceptional acquisition

Beth Pariseau

Rackspace hopes its second acquisition in as many months will increase its appeal to application developers.

The new buy, announced Thursday, will see the employees and assets of Exceptional Cloud Services, based in San Francisco, Calif., join Rackspace as a wholly owned subsidiary. This deal follows a similar acquisition of Object Rocket last month.

Exceptional has three sub-properties that got Rackspace interested:

  • Exceptional.io – tracks errors in over 6,000 Web applications.
  • Airbrake.io – collects errors generated by applications and aggregates the results for review.
  • Redis To Go – hosts the open source Redis key-value store for customers.

Exceptional's CEO Jonathan Siegel said yesterday that his company's ideal customer is one whose end users will have major issues if the customer's application doesn't function properly. The highest value of its products is realized by customers with multiple clients, such as browsers and phone operating systems, that need to access the customer's applications simultaneously.

In other words, Web developers.

The next logical question, then, is whether Rackspace plans to integrate Exceptional's IP with its Cloud Sites Platform as a Service (PaaS).

Cloud Sites currently supports programming using the PHP and .NET frameworks, but Rackspace now has multiple properties that could theoretically expand the underpinnings of Cloud Sites: MySQL as a service, the MongoDB NoSQL database and now an in-memory database service in Redis To Go. Meanwhile, Amazon, as usual, is the elephant in the room; Rackspace is looking to edge in on territory to which Amazon Web Services has already staked a firm claim: the next generation of Web developers.

I asked whether these acquisitions will give Cloud Sites a shot in the arm, and essentially got elevator music in response. It’s a fair bet, though, that Rackspace will look to boost its integrated offerings to reach developers to the extent Amazon has.


March 6, 2013  8:29 PM

Amazon throws its hat into cost-cutting ring

Beth Pariseau

Earlier this week, I reported on some tools that help shave considerable sums of money off of companies’ Amazon Web Services bills. No sooner had that report been filed than I came across an Amazon announcement of new features for its own Trusted Advisor tool on its official blog that had been posted the day before.

Trusted Advisor identifies cost inefficiencies, but also advises users of Amazon Web Services (AWS) on security gaps, high-availability misconfigurations and performance bottlenecks in their deployments. It’s available to users with a Business or Enterprise level of premium support from Amazon.

One of the users of third-party cost efficiency tools I interviewed for the previous article, Andres Silva of Inmar Inc., said he’ll probably use both Trusted Advisor and software from Cloudyn, especially since AWS is offering a free trial this month.

“Trusted Advisor now has things that Cloudyn doesn’t have yet, like security reports,” Silva said.

Two days later, AWS made the biggest cost-cutting move of all by reducing its prices for Reserved Instances with a Linux OS, in some cases by almost 28%.
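To see what a cut of that size means over a Reserved Instance's term, here is a quick worked example. The prices are hypothetical placeholders, not Amazon's actual 2013 rate card; only the roughly 28% reduction comes from the announcement.

```python
# Illustrative only: hypothetical prices, not AWS's published rates.
old_upfront, old_hourly = 300.00, 0.040  # hypothetical 1-year Linux RI
cut = 0.28                               # the "almost 28%" reduction

new_upfront = old_upfront * (1 - cut)
new_hourly = old_hourly * (1 - cut)

hours_per_year = 8760
old_annual = old_upfront + old_hourly * hours_per_year
new_annual = new_upfront + new_hourly * hours_per_year

print(f"old: ${old_annual:.2f}/yr, new: ${new_annual:.2f}/yr")
print(f"first-year savings: ${old_annual - new_annual:.2f}")
```

Because both the upfront fee and the hourly rate drop by the same percentage in this sketch, the total annual cost falls by the same 28% — real savings would depend on which instance types and terms Amazon actually repriced.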

Naturally, Silva was also happy as a clam about this, given his company is about to purchase some new Reserved Instances. However, those who have already purchased Reserved Instances won’t be so lucky — they’re locked in to previous pricing. Furthermore, commenters on Amazon’s post were also a little peeved about the lack of love for Windows.

This news came out less than a week after a price reduction by rival Rackspace for its cloud bandwidth, Cloud Files and content delivery network (CDN) services, continuing an ongoing pricing war in the public cloud that also includes HP.


March 6, 2013  6:43 PM

IBM preps bridge from OpenStack to AWS

Beth Pariseau

IBM introduced its SmartCloud Orchestrator tool on Monday amid a lot of verbiage about supporting open standards that we’ve all heard before. But if you scratch the surface of the product it put into beta, it’s actually quite interesting.

SmartCloud Orchestrator is software based on OpenStack APIs, including Nova for compute, Quantum for networking and Cinder for block storage; it also throws in what IBM calls patterns (sort of like templates) for application deployment.

But the most interesting feature may be an adapter IBM engineers have written that translates between OpenStack and Amazon cloud APIs. It only requires users to know OpenStack as a kind of “lingua franca” of the cloud, according to Mohamed Abdula, director of SmartCloud foundation strategy and portfolio management for IBM.
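IBM has not published how its adapter works, but the general idea of an API translation layer can be sketched. Everything below is hypothetical: the field mappings are illustrative pairings of Nova server-create arguments with EC2 RunInstances parameters, not IBM's actual implementation.

```python
# Hypothetical sketch of an OpenStack-to-EC2 translation layer.
# The mappings are illustrative; IBM's real adapter is not public.

# Rough equivalences between Nova "create server" fields and
# EC2 RunInstances parameters.
NOVA_TO_EC2_PARAMS = {
    "imageRef":  "ImageId",        # Glance image -> AMI
    "flavorRef": "InstanceType",   # flavor -> instance type
    "key_name":  "KeyName",
    "min_count": "MinCount",
    "max_count": "MaxCount",
}

def translate_nova_request(nova_server: dict) -> dict:
    """Map an OpenStack-style server-create body onto an EC2-style
    parameter dict, dropping fields with no EC2 counterpart."""
    return {
        NOVA_TO_EC2_PARAMS[key]: value
        for key, value in nova_server.items()
        if key in NOVA_TO_EC2_PARAMS
    }

request = {"name": "web-01", "imageRef": "cirros-0.3",
           "flavorRef": "m1.small", "min_count": 1, "max_count": 1}
print(translate_nova_request(request))
```

A real adapter would also have to reconcile behavioral differences (authentication, error formats, asynchronous versus synchronous responses), which is why "users only need to know OpenStack" is the notable claim here.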

Another user-friendly feature is compatibility with Opscode’s Chef, such that clients would not need to throw away or rewrite Chef recipes; SmartCloud Orchestrator can reuse them from within its UI, Abdula said.

Amazon and OpenStack are often put at odds with one another, and it is OpenStack's stated mission to compete with AWS. At the same time, however, compatibility with AWS APIs is a foundational concept within OpenStack, as are automation and portability between clouds.

IBM looks to put its money where its mouth is when it comes to those concepts.


February 27, 2013  7:39 PM

Rackspace puts Object Rocket in its pocket

Beth Pariseau

Rackspace is jockeying for position aboard the advanced infrastructure services bandwagon with its acquisition today of Object Rocket, a MongoDB Database as a Service provider.

Rackspace is hardly the first to offer a Database as a Service (DBaaS) or even MongoDB as a service — other vendors like Amazon Web Services (AWS) and SoftLayer have beaten them to the punch. But, the newly merging companies claim, the Object Rocket service is faster and offers more predictable performance than competitors.

This notion is further detailed in a blog post by Rackspace’s DevOps team that features performance benchmarks; the post claims that Object Rocket’s service showed more consistent throughput and consistently low latency compared with two other MongoDB services running on — you guessed it — Amazon.

“It’s basically become a standard see-saw,” is how Carl Brooks, an analyst at 451 Group, summed things up today. “AWS launches a crazy, interesting new service, customers try it out, find the performance is non-deterministic, another provider says, ‘Hey, I can do that a jillion times better on my iron!’ and the world continues to revolve.”

For those not in the know, MongoDB is among the most rapidly growing NoSQL databases, which are mainly used in large Web applications that require rapid access to objects or documents that are delivered whole, rather than assembled out of pieces located in the rows and columns of a traditional relational database. It’s used by eBay, Disney, FourSquare and other household names, though none of those blue chips are Object Rocket customers, specifically.

It’s unclear so far how Rackspace’s customer base will respond. A few calls I put in today to IT pros who run on Rackspace produced long pauses and a disinclination to comment on the new offering until more is known about it.

Could this acquisition be filling in a corner case, rather than appealing to the meat-and-potatoes enterprises among Rackspace’s customers? Time, of course, will tell…


February 25, 2013  7:12 PM

Bare-metal approach may ease cautious enterprises into cloud

Ed Scannell

The idea of migrating existing workloads to public clouds got a bit of a boost earlier this month when Racemi disclosed its migration software would support SoftLayer's popular CloudLayer platform. Racemi sweetened the deal with a time-limited offer of $99 per migration.

Rather than cost savings, the more important aspect of the deal is SoftLayer's "bare-metal cloud" approach, which allows customers to customize their hardware infrastructure, from processors to storage to high-speed networks. What also enhances the company's offering is that CloudLayer doesn't require any lengthy contractual commitment from customers, who only pay for the resources they need.

IT shops commonly voice objections about moving workloads to the public cloud for several reasons — from security risks to unknown ROI factors to the paranoia of moving their mission-critical data outside their four walls. These fears often turn into inertia, delaying decisions to move to the public cloud indefinitely.

SoftLayer’s approach could give some consumers more confidence to move forward by providing them more control over what hardware infrastructure to choose and just how much they want to pay for it.

The bare-metal cloud approach takes the hypervisor out of the mix, which can increase the raw processing power of a consumer's hardware infrastructure. This approach may help not only those suffering from inertia about moving to public clouds; it may also give an incentive to the growing number of shops with existing cloud implementations that are thinking about handling "big data" and large databases there.

CloudLayer is made up of virtual servers, remote storage and a content delivery network. Each CloudLayer service can work in standalone mode or be integrated with a number of dedicated servers and automated services using one private network and management system.

“We think it can be this easy to create and control a hybrid computing environment, for instance, that is interoperable,” said Marc Jones, vice president of product innovation at SoftLayer.

What might also boost user confidence in this approach is the fact that SoftLayer is one of the largest privately held cloud infrastructure providers, with 13 data centers, and that it picks off a few Amazon Web Services customers every once in a while.

“We don’t aggressively try to steal Amazon customers, but we do pick some up, particularly those that have performance issues where they need high disk I/O and higher network speeds,” Jones said.

For its part, Racemi updated its Cloud Path Software as a Service (SaaS) offering and DynaCenter on-premises software to support physical and virtual server migrations to CloudLayer. Customers can also automatically migrate cloud instances from other cloud providers to CloudLayer.
