SOA Talk

March 28, 2012  11:58 AM

Cloud computing, virtual image sprawl and labor costs

Profile: Jack Vaughan

COMMENTARY – The virtual machine image, a powerful driver of cloud computing, may be described as a tiger few can easily ride. VMs are proliferating. Earlier this month, no less a personage than IBM’s Daniel Sabbah forecast that virtual image sprawl would outgrow IT’s capacity to keep pace.

“Virtual images are tripling every two years, outpacing the doubling in compute power and essentially flat IT budgets,” IBM Tivoli Software General Manager Sabbah said in a statement coinciding with IBM’s Pulse Conference.

“With current operating practices, every two years you’d need 1.5 times the physical infrastructure to support cloud and twice the labor. That’s an unsustainable cost and management problem which is the exact opposite of the promise of cloud,” he continued, as he outlined benefits of IBM’s new SmartCloud Foundation offerings. While public cloud providers can be expected to ramp up to manage ultra-large configurations, it is more difficult to see how labor issues will affect the much discussed user-side cloud type known as private cloud.

Will work for cycles

The labor issue is a stubborn one, and it must be factored into the cloud computing ‘what if?’ analyses that enterprise architects are now undertaking. Cost savings are crucial to the dream of cloud, but greater experience with this architecture leads many to downplay them.

Various companies have been working to address the labor issues of cloud, which is a massively scaled architecture that calls for sophisticated and on-demand provisioning of increasingly complex configurations and many virtual images.

The effort is not new; the labor problem goes back a long way. It has certainly been a concern as distributed computing and rack-based blade servers have multiplied. The movement toward grid and autonomic computing looked to address the challenge, and now cloud and even DevOps can be seen contending to solve the problem, but remedies have yet to take hold.

The poster children for the first rush of cloud – Google and Amazon – can be said to have “thrown people at the problem,” as both employed high head counts of developers to service vast farms of servers. And those developers are very advanced ones at that. The classic Google ranch hand is a math and algorithmic wizard who is also adept at systems programming. In Google’s early days, at least, this person combined development and operations skills to a startling degree.

Is cloud computing hugely labor intensive?

We wondered if other companies can repeat this model. So, when we caught up with Skytap’s Brian White at this week’s EclipseCon 2012 in Reston, Va., we asked for his take. As vice president of products at cloud provider Skytap, White is responsible for product strategy and product management. Before this, he was director of developer resources for Amazon Web Services and launched the AWS Elastic Beanstalk Platform as a Service offering. We asked whether running a cloud is as labor intensive as it seems.

“It’s hugely labor intensive,” White answered, and for good reason. “There are things that make [public cloud] a challenge. One is keeping it up and running all the time.” Another, he said, is the fact that the number of servers you can deploy may be relatively modest. “You don’t have unlimited capacity for scaling,” he said.

Where cloud approaches have the most value, White and others have concluded, is where resource needs are unpredictable or irregular. That is why Skytap and many other cloud providers focus on the development and test markets.

Development and test tasks make for a dynamic workload, he said, adding “from a cost perspective you don’t need to have these projects running 24/7.”

For cloud, “there is a huge amount of hype around cost,” said White. “The real benefit people are getting out of it is agility – much more than just pure cost reduction.”

Continuous deployment

While it is largely a beneficial trend, the move to Agile development further exacerbates the cloud planning dilemma architects face. This was brought home in conversation with Dave West, an analyst at Forrester Research, who spoke on Lean development at EclipseCon 2012. He showed that deployment is no longer an end-of-the-Waterfall-lifecycle event; it is now a constant companion, because part of the Agile goal is to deliver bits of functionality as they become available.

The new styles of deployment requirements are certainly an issue with which cloud computing administrators – as well as developers and architects – are going to have to deal. Here, cloud may drive change. It is shedding light on dark problems.

“Cloud is an interesting phenomenon,” said Forrester’s West. “I am excited about what it is doing to drive internal IT to think about its systems in a different way.” – Ryan Punzalan and Jack Vaughan

In the face of fairly rampant fear of placing data on a public cloud, much attention has been placed on private cloud – but labor and cost issues may unsettle such undertakings. What do you think?

March 20, 2012  1:00 AM

A “simple” approach to cloud computing

Profile: Jack Vaughan

The idea of ‘eventual consistency’ was an Ah-Ha! moment in the history of e-commerce. With it, Amazon was able to throw away the traditional playbook of transaction processing. But ‘eventual consistency’ is not a blank check. It requires developers – some of them, anyway – to make a large new conceptual leap.

The notion arose from our recent conversation on SimpleDB with Christopher M. Moyer, vice president of technology at Newstex LLC, who spoke with us on the topic of the Amazon cloud. The topic was no coincidence – Moyer is the author of Building Applications in the Cloud (Addison-Wesley, 2012), a book that neatly describes many basic patterns of cloud computing based on concrete examples.

“SimpleDB is one of the hardest databases to comprehend,” Moyer said. “Everyone is used to the idea that, if they write a record to the DB, it will be there.”

But the standard approach of SimpleDB is that marvel, eventual consistency, which is great but unfamiliar to a large legion of developers.

It is a difficult topic for developers to understand and work with in their systems, he told us.

Amazon, like other cloud pioneers, has seen a way toward supporting the familiar, but this too needs special attention.

“They have worked to address [the gap] with a ‘consistent mode,’ but you should be aware that it can affect your performance and stability,” Moyer said.
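The trade-off Moyer describes can be seen in a toy model. The sketch below is not the SimpleDB API itself (in client libraries such as boto the choice roughly surfaces as a consistent-read flag); it is a minimal, hypothetical simulation of why an eventually consistent read may miss a write that a consistent read sees:

```python
import random

class EventuallyConsistentStore:
    """Toy model of an eventually consistent key-value store.

    Writes land on one replica immediately and reach the others only
    when propagate() runs -- so a plain read may return stale data,
    while a "consistent mode" read always consults the freshest replica
    (at extra cost, as Moyer warns).
    """

    def __init__(self, replicas: int = 3) -> None:
        self._replicas = [dict() for _ in range(replicas)]
        self._pending = []  # writes not yet copied to every replica

    def put(self, key: str, value: str) -> None:
        self._replicas[0][key] = value        # the primary sees it at once
        self._pending.append((key, value))

    def get(self, key: str, consistent: bool = False):
        if consistent:
            # Consistent mode: always read the primary.
            return self._replicas[0].get(key)
        # Eventual consistency: any replica may answer, stale or not.
        return random.choice(self._replicas).get(key)

    def propagate(self) -> None:
        # Background replication finally catches up.
        for key, value in self._pending:
            for replica in self._replicas[1:]:
                replica[key] = value
        self._pending.clear()
```

Immediately after a put(), a consistent read is guaranteed to see the value, while a plain read may come back empty; once propagate() has run, all reads agree.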

“Simple is not always simple,” the editor said.

March 16, 2012  7:59 PM

Aggregating cloud services brokerage enhances app management

Profile: Jack Vaughan

As SOA goes mainstream and cloud services proliferate, will traditional SOA repositories come to look more than a little like app stores?

That notion may be farfetched. But some of the newer cloud marketplaces bear watching. They may betoken a day when SOA services will be sold and versioned like other online offerings.

Among the so-called cloud marketplaces are aggregating cloud services brokerages like that from AppDirect. The company recently enhanced its online application marketplace, releasing a Marketplace Manager that enables users to oversee the components and settings of cloud services that they offer to the public or to third parties. Continued »

March 5, 2012  3:36 PM

Node.js: Bubbling up from JavaScript

Profile: Brein Matturro

Jack Vaughan, Site Editor

Bubbling under the mainstream of computing these days is the fast-growing phenomenon of Node.js – a framework built on V8, the JavaScript virtual machine behind Google’s Chrome browser, providing a type of server-side JavaScript experience.

Node.js employs an event-driven architecture and a non-blocking I/O model, and it provides blindingly fast performance for some types of data-intensive Web apps. It takes a very lightweight component approach that distinguishes it from even the lightest of the lightweight Java servers while, some would say, harkening back to the C and C++ servers that predated the Java server. Node.js is written in C and C++ and, on one level at least, it seems the province of the system programmer rather than the typical application developer. But it is programmed via JavaScript.

LinkedIn, Yahoo and eBay are among ardent Node users (“js” is sometimes jettisoned and it’s simply called “Node”), and a recent West Coast Node conference was graced by none other than Microsoft, which is toying with end-to-end JavaScript coverage on its Azure cloud. But it may be said that none are more out front on the Node.js wave than IaaS cloud provider Joyent, which went ahead and hired Node.js creator Ryan Dahl. As a cloud provider, Joyent is driven to optimize performance of its server farms, especially for Web application handling.

We recently caught up with Joyent CTO and co-founder Jason Hoffman to learn more about Node.js. We asked why Joyent took the Node.js route. He said:

“Why we did it is, at Joyent we have a lot of servers, more than most companies in the Fortune 500, and we write in C, in a compiled language. We needed to write servers in a dynamic language for talking to certain protocols. Basically, we had to write service endpoints. The Node part of Node.js is separate. It is designed so that it can handle a lot of endpoints – on the order of a million. Most things written for the [Java Virtual Machine] can only handle 20,000 [endpoints]. Node is meant to handle a lot of I/O. So we took the Node part and married that with V8 [the JavaScript virtual machine from Google].”

Node.js comes without the baggage of early platforms or frameworks. “It has no history of blocking,” said Hoffman, describing classic programming languages as “non-event-driven languages.” He suggests that the idea of client-side JavaScript turning around and running on the server side would not be possible if it weren’t paired with the V8 JavaScript VM, which acts like an application server, or container.

The VM enables an event-driven server, he said. Node, in effect, becomes a framework that shows you how to write a server in JavaScript.

“It’s meant to be very easy. It’s meant to let someone write a server,” said Hoffman. “When we look at the general interest – most businesses are having to do API endpoints today. When you let more people connect via mobile devices, you have a lot more people connecting. Rather than having to have hundreds of servers, you can add two or three.  Node.js is just a very easy way to write endpoints.”
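The economics Hoffman describes – a few servers standing in for hundreds – come from the event-loop model: one loop multiplexes thousands of slow connections instead of dedicating a thread to each. Node expresses this in JavaScript on V8; purely as an illustrative analogy, the same pattern can be sketched with Python’s asyncio (the endpoint names and counts below are invented for the demo):

```python
import asyncio
import time

async def handle_endpoint(i: int) -> str:
    # Simulated non-blocking I/O: while this "connection" waits,
    # the single event loop is free to service every other one.
    await asyncio.sleep(0.1)
    return f"response-{i}"

async def serve(n: int) -> list:
    # One event loop, n concurrent endpoints, no thread per connection.
    return await asyncio.gather(*(handle_endpoint(i) for i in range(n)))

if __name__ == "__main__":
    start = time.monotonic()
    responses = asyncio.run(serve(1000))
    # All 1,000 waits overlap, so total time is close to a single
    # 0.1-second wait rather than 1,000 of them in sequence.
    print(f"served {len(responses)} endpoints in {time.monotonic() - start:.2f}s")
```

Because the waits overlap on one loop, a thousand simulated endpoints finish in roughly the time of one, which is the intuition behind Node’s million-endpoint pitch.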

Is it easy enough for a developer legion to grapple with? Maybe, as long as tools that cover over the complexity for the enterprise come into being. Node.js is a far cry from Web services, representing in a way a new take on some earlier pedal-to-the-metal architectures. But it may help overall to speed and streamline REST services, especially in mobile app settings. It seems poised, perhaps, to give a further lift to the often maligned JavaScript language, which got a just-in-time boost via the flood of Ajax frameworks that arose almost ten years ago.

March 2, 2012  3:31 PM

From incubator to top-level project: Evolution of Apache Deltacloud

Ryan Punzalan Profile: RPunzalan

The Apache Software Foundation recently announced that Apache Deltacloud has graduated from the Apache Incubator to become a top-level project.

“We are thrilled to have the project’s growth and maturity recognized by The Apache Software Foundation,” said David Lutterkort, chair of the Apache Deltacloud Project Management Committee and principal software engineer at Red Hat. “We’ve shown that we have made progress and that Deltacloud gets to stay. We’ve also shown that we have a strong, vibrant community.”

Deltacloud was begun more than two and a half years ago in response to concerns over the infrastructure-as-a-service cloud landscape. “One of the things that really struck us was that there wasn’t really a way for users to avoid vendor lock-in,” Lutterkort said. “So we developed Deltacloud as a way to define an API within an open source project.”

After speaking with customers and vendors about the project, Lutterkort said that Deltacloud was brought to the Apache Incubator because users didn’t feel comfortable with Deltacloud only being a Red Hat project. Since then, it has gained supporters.

“While in the Incubator, the Deltacloud API evolved to the point where products can use the Deltacloud API and not have to worry about differences in cloud APIs,” Lutterkort said.
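Deltacloud itself is implemented as a Ruby-based REST service with per-provider drivers. Purely to illustrate the driver pattern Lutterkort describes – one neutral API, many pluggable back ends – here is a hypothetical sketch in Python (the class and method names are invented for illustration, not Deltacloud’s own):

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One neutral interface; each provider supplies a driver behind it."""

    @abstractmethod
    def start_instance(self, image_id: str) -> str:
        ...

class FakeEC2Driver(CloudDriver):
    def start_instance(self, image_id: str) -> str:
        # A real driver would call the provider's native API here.
        return f"ec2-instance-for-{image_id}"

class FakeEucalyptusDriver(CloudDriver):
    def start_instance(self, image_id: str) -> str:
        return f"euca-instance-for-{image_id}"

def launch(driver: CloudDriver, image_id: str) -> str:
    # Application code sees only the neutral interface, so swapping
    # clouds means swapping drivers, not rewriting the application.
    return driver.start_instance(image_id)
```

Calling launch() with either driver yields an instance from the corresponding cloud, while the calling code stays unchanged – the portability property the project is after.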

Vendors such as Amazon, GoGrid, IBM, Microsoft and Eucalyptus have all worked with Red Hat on the development of Deltacloud. David Butler, VP of marketing for Eucalyptus, views the graduation of Deltacloud as a big step for Apache.

Eucalyptus has contributed drivers that let Deltacloud support the company’s platform. Along with the contribution by Eucalyptus, the cloud community has contributed other drivers that have made the project more versatile.

Moving forward, Deltacloud is looking at a few things. “The next piece is to see it in action,” Butler said. “See how it evolves, and since it is Apache licensed, see how certain companies may include it in their products.” Aside from the products, the project also hopes to make life easier for users.

“We’re looking to establish an open source de facto standard by having everyone rally around the idea of implementation,” Lutterkort said. “And we hope that the implementation will help users in becoming more portable across clouds.” – Ryan Punzalan

March 1, 2012  4:17 PM

Application modernization Platform as a Service puts focus on business agility

Profile: Jack Vaughan

Among all the tasks that face the enterprise looking to modernize, improving business agility is first, according to Andy Gordon, Application modernization Platform as a Service (AMPS) director for Unisys. SOA is a part of that effort, he says, because business agility means you are “developing capabilities that are built to change.”

Today, “the parts are interoperable,” he said, noting that “APIs have now become products for a lot of companies and public sector agencies.” These public APIs must be flexible and able to support an increasingly broad user base.

“APIs are now recognized as a first-class revenue generator that is solidifying the need to have a service-oriented enterprise – one with the expertise to do services and to be agile,” he said.

Gordon said Unisys is rolling out new services, known as the AMPS Center of Excellence, to help companies improve their application modernization initiatives. The services suite includes an AMPS SOA Governance offering and an AMPS SOA Operational Software Platform. Some of the new parts are supplied via deals with other software providers such as EMC Documentum, SOA Software and Red Hat JBoss – with special Unisys-tailored customizations based on the company’s extensive work in the field.

SOA infrastructure provides a useful “backplane,” according to Gordon, to help orchestrate and manage the new style of API. “A SOA management intermediary is valuable,” he said. “It brings the management of APIs, security, logging, protocol mediation and a dashboard for watching services activity.”

Still, like others, Gordon emphasizes that “SOA is something you do, not something you buy.” As a result, SOA Assessment services and SOA Strategy services are part of the AMPS Center of Excellence.

We asked Gordon to share a few useful tips for achieving a successful enterprise SOA. He noted three elements that need to be in place. These follow.

3 Tips for Laying the Strategic Groundwork of a Successful Enterprise-Wide SOA

1. A prioritization process for requirements that emphasizes enterprise priorities over departmental priorities.

2. A highly transparent, participative governance process comprising all stakeholders, including a virtual team of service providers led by the SOA Program Director, to ensure uninterrupted support for the SOA initiative.

3. An unwavering commitment from an executive sponsor to steadfastly support this organizational transformation.

Setting Priorities
Requirements for services in an enterprise-wide SOA initiative are determined and funded according to the priorities of the enterprise as a whole, rather than those of individual departments. This forces alignment of business with IT, ensuring the goals of SOA are aligned with organizational objectives. That requires participatory governance and communication processes, and especially greater interaction with the business lines.

Sound governance begins with a strategic plan that includes the business goals. These goals, in turn, can be transformed into IT requirements with a clear line of sight from business goal to IT requirement, followed by high-level design specifications through testing, deployment, maintenance and application end of life.

Executive Commitment
A key responsibility of the executive sponsor (or their delegate) is to be the final decision-making authority when the participative governance process reaches a stalemate. The stalemate may occur during requirements prioritization, or there could be a disagreement on the timing for delivering new capability to customers.

February 21, 2012  7:57 PM

The Big Dig and difficult software

Profile: Jack Vaughan

Software is magic – sometimes it’s magic out of control. Bad software projects, SOA or otherwise, need good analogies. So, we talk about The Long March, The Project from Hell and so on. A recent conversation adds a new analogy to the canon: The Big Dig.

When we spoke with MIT Systems Researcher Jeanne Ross, she pointed to Boston’s Big Dig as an archetypal muffed project. Continued »

February 14, 2012  3:56 PM

When business processes meet software services

Profile: Jack Vaughan

When driving school instructors go to action films like “Fast and Furious,” you can bet their impression differs from that of would-be race drivers in the same audience. “They should fasten their seatbelts,” says one. “Vroom-vroom,” says the other. That split often plays out similarly today in business process management.

There, the business side may buy the tools and model the processes, but then leave it to developers in the application integration team to make it all work with enterprise software services.

The business side sees the cool power slides and money in the bank; the development team sees the dangers in impedance mismatch between services and processes. The business side has the vision; the development side cleans up the mess. That is a bridge that must be crossed.

Continued »

February 3, 2012  8:50 PM

Mobile middleware platform uses open source ESB and BAM

Profile: James Denman

Transport for London, the transportation authority for England’s capital city, recently revamped its rider information services with an open source middleware platform. Application development provider Godel Technologies was tasked with upgrading the city’s information services for commuters and tourists. Some of these systems were decades old and in obvious need of new life. Godel Technologies chose a combination of open source ESB and open source BAM to build a middleware platform that supported the reinvigoration of Transport for London’s transportation service applications.

The system supports both one-off inquiries (a user sends a text with their current location and destination and the system replies with the appropriate transit route) and subscriptions to find out about emerging changes in train schedules (for example, a commuter might sign up to find out ASAP when scheduled maintenance affects their route to work).

The backbone of the new system is built on an open source enterprise service bus (ESB) and the associated open source business activity monitoring (BAM) system from WSO2. According to Simon Bidel, head of professional services at Godel Technologies, using a service-oriented architecture with an ESB to separate all the diverse services has made the system easier and less costly to implement.

Continued »

February 3, 2012  8:10 PM

IBM buys in to mobile middleware, acquires Worklight

Profile: Jack Vaughan

The drive to ‘develop once and deploy everywhere’ has become more acute as small and big enterprise IT shops have needed to support a wide array of mobile devices. This has led to the appearance of mobile middleware that acts as a moderating stage between the enterprise backend and the mobile frontend.

“One of the challenges for companies is to keep up with the pace of mobility. It is difficult – updating [mobile apps] in some cases on a monthly basis,” said Steve Drake, analyst at IDC. That is where the mobile middleware trend gets its impetus.

The trend proved vibrant enough this week for IBM to scoop up Israel-based mobile middleware house Worklight for an undisclosed sum. Continued »
