By George Lawton
Cloud computing offers an extremely appealing future for enterprise IT: the ability to leverage ready-made services with a lower initial investment and a pay-as-you-need pricing model. But it will be filled with surprises, and many of them will parallel the frustrations early SOA adopters experienced. The more loosely coupled and distributed a cloud-based system becomes, the more moving parts you don't control. "As a consequence, the development cycle in the cloud ends up costing more money than expected," said John Michelsen, co-founder and chief architect of iTKO.
Is there a hidden cost to cloud development? Companies attempting to leverage new software models such as SaaS, third-party services, data-oriented services and cloud computing need to heed the pitfalls encountered in early SOA efforts. Their applications must integrate with functionality built by outsiders, and those outside providers can change a service without notice. In addition, hidden costs and constraints come with testing in the cloud.
One of the first challenges is that cloud functionality is created outside the organization, yet it still needs to be tested. Michelsen said that just because a set of functionality worked for another similar application does not mean it will work for yours. It is important to test the functionality of applications you did not build.
The adage that followed Sun's Java promise of "write once, run anywhere" was that you had to "write once, test everywhere." Michelsen said the challenge is that even though an application only gets written once, its performance can vary widely depending on its particular deployment circumstances.
The problem is that developers don't write the requirements document for the services they are using. They need to perform some kind of analysis, preferably by working with the cloud service vendor, or build their own compliance step to check their expectations against reality.
The second issue is the need to continually validate an application against a set of service functionality. After the application is deployed, the service provider can make changes that could adversely affect the application without notice. The stuff that works today could break tomorrow. Michelsen said, “It’s not like you get to participate in the user acceptance phase of the new version of Google services.”
In traditional application development, the app is tested every time a change is made. In the cloud, that system of validation is no longer in place, and without continuous validation the service becomes brittle.
For example, one real estate firm that worked with iTKO incorporated a mapping component into its web site. Initial testing indicated that the mapping service worked flawlessly. But several months after having launched the site, they discovered that the map component was displaying the wrong address. They only found this out after one of the agents got lost trying to find a property. If the service had simply stopped working, they might have noticed much more quickly.
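A lightweight, continuously run validation harness can catch this kind of silent drift. The sketch below checks a hypothetical geocoding response against an expected contract and a known-good reference address; the endpoint shape, field names and coordinates are illustrative assumptions, not iTKO's product.

```python
# Minimal sketch of continuous service validation against a third-party
# service you did not build. The contract and the known-good reference
# values are invented for illustration.

EXPECTED_CONTRACT = {
    "street": str,
    "city": str,
    "latitude": float,
    "longitude": float,
}

def validate_response(payload: dict) -> list[str]:
    """Return a list of contract violations found in a service response."""
    problems = []
    for field, expected_type in EXPECTED_CONTRACT.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return problems

def check_known_address(payload: dict, expected_lat: float,
                        expected_lon: float, tolerance: float = 0.01) -> bool:
    """The service should still geocode a fixed test address to roughly
    the same coordinates it returned at launch."""
    return (abs(payload["latitude"] - expected_lat) <= tolerance
            and abs(payload["longitude"] - expected_lon) <= tolerance)
```

Run nightly against the live service, a known-address check like this would have flagged the real estate firm's wrong-map problem long before an agent got lost.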
After an application has been tested and deployed, paying per unit of service (storage, CPU time and bandwidth) allows a new app to be deployed at minimal cost. But these metered costs can take a hidden toll when a company has to do any sort of testing, particularly load and performance testing of a new app.
The problem is that the IT department is used to paying up front for hardware, rather than being presented with a bill after services are rendered.
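A back-of-envelope calculation shows why load testing stings under metered pricing. The rates below are invented placeholders, not any provider's actual prices.

```python
# Rough estimate of the metered bill for one load-test run. A load test
# deliberately burns CPU and bandwidth, so under per-unit pricing every
# test run shows up on the invoice. Prices are illustrative only.

def load_test_cost(cpu_hours: float, gb_transferred: float,
                   price_per_cpu_hour: float = 0.10,
                   price_per_gb: float = 0.17) -> float:
    """Return the estimated cost in dollars of a single load-test run."""
    return round(cpu_hours * price_per_cpu_hour
                 + gb_transferred * price_per_gb, 2)
```

A nightly regression suite multiplies that single-run figure by every run, which is the bill the IT department never saw when hardware was paid for up front.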
Another constraint is that managing test data across multiple services is significantly more difficult. Regression testing requires consistent data across different systems: you might need to create a set of dummy customer accounts in eBay, Amazon and Salesforce.com, then delete them at the end of the test. This is especially important for testing complex functionality, such as using business logic to present new offers.
Michelsen explained, "In the pre-cloud world, one of the challenges a testing team faces is that every time they need to test a piece of functionality, they need the data scenario to be present. For example, if you need to trigger a special offer to a card user with a low balance, that data has to exist in the system. It gets hard to create these scenarios when these systems are off-premise. I spend two days resetting the environment for a two-hour test cycle."
SOA guidelines, if heeded, no doubt pave a better way to the cloud. Methodologies and tools developed for SOA testing by companies such as iTKO may help teams establish stronger service-level requirements, continuously validate functionality and performance, and virtualize away the constraints of fees and limited access to critical services in the development lifecycle.
Related SOA test information
SOA complicated by ESB proliferation - SearchSOA.com
Last week, WSO2 released an update to its SOA framework tailored around OSGi. Rob Hailstone, an analyst at The Butler Group, told us he rated the WSO2 implementation favorably because it "does seem to have built out a more comprehensive set of features than most of the competitive open source offerings." Continued »
By Yuval Shavit, Associate Editor, SearchWinDevelopment.com
With the release of Moonlight 1.0 last week, I had the opportunity to speak with Novell's Miguel de Icaza about OS interoperability as it relates to RIAs. Moonlight is the Linux port of Microsoft's RIA platform, Silverlight.
One of Silverlight’s advantages over Flash is that it includes a subset of .NET, including WCF for easy communication with the servers that do the real work behind RIA widgets. When Silverlight was a Windows-only platform, interoperability wasn’t an issue; companies that wrote Silverlight apps probably had Windows servers, so the client and server ran the same platform.
But Moonlight brings Linux into the fold on the client side, and that raises the question of interoperability. For now, the question is largely one-directional: how can developers design Silverlight apps that run on Linux but talk to Windows servers? But if Silverlight comes to compete broadly with Flash as a platform, companies may start having to worry about Windows clients talking to Linux servers, too.
De Icaza suggested that one solution may be simply to drop WCF, which uses SOAP. Although there's certainly an architectural appeal to creating more cohesive communication between clients and servers, de Icaza said, he's seen a general movement back to the simpler REST style. [Ed. note: Microsoft offers a REST kit for WCF, but SOAP is the more common use. WCF with REST loses an important WCF characteristic: strong types.]
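The editor's point about strong types can be made concrete: a JSON payload carries no schema, so with REST the client must impose and check types itself, work a SOAP/WCF proxy would generate from the WSDL contract. A minimal sketch, with the `CartItem` shape and field names invented for illustration:

```python
import json
from dataclasses import dataclass

# Sketch of what "losing strong types" means in practice for a REST
# client. CartItem and the payload shape are illustrative assumptions,
# not a Silverlight/WCF API.

@dataclass
class CartItem:
    sku: str
    quantity: int
    price: float

def parse_cart_item(raw: str) -> CartItem:
    """Deserialize a JSON cart item and enforce its types by hand."""
    item = CartItem(**json.loads(raw))
    # dataclasses do not enforce annotations at runtime, so the
    # wire-format checking must be explicit.
    if not isinstance(item.sku, str) or not isinstance(item.quantity, int):
        raise TypeError("payload does not match expected cart item shape")
    return item
```

Every REST client ends up writing some version of this boilerplate; with SOAP, the contract does it for you.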
But de Icaza was careful to add a disclaimer: his work doesn't focus on enterprise applications. In those tightly controlled environments, using WCF and SOAP still makes sense, he said.
Of course, SOA is all about the enterprise, so that’s a pretty big disclaimer for any architect working behind a corporate firewall. But data from many services within an enterprise eventually finds its way to the user in one form or another, and maintaining two protocols — one for internal communication and the other for external publishing — defeats one of SOA’s main objectives: modularity and reusability of services. What happens if that tax calculation service needs to communicate with a RIA shopping cart next year, for instance?
The saying goes that if you live on the cutting edge, you risk getting cut. SOA architects should keep a careful eye on RIAs, lest they have to re-architect their brand new systems.
One of the old jokes about standards was a slight variation on the joke about New England weather [“If you don’t like it, wait a while, it will change.”]
If you don't like a particular standard, wait a while; another one will come along. Sometimes this is a game vendors play, and they are not always in the wrong. No one can support every possible standard that comes down the pike.
But among the many standards worth tracking today is OSGi. It grew in part out of disaffection with heavyweight Java component containers. OSGi aims to solve many of the problems associated with complex Java JAR file assembly, and it has the support of several notable Java vendors. Still, some people closely watch how much emphasis different vendors put on a standard like OSGi.
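For readers unfamiliar with it, an OSGi bundle is a JAR whose manifest declares, with versions, exactly what it imports and exports, which is how OSGi tames ad hoc JAR assembly. A sketch with invented package names:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.0.0
Import-Package: org.osgi.framework;version="[1.4,2.0)",
 com.example.accounts;version="[1.0,1.1)"
Export-Package: com.example.billing.api;version="1.0.0"
```

Because dependencies are explicit and versioned, the OSGi runtime can resolve, load and even hot-swap bundles without the classpath guesswork of plain JAR files.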
We recently wrote about Sun’s GlassFish, which it is trying to establish as a standard much as Apache server modules became standards. An interesting thread related to this appeared on our sister site TheServerSide.com.
Some of the thread was devoted to jokes about the idea of ‘lightweight’ servers – the thread participants are right; this light versus heavy metaphor can get overplayed. One threadster reduces it to its most absurd extension – that we are near the dawn of the age of ‘zero-gravity’ servers!
Our article on the new commercial GlassFish does not discuss OSGi. In our conversations, Sun touched on OSGi, saying support was up and coming, but spent more time on JBI, a component architecture that does not necessarily exclude (or include) OSGi. On OSGi support, Sun pointed to its GlassFish 3 Prelude software as an example. Our ombudsman notes that this should have been included in the original article. Sun did proudly note the commercial GlassFish's support for languages outside Java.
The most vivid representation of OSGi to date has been Eclipse. Some people are no doubt concerned that Sun's heart lies more with its competing NetBeans architecture than with Eclipse, and that this may influence its view of OSGi.
By Jack Vaughan
Sun Microsystems is attempting to move deeper into the world of open source software with its Sun GlassFish Portfolio. Now included are a lightweight LAMP-style stack (which itself includes Tomcat, Memcached, Squid and Lighttpd, with support for PHP, Ruby and Java), a Liferay-portal-based Sun GlassFish Web Space Server, and a JBI-based ESB. A proprietary Sun Enterprise Manager for monitoring is also available.
GlassFish has arisen as a potential lightweight alternative to established J2EE application server architecture, but Sun likes to note that GlassFish can span from lightweight to heavyweight solutions.
There are indications that development may diverge into a lightweight, or LAMP, camp and a heavier J2EE style. "We want to break down that chasm," said Paul Hinz, chief architect at Sun. He said GlassFish Webstack modules support LAMP development scenarios, which do not require application servers. Yet a GlassFish application server is available too, with Java EE 5 compliance (and a Java EE 6 preview in the offing). That server supports JRuby and Groovy development, marking Sun's embrace of languages beyond the realm of Java.
Sun is hoping to leverage the success of the MySQL database, purchased by the company last year, in packaging that pairs GlassFish with MySQL.
Kevin Schmidt, Sun's manager for infrastructure strategy, said the GlassFish ESB provides routing and messaging using JBI. "The core backbone is not JMS; it is JBI," he said. "It is a normalized message router done in-memory."
The Sun GlassFish Portfolio is said to be available immediately via a subscription-based pricing model starting at $999 per server.
The MySQL merger appears a bit rocky, just as Sun looks to extend MySQL popularity with GlassFish-powered offerings. Both Marten Mickos, former MySQL CEO, and Monty Widenius, a MySQL co-founder, have recently left Sun.
XML tool maker Altova has a new ‘MissionKit’ for 2009. Besides native DB support for SQL Server 2008, Oracle 11g and PostgreSQL 8, it adds support for XBRL, an XML-based markup language for electronic transmission of business information.
XBRL may come into a brighter spotlight soon, as the SEC has moved ahead in mandating its use in certain documentation.
The present SEC XBRL mandate covers large U.S. companies with a "worldwide public float over $5 billion." That translates roughly into the 500 biggest firms. But XBRL may find much broader use.
XBRL grows out of some of the earliest XML efforts to create a 'semantic web' in which machines can handle data intelligently. If the statistics in common GAAP documents can be handled more dynamically, it could save considerable time and money for the organizations bound to process such data.
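The machine-readability XBRL promises is easy to illustrate. The snippet below parses a deliberately simplified XBRL-style instance; real filings involve taxonomies, contexts and units far beyond this sketch, and the element names here are invented, only loosely modeled on US-GAAP tags.

```python
import xml.etree.ElementTree as ET

# A toy instance document in the spirit of XBRL: each financial fact is
# an individually tagged, typed XML element rather than a number buried
# in a formatted report. Names and values are illustrative only.

SAMPLE = """<xbrl xmlns:us-gaap="http://example.com/us-gaap">
  <us-gaap:Revenues contextRef="FY2008" unitRef="USD">1500000</us-gaap:Revenues>
  <us-gaap:NetIncomeLoss contextRef="FY2008" unitRef="USD">200000</us-gaap:NetIncomeLoss>
</xbrl>"""

def extract_facts(doc: str) -> dict:
    """Pull tagged financial facts out of an instance document."""
    root = ET.fromstring(doc)
    facts = {}
    for elem in root:
        # ElementTree prefixes tags with "{namespace}"; strip it off.
        name = elem.tag.split("}")[-1]
        facts[name] = int(elem.text)
    return facts
```

Because every fact is tagged, a downstream system can aggregate or compare filings programmatically instead of re-keying figures from PDFs.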
For more resources on XBRL, see our coverage of the XML-based Extensible Business Reporting Language (XBRL), which companies use to make financial disclosures as part of their Generally Accepted Accounting Principles (GAAP) filings.
There are already quite a few tools available in the XBRL field, but XML tool pioneer Altova's offering should be welcomed by many. Across its various tools, Altova supports XBRL development with intelligent wizards, graphical drag-and-drop, code generation, and links to other tools in the company's suite.
By Jack Vaughan
The Open Group, meeting in San Diego this week, has updated the TOGAF architecture framework. A lot of the work on TOGAF 9 went not just into what an architecture should contain, but into how to make it work, says Judith Jones, CEO of Architecting the Enterprise. Another key enhancement in TOGAF 9 is the introduction of a seven-part structure and the reorganization of the framework into modules.
By Jack Vaughan, Editor-in-Chief
The global credit crunch is still just a few months old, though the data says the U.S. economy has been in recession for about a year. Disheartening news of job losses is daily fare. Until recent bad reports from Intel and AMD, the technology sector's recession experience had seemed less harsh than that of other sectors.
An upsurge in Business Process Modeling (BPM) and Business Process Management (also BPM) may be in store given the rush of mergers, especially in financial areas. Meanwhile, in corporate HQs there is a lot of 'what-if' analysis going on that may result in some re-engineered business models.
Some people think software will play a big role for some time to come as corporations try to find a new way forward. I had an interesting conversation in this regard the other day with Pierre Fricke, who has both a historical and a technical take on this.
Fricke, who serves as director of product management for Red Hat's JBoss division, said that while businesses are contracting, they still need to improve, and Business Process Management thus has a role.
“Unless you are literally going out of business, you end up with just more problems to solve,” he said.
The credit conflagration augurs new legislation and administrative procedures. Rules engines will need updating too, he said. “You can expect new regulations for mortgages, securities trading and so on,” he said. “The best way to codify these things is in rules engines.”
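What "codifying regulations in a rules engine" means in practice is that rules live as data, separate from application code, so a new regulation becomes a new entry rather than a rewrite. A minimal sketch; the thresholds and rule names below are invented for illustration, not real lending regulations or a JBoss product API.

```python
# Each rule pairs a human-readable name with a predicate over an
# application. Adding a newly mandated check means appending to this
# list, not changing the evaluation code. All rules are hypothetical.

RULES = [
    ("debt-to-income too high", lambda app: app["debt_to_income"] > 0.43),
    ("down payment below minimum", lambda app: app["down_payment"] < 0.05),
    ("income not documented", lambda app: not app["income_verified"]),
]

def evaluate(application: dict) -> list[str]:
    """Return the names of every rule this application violates."""
    return [name for name, violated in RULES if violated(application)]
```

Production rules engines add rule priorities, efficient matching and authoring tools on top of this idea, but the separation of rules from code is the essential property Fricke points to.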
Fricke talked recently on his blog about the role technology played in combating the Great Depression. An IBM alumnus, Fricke knows the folklore of Big Blue: how it sold electronic tabulators to the U.S. government in the 1930s during the New Deal, tabulators that were used to fine-tune the basic processes associated with New Deal programs.
“In the 1930s, emerging high technology in the form of tabulator machines and improved telephonic services continued to grow. IBM led the charge and had growth every year in the 1930s with expanded opportunities for tabulators to support the new regulatory environment and to help industries become more productive and reduce costs.”
He adds: “… it’s not the end of the world, just the end of an era.” That is a sentiment many would second.
What do you think? Is Business Process Modeling and Business Process Management still on the rise? Let us know.
Software AG has released an enhanced SOALink Cookbook, consisting of “recipes” that can be used for integration. In the spirit of the age of services, SOALink publishes APIs and info on extension points available in CentraSite ActiveSOA. Continued »
SOA Software entered the world of SOA governance planning earlier this month with its Portfolio Manager. The objective is to help companies plan and prioritize services creation. Continued »