By Ted Neward
In the rush to embrace new technologies and demonstrate “cutting-edge awareness”, companies (and individuals) sometimes create mountains out of molehills. Such was the case with XML a few years ago, relational databases before that, object-orientation, Java, graphical user interfaces… in fact, it’s hard to name a recent technology trend that hasn’t spawned this kind of “rush to embrace”, one that leads to nonsensical claims, bizarre architectural suggestions, or bald-faced attempts at capturing clueless consumer dollars.
One of the more recent trends has been the SOA bandwagon. Though it’s hard to remember amid the frenzy around SOA, the whole point of services, at least in their original form, was designing distributed systems to be loosely coupled and thus easier to interoperate with. Consider the original classic “four tenets of service-orientation” (http://msdn.microsoft.com/en-us/magazine/cc164026.aspx):
1) Boundaries are explicit
2) Services are autonomous
3) Services share schema and contract, not class
4) Service compatibility is determined based on policy
Nowhere in here do we find reference to SOA governance, SOA enablement, or any of the other elements that have been attached to the SOA name. In fact, it’s gotten to the point where CTOs and CEOs are talking about SOA as a general strategy for running their entire IT department.
I always thought CTOs and CEOs had something… I don’t know… *better* to do. For some reason, I always thought it was the job of the architect to think about these things, rather than let the CTO or CEO sit around and think about how their various software systems should be designed.
Don’t get me wrong: any organization that spends time thinking about how their various software systems are going to interact with one another is already a huge step above those that don’t. Too many CEOs and CTOs just assume that their back-end IT systems are capable of talking to anything at any time–in fact, one CTO once told me, to my face, “The only reason you’re saying it’s hard is so that you can charge me more money as a consultant. Well, I’m not buying it, and I’m not buying you.”
Said CTO went on to buy a “service-oriented” software package that he claimed would solve all their integration problems. The company went under about a year later. I won’t suggest it was anything to do with my inability to convince him that he really needed to invest more in thinking about his software infrastructure instead of just buying something “service-oriented”, but I can’t help but wonder….
So what are we to make of all the “service-oriented” bells and whistles currently being hung on every product, language, or release? In the end, the same thing we eventually made of objects, Java, relational databases, XML, and every other technology that’s been put in front of us.
It’s useful, but it has an “end”. There is a point in an IT implementation where the technology simply can’t help. Consider object-orientation, for example: back in the heyday of objects, various vendors, including one three-lettered vendor seriously looking to displace its rival in the operating system market, slapped “object-oriented” on everything, with the full expectation not only that customers would *think* the product was better, but that the product really *would* be better. Another company, headed by a would-be jazz musician, tried the same thing for its office suite, and almost jazzed itself out of existence.
Moral of the story? We learned that objects are a useful way of structuring the way programmers think. Nothing more.
Another look at the “four tenets” makes it pretty clear that services aren’t much more than that, either–it’s a way of structuring the way programmers think, and has nothing to do with “governance” or “enablement”. Service-orientation describes an approach by which we create distributed systems, and that’s all.
When a CTO starts thinking about objects, or services, or other low-level activities, she may as well be conducting code reviews, too.
CTO-driven refactoring, anybody?
By Eric Newcomer
Whether the Java Community Process has completely lost its way or not, it is increasingly influenced by external activities. The Spring Framework and Hibernate influences on EJB3 and JPA are good examples. Another influence being increasingly felt is the growing adoption of the OSGi Specification and its implementations, especially the open source frameworks Eclipse Equinox, Apache Felix, and Knopflerfish.
The OSGi Specification defines a dynamic module metadata system for Java and a service-oriented programming model with which the modules interact. The specification defines a registry for service lookup, and a collection of built-in services for common functions such as security, lifecycle management, and logging. The OSGi framework has been adopted by the Eclipse Foundation and by every major Java vendor as a platform on which to build and ship middleware products and open source projects, including application servers, enterprise service buses, and IDEs.
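As a small illustration of that module metadata (the header names come from the OSGi core specification; the bundle and package names here are invented for the example), a bundle declares its identity, the packages it exports, and the packages it depends on in its JAR’s MANIFEST.MF:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.0.0
Export-Package: com.example.billing.api;version="1.0.0"
Import-Package: org.osgi.framework;version="[1.4,2.0)"
```

The framework resolves these declarations at runtime, which is what makes the module system dynamic: bundles can be installed, updated, and uninstalled without restarting the JVM.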
As the core platform became widely adopted in products and open source projects, the OSGi Alliance began to receive requirements for more explicit support of enterprise applications. The OSGi Specification began its life as JSR 8, back in 1999, intended for use in home automation gateways. Since that time OSGi technology has achieved some level of adoption in various embedded applications in the automotive, mobile telephone, and home entertainment markets. By September 2006, the OSGi Alliance had received sufficient indications of interest in an enterprise edition to hold a workshop to explore the possibility of chartering an enterprise expert group (EEG).
Since its first meeting in January 2007, the EEG has spent the past two years creating detailed requirements and designs intended to better support enterprise Java applications. The work will result in a major update to the specification in mid-2009 (two prerelease drafts have been published) that extends core framework services and adapts existing enterprise Java technologies to the OSGi framework to meet enterprise application use cases. The major features include a mapping of the Spring Framework component model called the Blueprint Service, a mapping of existing distributed computing protocols to the OSGi service model, and a mapping of key parts of Java EE such as Web apps, JDBC, JPA, JMX, JTA, JNDI, and JAAS.
The industry has already embraced the benefits of OSGi-enabled modularity. The next step is to improve its support for enterprise Java applications by adapting technologies already used in those applications. The goal is to help OSGi developers more easily create enterprise applications in a standard way.
Eric Newcomer is a distributed computing specialist and independent consultant. Newcomer is a chair of the OSGi Alliance Enterprise Expert Group and former CTO of IONA Technologies. He writes a blog on OSGi matters.
By George Lawton
Cloud computing offers an extremely appealing future for enterprise IT: the ability to leverage ready services with lower initial investment and a pay-as-you-need pricing model. But it will be filled with surprises, and some of these will reflect surprises encountered in the early going of SOA. The difficulties of cloud computing will parallel many of the frustrations early SOA adopters experienced. The more loosely coupled and distributed the cloud-based system becomes, the more moving parts you don’t have control over. “As a consequence, the development cycle in the cloud ends up costing more money than expected,” said John Michelsen, co-founder and chief architect of iTKO.
Is there perhaps a hidden cost of cloud development? Companies attempting to leverage new software models such as SaaS, third-party services, data-oriented services and cloud computing need to heed the pitfalls encountered by early SOA attempts. The applications need to integrate with functionality built by outsiders, and those outside providers can change a service without notice. As well, hidden costs and constraints are associated with cloud testing.
One of the first challenges is that cloud functionality is created outside the organization, yet it still needs to be tested. Michelsen said that just because a set of functionality worked for another similar application does not mean it will work for yours. It is important to test the functionality of applications you did not build.
The adage after Sun’s Java promise of “write once, run everywhere,” was that you had to “write once and test everywhere.” Michelsen said the challenge is that even though an application only gets written once, its performance can vary widely depending on its particular deployment circumstances.
The problem is that the developers don’t write the requirements doc for the services they are using. They need to go through some kind of analysis, preferably by working with the cloud service vendor, or build their own compliance step to check their expectations against reality.
The second issue is the need to continually validate an application against a set of service functionality. After the application is deployed, the service provider can make changes that could adversely affect the application without notice. The stuff that works today could break tomorrow. Michelsen said, “It’s not like you get to participate in the user acceptance phase of the new version of Google services.”
In traditional application development, the app is tested every time a change is made. With cloud services, no such validation happens when the provider changes something on its end. Without a system of continuous validation, the application becomes brittle.
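A minimal sketch of such a continuous validation check, in Java, might look like the following. This is illustrative only: the field names and the hard-coded reply are invented, and a real check would make a live call to the provider on a schedule rather than use a stand-in map.

```java
import java.util.List;
import java.util.Map;

// Sketch of a contract check an application could run periodically against
// a third-party service it depends on (e.g., a mapping service). The reply
// below is a hard-coded stand-in for a parsed live response.
public class ContractCheck {

    // The fields our application assumes the provider returns.
    static final List<String> REQUIRED_FIELDS =
            List.of("street", "city", "latitude", "longitude");

    static boolean validate(Map<String, Object> response) {
        for (String field : REQUIRED_FIELDS) {
            if (!response.containsKey(field)) {
                return false; // provider silently changed its contract
            }
        }
        // Spot-check types as well as presence.
        return response.get("latitude") instanceof Double
                && response.get("longitude") instanceof Double;
    }

    public static void main(String[] args) {
        // Stand-in for "call the service and parse the reply".
        Map<String, Object> reply = Map.of(
                "street", "10 Main St",
                "city", "Boston",
                "latitude", 42.36,
                "longitude", -71.06);
        System.out.println(validate(reply) ? "contract OK" : "contract BROKEN");
    }
}
```

Run on a schedule, a check like this turns a silent provider-side change into an alert rather than a mis-addressed map months later.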
For example, one real estate firm that worked with iTKO incorporated a mapping component into its web site. Initial testing indicated that the mapping service worked flawlessly. But several months after having launched the site, they discovered that the map component was displaying the wrong address. They only found this out after one of the agents got lost trying to find a property. If the service had simply stopped working, they might have noticed much more quickly.
After an application has been tested and deployed, paying per unit of service (storage, CPU time, and bandwidth) allows a new app to get deployed with minimal cost. But these per-unit costs can take a hidden toll when a company has to do any sort of testing, and particularly load and performance testing of a new app.
The problem is that the IT department is used to paying up front for hardware, rather than being presented with a bill after services are rendered.
Another constraint is that it is significantly more difficult to manage test data across multiple services. Regression testing requires consistent data across the different systems. You might need to create a set of dummy customer accounts in eBay, Amazon, and Salesforce.com, and then delete these at the end of the test. This is especially important for testing complex functionality, such as using business logic to present new offers.
Michelsen explained, “In the pre-cloud world, one of the challenges a testing team faces is that every time they need to test a piece of functionality, they need the data scenario to be present. For example, if you need to trigger a special offer to a card user with a low balance, that data has to exist in the system. It gets hard to create these scenarios when these systems are off-premise. I spend two days resetting the environment for a two-hour test cycle.”
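The data-scenario problem Michelsen describes can be sketched as a seed-and-clean-up pattern. In this sketch an in-memory map stands in for the off-premise account store; the account IDs and the 100.0 balance threshold are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a test must seed a low-balance account before it can exercise a
// special-offer rule, then remove the dummy data afterward. The map stands
// in for a remote system the test team cannot reset directly.
public class OfferScenarioTest {

    // Stand-in for the off-premise account store.
    static final Map<String, Double> accounts = new HashMap<>();

    static boolean qualifiesForOffer(String accountId) {
        Double balance = accounts.get(accountId);
        return balance != null && balance < 100.0;
    }

    public static void main(String[] args) {
        // Set up: seed exactly the scenario the test needs.
        accounts.put("test-card-001", 42.50);
        try {
            System.out.println(qualifiesForOffer("test-card-001")
                    ? "offer triggered" : "no offer");
        } finally {
            // Tear down: delete the dummy data so later runs start clean.
            accounts.remove("test-card-001");
        }
    }
}
```

When the account store is a real third-party system, both the seeding and the teardown become slow, remote operations, which is where the two-days-of-setup-for-two-hours-of-testing ratio comes from.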
SOA guidelines, if heeded, no doubt pave a better way to clouds. Methodologies and tools developed for SOA testing by companies such as iTKO may help establish stronger service-level requirements, continuously validate functionality and performance, and virtualize away the constraints of fees and limited access to critical services in the development lifecycle.
Last week, WSO2 released an update to its SOA framework tailored around OSGi. Rob Hailstone, an analyst at The Butler Group, told us that he rated the WSO2 implementation favorably, because it “does seem to have built out a more comprehensive set of features than most of the competitive open source offerings.”
By Yuval Shavit, Associate Editor, SearchWinDevelopment.com
With the release of Moonlight 1.0 last week, I had the opportunity to speak with Novell’s Miguel de Icaza about OS interoperability as it relates to RIAs. Moonlight is the Linux port of Microsoft’s RIA platform, Silverlight.
One of Silverlight’s advantages over Flash is that it includes a subset of .NET, including WCF for easy communication with the servers that do the real work behind RIA widgets. When Silverlight was a Windows-only platform, interoperability wasn’t an issue; companies that wrote Silverlight apps probably had Windows servers, so the client and server ran the same platform.
But Moonlight brings Linux into the fold on the client side, and that raises the question of interoperability. For now, the question is largely one-directional: how can developers design Silverlight apps that run on Linux but talk to Windows servers? But if Silverlight comes to compete broadly with Flash as a platform, companies may start having to worry about Windows clients talking to Linux servers, too.
De Icaza suggested that one solution may be simply to drop WCF, which uses SOAP. Although there’s certainly an architectural appeal to creating more cohesive communication between clients and servers, de Icaza said, he’s seen a general movement back to the simpler REST style. [Ed note: Microsoft offers a REST kit for WCF, but SOAP is the more common use. WCF with REST loses an important WCF characteristic: strong types.]
But de Icaza was careful to add a disclaimer: his work doesn’t focus on enterprise applications. In those tightly controlled environments, using WCF and SOAP still makes sense, he said.
Of course, SOA is all about the enterprise, so that’s a pretty big disclaimer for any architect working behind a corporate firewall. But data from many services within an enterprise eventually finds its way to the user in one form or another, and maintaining two protocols — one for internal communication and the other for external publishing — defeats one of SOA’s main objectives: modularity and reusability of services. What happens if that tax calculation service needs to communicate with a RIA shopping cart next year, for instance?
The saying goes that if you live on the cutting edge, you risk getting cut. SOA architects should keep a careful eye on RIAs, lest they have to re-architect their brand new systems.
One of the old jokes about standards was a slight variation on the joke about New England weather [“If you don’t like it, wait a while, it will change.”]
If you don’t like a particular standard, wait a while, there will be another one coming along. Sometimes this is a game vendors play, and they are not always in the wrong. You cannot support every possible standard that comes down the pike.
But, among the many standards worth tracking today is OSGi. It grew in part out of some disaffection with heavy Java component containers. OSGi aims to provide a solution to a lot of the problems associated with complex Java JAR file assembly. It has the support of several notable Java vendors. But some people closely watch how much emphasis different vendors put on a standard like OSGi.
We recently wrote about Sun’s GlassFish, which it is trying to establish as a standard much as Apache server modules became standards. An interesting thread related to this appeared on our sister site TheServerSide.com.
Some of the thread was devoted to jokes about the idea of ‘lightweight’ servers – the thread participants are right; this light versus heavy metaphor can get overplayed. One threadster reduces it to its most absurd extension – that we are near the dawn of the age of ‘zero-gravity’ servers!
Our article on the new commercial GlassFish does not discuss OSGi. In our conversations Sun touched on OSGi – saying support was up and coming – but spent more time on JBI, a component architecture that does not necessarily exclude (or include) OSGi. In terms of OSGi support, Sun pointed in our discussion to its GlassFish 3 Prelude software as an example. Our ombudsman notes that this should have been included in the original article. Sun did proudly note its support for languages outside Java in the commercial GlassFish.
The most vivid representation of OSGi to date has been Eclipse. Some people are no doubt concerned that Sun’s heart is more with its competitive NetBeans architecture than Eclipse, and that this may influence its view on OSGi.
By Jack Vaughan
Sun Microsystems is attempting to move deeper into the world of open source software with its Sun GlassFish Portfolio. Now included are a lightweight LAMP-style stack (which itself includes Tomcat, Memcached, Squid and Lighttpd, with support for PHP, Ruby and Java), a Sun GlassFish Liferay-portal-based Web Space Server, and a JBI-based ESB. A proprietary Sun Enterprise Manager for monitoring is also available.
GlassFish has arisen as a potential lightweight alternative to established J2EE application server architectures – but Sun likes to note that GlassFish can span from lightweight to heavyweight solutions.
There are indications development may diverge into a lightweight, or LAMP, camp and a heavier J2EE style. “We want to break down that chasm,” said Paul Hinz, chief architect, Sun. He said GlassFish Webstack modules support LAMP development scenarios, which do not require application servers. Yet a GlassFish application server is available too, with Java EE 5 compliance (and a Java EE 6 preview in the offing). That server supports JRuby and Groovy development, marking Sun’s embrace of languages beyond the realm of Java.
Sun is hoping to leverage the success of the MySQL database, purchased by the company last year, in packaging that pairs GlassFish with MySQL.
Kevin Schmidt, manager for infrastructure strategy, said the GlassFish ESB provides routing and messaging, using JBI. “The core backbone is not JMS, it is JBI,” he said. “It is a normalized message router done in-memory.”
The Sun GlassFish Portfolio is said to be available immediately via a subscription-based pricing model starting at $999 per server.
The MySQL merger appears a bit rocky, just as Sun looks to extend MySQL popularity with GlassFish-powered offerings. Both Marten Mickos, former MySQL CEO, and Monty Widenius, a MySQL co-founder, have recently left Sun.
XML tool maker Altova has a new ‘MissionKit’ for 2009. Besides native DB support for SQL Server 2008, Oracle 11g and PostgreSQL 8, it adds support for XBRL, an XML-based markup language for electronic transmission of business information.
XBRL may come into a brighter spotlight soon, as the SEC has moved ahead in mandating its use in certain documentation.
The present SEC XBRL mandate covers large U.S. companies with a “worldwide public float over $5 billion.” That translates roughly into the 500 biggest firms. But XBRL may find much broader use.
XBRL grows out of some of the earliest XML efforts to create a ‘semantic web’ in which machines can handle data intelligently. If the data in common GAAP documents can be handled more dynamically, this could save a lot of time and money for organizations bound to process such data.
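For a flavor of what machine-readable financial data looks like, here is an illustrative XBRL instance fragment (the values are invented and the taxonomy reference is abbreviated): each reported figure becomes a tagged fact bound to a context (who and when) and a unit (what currency).

```xml
<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
            xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
            xmlns:us-gaap="http://fasb.org/us-gaap/2009-01-31">
  <xbrli:context id="FY2008">
    <xbrli:entity>
      <xbrli:identifier scheme="http://www.sec.gov/CIK">0000000000</xbrli:identifier>
    </xbrli:entity>
    <xbrli:period>
      <xbrli:startDate>2008-01-01</xbrli:startDate>
      <xbrli:endDate>2008-12-31</xbrli:endDate>
    </xbrli:period>
  </xbrli:context>
  <xbrli:unit id="usd">
    <xbrli:measure>iso4217:USD</xbrli:measure>
  </xbrli:unit>
  <!-- A single tagged fact: fiscal-2008 revenues, in US dollars -->
  <us-gaap:Revenues contextRef="FY2008" unitRef="usd" decimals="0">1000000</us-gaap:Revenues>
</xbrli:xbrl>
```

Because the figure is tagged rather than buried in a formatted document, software can extract, compare, and aggregate it without a human re-keying the number.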
For some resources on XBRL, check out “XML-based Extensible Business Reporting Language (XBRL) for accounting reports,” which covers making financial disclosures using XBRL as part of Generally Accepted Accounting Principles (GAAP) filings.
There are already quite a few tools available in the XBRL field, but XML tool pioneer Altova’s offering should be welcomed by many. Across its various tools, Altova software supports XBRL development with intelligent wizards, graphical drag-and-drop, and code generation capabilities, as well as links to other tools in the company’s suite.
By Jack Vaughan
The Open Group, meeting in San Diego this week, has updated the TOGAF architecture framework. A lot of the work on TOGAF 9 was around not just what architecture should have, but how to make it work, says Judith Jones, CEO of Architecting the Enterprise. Another key enhancement in TOGAF 9 is the introduction of a seven-part structure and reorganization of the framework into modules.
By Jack Vaughan, Editor-in-Chief
The global credit crunch is still just a few months old – though the data says the U.S. economy has been in recession for about a year. Disheartening news of job losses is daily fare. Until recent bad reports from Intel and AMD, the technology sector’s recession experience seemed to be less harsh than that of others.
An upsurge in Business Process Modeling (BPM) and Business Process Management (BPM) may be in store given the rush of mergers, especially in financial areas. Meanwhile, in corporate HQs there is a lot of ‘what-if’ analysis going on that may result in some re-engineered business models.
Some people think software will play a big role for some time to come, as corporations try to make a new way forward. Had an interesting conversation in this regard the other day with Pierre Fricke, who has both a historical and a technical take on this.
Fricke, who serves as director of product management for Red Hat’s JBoss division, said that, while businesses can be seen contracting, they still need improvement, and Business Process Management thus has a role.
“Unless you are literally going out of business, you end up with just more problems to solve,” he said.
The credit conflagration augurs new legislation and administrative procedures. Rules engines will need updating too, he said. “You can expect new regulations for mortgages, securities trading and so on,” he said. “The best way to codify these things is in rules engines.”
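As a rough sketch of what such codified regulations look like in a rules engine, Red Hat’s JBoss Rules (Drools) uses a when/then rule language; the Mortgage fact model, field names, and the 0.95 threshold here are invented for illustration.

```
rule "Flag high loan-to-value mortgage"
when
    // Hypothetical fact: a mortgage whose loan-to-value ratio exceeds a cap
    $m : Mortgage( loanToValue > 0.95 )
then
    // Route the application for manual review under the new regulations
    $m.setRiskCategory( "HIGH" );
    update( $m );
end
```

Keeping such logic in rules rather than application code is what makes it practical to update when, as Fricke predicts, the regulations themselves keep changing.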
Fricke talked recently on his blog about the role technology played in combating the Great Depression. An IBM alumnus, Fricke knows the folklore of Big Blue: how it sold electronic tabulators to the U.S. government in the 1930s during the New Deal – tabulators that were used to fine-tune basic processes associated with New Deal programs.
“In the 1930s, emerging high technology in the form of tabulator machines and improved telephonic services continued to grow. IBM led the charge and had growth every year in the 1930s with expanded opportunities for tabulators to support the new regulatory environment and to help industries become more productive and reduce costs.”
He adds: “… it’s not the end of the world, just the end of an era.” That is a sentiment many would second.
What do you think? Is Business Process Modeling and Business Process Management still on the rise? Let us know.