It is opening day for baseball in the U.S., and blogger Dana Blankenhorn finds a suitable metaphor to describe the latest reported dealings of Sun and IBM. Last month, the Wall Street Journal reported the two computer companies were near an agreement on a merger.
The rumor now is that a faction in Sun (one led by former CEO Scott McNealy) is pushing for more money from Big Blue. Further rumors have IBM reducing its offer. This all puts the proposed deal in something less than limbo.
Blogger Blankenhorn likens the situation to the Los Angeles Dodgers’ off-season negotiations with Manny Ramirez – a talented hitter of the baseball whose self-esteem seems to know no upper bound.
Over the winter, Ramirez walked away from a lucrative deal with the Dodgers, only to find there was no wider market for his skills.
Eventually, Ramirez re-signed with the Dodgers on a somewhat less rich deal.
Blankenhorn suggests a similar outcome may yet transpire in the negotiations of IBM and Sun.
Related computer industry news
IBM and Sun reportedly in merger talks – SearchSOA.com, Mar. 18, 2009
If McNealy thinks he is Manny Ramirez, he has another think coming – ZDNet Linux and Open Source blog
Ovum analyst Tony Baer reaffirms the link between SOA and IT Service Management in a blog post entitled “What’s a Service? Who’s Responsible?”
He talks briefly about the role ITIL can play, and in more detail about an emerging notion that a “Service Manager” role may have a place in the modern organization. He considers who today is responsible for ensuring that services meet business needs and that infrastructure is adequate to support those services.
Baer asks if what is needed is “a sort of uber role that ensures that the service
(1) responds to a bona fide business need,
(2) is consistent with enterprise architectural standards and does not needlessly duplicate what is already in place, and
(3) won’t break IT infrastructure or physical delivery.”
This is a thoughtful piece. Clearly, the sobriquet “Service Manager” may need some tuning, as the title seems equally apt for an individual charged with scheduling oil changes or refrigerator repairs.
“What’s a Service? Who’s Responsible?” – OnStrategies blog
The recent tremendous upheaval in the world economy did not bring out the best in the large army of SOA pundits. The upheaval was going to be “the end of SOA,” or SOA was going to be “the answer” to the upheaval. The comments did not run the spectrum – they ended up on one end of the teeter-totter or the other. Some SOA pundits have been on auto-pilot so long that the auto-pilot is now the pilot. Where is SOA, really? We felt asking the audiences at TechTarget Application Development Group member sites SearchSOA.com and TheServerSide.com would shed a brighter light on this somewhat murky topic. Continued »
Web services followed on the first flush of Java in the late 1990s. They might have been called something else. ‘Services’ made a certain sense because the term ably conveyed a difference from then-reigning object technology. People were ready for the services part. People understood that a waiter did not need to grow or make the coffee – or carry a coffee canister on their back, for that matter – in order to ‘serve’ coffee to you.
The ‘Web’ part of Web services was different, somewhat exploitative. The Web was a popular success, and you have to imagine someone thinking that if they named the latest software architecture after the Web, good things might happen. It wasn’t a big reach; Web services did tend to use the Web’s bread-and-butter protocol, HTTP.
That brings us to REST, which some people feel is truer to the spirit of the Web than classic Web services employing XML and SOAP, albeit over HTTP.
The value of REST architecture is that it takes better advantage of Web architecture, indicated Bill Burke, Red Hat Fellow and former Chief Architect of JBoss, certainly one of the most startlingly successful open-source Java implementations of all time. Burke is now a contributor to JBoss and project lead on the JBoss RESTEasy project.
“The crux of it is rediscovering HTTP – trying to understand how the web became so prevalent,” he said. “SOAP only uses a bit of the HTTP protocol. It really uses HTTP only as a transport mechanism, like a socket.” We talked to Burke on the eve of TheServerSide Java Symposium in Las Vegas. At the event, Burke will lead a session entitled “Putting Java to REST: The New Java + RESTful Web Services Specification.” [This event is presented by TheServerSide.com, SearchSOA’s sister publication.]
Burke maintains that the whole slate of standards known as “WS-*” [“WS-‘Star’”] has become too much of a moving target. “Getting vendors to cooperate is hard – ask Apache,” he chides.
Still, HTTP has forged on. Now every platform supports HTTP, so you don’t need matching vendor infrastructure at both ends of the pipe.
REST forgoes certain levels of interoperability, but that may have its advantages.
Burke says: “What is cool about REST is you are focused on straight http. So instead of worrying about interoperability between vendors…you worry about interoperability between applications. You let http do the heavy lifting.”
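Burke’s RESTEasy is Java, but the idea he describes – letting plain HTTP carry the semantics – can be sketched with nothing but a standard library. A minimal illustration (the resource name, data, and URL layout are invented for the example, not taken from RESTEasy):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORDERS = {"1": {"id": "1", "status": "shipped"}}  # illustrative data

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The URI names the resource; GET retrieves its representation.
        order_id = self.path.rsplit("/", 1)[-1]
        order = ORDERS.get(order_id)
        if order is None:
            self.send_error(404)  # HTTP status codes signal errors, not SOAP faults
            return
        body = json.dumps(order).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# Serve on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/orders/1"
with urlopen(url) as resp:
    status, data = resp.status, json.loads(resp.read())
print(status, data)
server.shutdown()
```

Nothing vendor-specific sits between client and server here: the verb, the URI, and the status code are the whole contract, which is Burke’s point about letting HTTP do the heavy lifting.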
Being the SOA guy, I have to ask questions like “Is REST anti-SOA?” Not at all, says Burke, although he is ready to say REST is anti-WS-Star … and anti-SOAP.
By Ted Neward
In the rush to embrace new technologies and demonstrate “cutting-edge awareness”, companies (and individuals) sometimes create mountains out of molehills. Such was the case with XML a few years ago, relational databases before that, object-orientation, Java, graphical user interfaces… In fact, it’s hard to name a recent technology trend that hasn’t spawned this kind of “rush to embrace” that leads to nonsensical claims, bizarre architectural suggestions, or blatant bald-faced attempts at capturing clueless consumer dollars.
One of those more recent trends has been the SOA bandwagon. Originally, though it’s hard to remember in the frenzy around SOA, the whole point of services, at least in their original form, was about designing distributed systems to be loosely-coupled and thus easier to interoperate with. Consider these original classic “four tenets of service-orientation” (http://msdn.microsoft.com/en-us/magazine/cc164026.aspx):
1) Boundaries are explicit
2) Services are autonomous
3) Services share schema and contract, not class
4) Service compatibility is determined based on policy
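Tenet 3 – share schema and contract, not class – can be illustrated with a small sketch. The schema, field names, and validator below are hypothetical, not from any specification; the point is that the consumer depends only on the published message shape, never on the provider’s classes:

```python
# The shared contract: field names and types, expressible in any language.
ORDER_SCHEMA = {"id": str, "quantity": int}

def validate(message: dict, schema: dict) -> bool:
    """Check a message against the contract, field by field."""
    return set(message) == set(schema) and all(
        isinstance(message[k], t) for k, t in schema.items()
    )

# The provider could be Java, .NET, or anything else; only the wire
# format crosses the explicit boundary (tenet 1).
incoming = {"id": "A-17", "quantity": 3}
print(validate(incoming, ORDER_SCHEMA))           # conforms to the contract
print(validate({"id": "A-17"}, ORDER_SCHEMA))     # rejected: missing field
```

Nowhere does the consumer import the provider’s implementation, which is exactly the loose coupling the tenets were written to protect.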
Nowhere in here do we find reference to SOA governance, SOA enablement, or any of the other elements that have been attached to the SOA name. In fact, it’s gotten to the point where CTOs and CEOs are talking about SOA as a general strategy for running their entire IT department.
I always thought CTOs and CEOs had something… I don’t know… *better* to do. For some reason, I always thought it was the job of the architect to think about these things, rather than let the CTO or CEO sit around and think about how their various software systems should be designed.
Don’t get me wrong: any organization that spends time thinking about how their various software systems are going to interact with one another is already a huge step above those that don’t. Too many CEOs and CTOs just assume that their back-end IT systems are capable of talking to anything at any time–in fact, one CTO once told me, to my face, “The only reason you’re saying it’s hard is so that you can charge me more money as a consultant. Well, I’m not buying it, and I’m not buying you.”
Said CTO went on to buy a “service-oriented” software package that he claimed would solve all their integration problems. The company went under about a year later. I won’t suggest it had anything to do with my inability to convince him that he really needed to invest more in thinking about his software infrastructure instead of just buying something “service-oriented”, but I can’t help but wonder….
So what are we to make of all the “service-oriented” bells and whistles currently being hung on every product, language, or release? In the end, the same thing we eventually made of objects, XML, Java, relational databases, and every other technology that’s been put in front of us.
It’s useful, but it has an “end”. There is a point in any IT implementation where the technology simply can’t help. Consider object-orientation, for example: back in the heyday of objects, various vendors, including one three-lettered vendor that was seriously looking to displace its rival in the operating system market, slapped “object-oriented” on everything, with the full expectation not only that customers would *think* the product was better, but that the product really *would* be better. Another company, headed by a would-be jazz musician, tried the same thing for its office suite, and almost jazzed itself out of existence.
Moral of the story? We learned that objects are a useful way of structuring the way programmers think. Nothing more.
Another look at the “four tenets” makes it pretty clear that services aren’t much more than that, either–it’s a way of structuring the way programmers think, and has nothing to do with “governance” or “enablement”. Service-orientation describes an approach by which we create distributed systems, and that’s all.
When a CTO starts thinking about objects, or services, or other low-level activities, she may as well be conducting code reviews, too.
CTO-driven refactoring, anybody?
By Eric Newcomer
Whether or not the Java Community Process has completely lost its way, it is increasingly influenced by external activities. The Spring Framework and Hibernate influences on EJB3 and JPA are good examples. Another influence being increasingly felt is the growing adoption of the OSGi Specification and its implementations, especially the open source frameworks Eclipse Equinox, Apache Felix, and Knopflerfish.
The OSGi Specification defines a dynamic module metadata system for Java and a service-oriented programming model with which the modules interact. The specification defines a registry for service lookup, and a collection of built-in services for common functions such as security, lifecycle management, and logging. The OSGi framework has been adopted by the Eclipse Foundation and by every major Java vendor as a platform on which to build and ship middleware products and open source projects, including application servers, enterprise service buses, and IDEs.
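In practice, that module metadata lives in a bundle’s MANIFEST.MF. A minimal illustrative sketch (the bundle and package names are invented for the example) showing a bundle declaring what it exports and what it imports:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api
Import-Package: org.osgi.framework;version="[1.3,2.0)"
Bundle-Activator: com.example.greeter.internal.Activator
```

The framework resolves these declarations when the bundle is installed, so version mismatches surface up front rather than as runtime classpath errors.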
As the core platform has become widely adopted in products and open source projects, the OSGi Alliance began to receive requirements for more explicit support of enterprise applications. The OSGi Specification began its life as JSR 8, back in 1999, intended for use in home automation gateways. Since that time OSGi technology has achieved some level of adoption in embedded applications for the automotive, mobile telephone, and home entertainment industries. By September 2006, the OSGi Alliance had received sufficient indications of interest in an enterprise edition to hold a workshop to explore the possibility of chartering an enterprise expert group (EEG).
Since its first meeting in January 2007, the EEG has spent the past two years creating detailed requirements and designs intended to better support enterprise Java applications. The work will result in a major update to the specification in mid-2009 (two prerelease drafts have been published) that extends core framework services and adapts existing enterprise Java technologies to the OSGi framework to meet enterprise application use cases. The major features include a mapping of the Spring Framework component model called the Blueprint Service, a mapping of existing distributed computing protocols to the OSGi service model, and mappings of key parts of Java EE such as Web apps, JDBC, JPA, JMX, JTA, JNDI, and JAAS.
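The Blueprint Service mapping uses Spring-style XML to declare components and publish or consume them as OSGi services. A hedged sketch of what such a definition looks like (the bean classes and interface names are invented; the namespace follows the draft specification):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- declare a component, publish it as a service, consume another -->
  <bean id="billing" class="com.example.billing.BillingImpl"/>
  <service ref="billing" interface="com.example.billing.BillingService"/>
  <reference id="audit" interface="com.example.audit.AuditService"/>
</blueprint>
```

The container wires the `reference` in from the service registry at runtime, giving Spring-style dependency injection on top of OSGi’s dynamic services.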
The industry has already embraced the benefits of OSGi-enabled modularity. The next step is to improve its support for enterprise Java applications by adapting technologies already used in those applications. The goal is to help OSGi developers more easily create enterprise applications in a standard way.
Eric Newcomer is a distributed computing specialist and independent consultant. Newcomer is a chair of the OSGi Alliance Enterprise Expert Group and former CTO of IONA Technologies. He writes a blog on OSGi matters.
By George Lawton
Cloud computing offers an extremely appealing future for enterprise IT: the ability to leverage ready services with lower initial investment and a pay-as-you-need pricing model. But it will be filled with surprises, and some of these will reflect surprises encountered in the early going of SOA. The difficulties of cloud computing will parallel many of the frustrations early SOA adopters experienced. The more loosely coupled and distributed a cloud-based system becomes, the more moving parts you don’t have control over. “As a consequence, the development cycle in the cloud ends up costing more money than expected,” said John Michelsen, co-founder and Chief Architect of iTKO.
Is there perhaps a hidden cost to cloud development? Companies attempting to leverage new software models such as SaaS, third-party services, data-oriented services and cloud computing need to heed the pitfalls encountered by early SOA attempts. The applications need to integrate with functionality built by outsiders, and those outside providers can change a service without notice. As well, hidden costs and constraints are associated with cloud testing.
One of the first challenges is that cloud functionality is created outside the organization, yet it still needs to be tested. Michelsen said that just because a set of functionality worked for another similar application does not mean it will work for yours. It is important to test the functionality of applications you did not build.
The adage after Sun’s Java promise of “write once, run everywhere” was that you had to “write once and test everywhere.” Michelsen said the challenge is that even though an application only gets written once, its performance can vary widely depending on its particular deployment circumstances.
The problem is that the developers don’t write the requirements doc for the services they are using. They need to go through some kind of analysis, preferably working with the cloud service vendor, or build their own compliance step to check their expectations against reality.
The second issue is the need to continually validate an application against a set of service functionality. After the application is deployed, the service provider can make changes that could adversely affect the application without notice. The stuff that works today could break tomorrow. Michelsen said, “It’s not like you get to participate in the user acceptance phase of the new version of Google services.”
In traditional application development, the app is tested every time a change is made. With outside services, no such system of validation is in place. Without a system of continuous validation the service becomes brittle.
For example, one real estate firm that worked with iTKO incorporated a mapping component into its web site. Initial testing indicated that the mapping service worked flawlessly. But several months after having launched the site, they discovered that the map component was displaying the wrong address. They only found this out after one of the agents got lost trying to find a property. If the service had simply stopped working, they might have noticed much more quickly.
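The mapping mishap suggests checks that validate answers, not just availability. A hedged sketch of such a “golden answer” check – the `geocode` stub and fixture data are invented stand-ins for what would be a real remote call, not anything from iTKO’s tooling:

```python
# Known-good results for a handful of representative inputs (made-up fixture).
KNOWN_GOOD = {"123 Main St": (40.7128, -74.0060)}

def geocode(address):
    # Stub of the external mapping service; in production this would be
    # a remote call to the third-party provider.
    return (40.7128, -74.0060)

def validate_content(service, fixtures, tolerance=1e-3):
    """Return the inputs whose answers have drifted from known-good results."""
    failures = []
    for address, (lat, lon) in fixtures.items():
        got_lat, got_lon = service(address)
        if abs(got_lat - lat) > tolerance or abs(got_lon - lon) > tolerance:
            failures.append(address)
    return failures

print(validate_content(geocode, KNOWN_GOOD))  # an empty list means no drift
```

Run on a schedule against the live service, a check like this would have caught the wrong-address regression in minutes rather than leaving it to a lost real-estate agent.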
After an application has been tested and deployed, paying per unit of service (storage, CPU time, and bandwidth) allows a new app to get deployed with minimal cost. But these usage fees can take a hidden toll when a company has to do any sort of testing, particularly load and performance testing of a new app.
The problem is that the IT department is used to paying up front for hardware, rather than being presented with a bill after services are rendered.
Another constraint is that it is significantly more difficult to manage test data across multiple services. Regression testing poses the challenge of keeping data consistent across different systems. You might need to create a set of dummy customer accounts in eBay, Amazon, and Salesforce.com, and then delete these at the end of the test. This is especially important for testing complex functionality, such as using business logic to present new offers.
Michelsen explained, “In the pre-cloud world, one of the challenges a testing team faces is that every time they need to test a piece of functionality, they need the data scenario to be present. For example, if you need to trigger a special offer to a card user with a low balance, that data has to exist in the system. It gets hard to create these scenarios when these systems are off-premise. I spend two days resetting the environment for a two-hour test cycle.”
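One way to “virtualize away” that constraint is a local stub that stands in for the off-premise system, so the data scenario exists instantly instead of after days of environment setup. An illustrative sketch (the account interface and the low-balance rule are invented for the example):

```python
class StubAccountService:
    """Local stand-in for a remote account system (hypothetical interface)."""
    def __init__(self):
        self.accounts = {}

    def create(self, account_id, balance):
        # In the real system this data would have to exist off-premise;
        # here the scenario is created in memory, instantly.
        self.accounts[account_id] = balance

    def balance(self, account_id):
        return self.accounts[account_id]

def should_offer(service, account_id, threshold=100):
    # Business logic under test: trigger a special offer on a low balance.
    return service.balance(account_id) < threshold

stub = StubAccountService()
stub.create("card-42", 25)            # the low-balance scenario, no reset cycle
print(should_offer(stub, "card-42"))  # offer triggered for the low balance
```

The business logic is exercised against the stub exactly as it would be against the remote system, so the two-day environment reset Michelsen describes collapses into a couple of lines of setup.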
SOA guidelines, if heeded, no doubt pave a better way to clouds. Methodologies and tools developed for SOA testing by companies such as iTKO may help by establishing stronger service-level requirements, continuously validating functionality and performance, and virtualizing away the constraints of fees and limited access to critical services in the development lifecycle.
Related SOA test information
SOA complicated by ESB proliferation – SearchSOA.com
Last week, WSO2 released an update to its SOA framework tailored around OSGi. Rob Hailstone, an analyst at The Butler Group, told us that he rated the WSO2 implementation favorably because it “does seem to have built out a more comprehensive set of features than most of the competitive open source offerings.” Continued »
By Yuval Shavit, Associate Editor, SearchWinDevelopment.com
With the release of Moonlight 1.0 last week, I had the opportunity to speak with Novell’s Miguel de Icaza about OS interoperability as it relates to RIAs. Moonlight is the Linux port of Microsoft’s RIA platform, Silverlight.
One of Silverlight’s advantages over Flash is that it includes a subset of .NET, including WCF for easy communication with the servers that do the real work behind RIA widgets. When Silverlight was a Windows-only platform, interoperability wasn’t an issue; companies that wrote Silverlight apps probably had Windows servers, so the client and server ran the same platform.
But Moonlight brings Linux into the fold on the client side, and that raises the question of interoperability. For now, the question is largely one-directional: how can developers design Silverlight apps that run on Linux but talk to Windows servers? But if Silverlight comes to compete broadly with Flash as a platform, companies may start having to worry about Windows clients talking to Linux servers, too.
De Icaza suggested that one solution may be to just drop WCF, which uses SOAP. Although there’s certainly an architectural appeal to creating more cohesive communication between clients and servers, de Icaza said, he’s seen a general movement back to the simpler REST protocol. [Ed note: Microsoft offers a REST kit for WCF – but SOAP is the more common use. WCF with REST loses an important WCF characteristic: strong types.]
But de Icaza was careful to add a disclaimer: his work doesn’t focus on enterprise applications. In those tightly controlled environments, using WCF and SOAP still makes sense, he said.
Of course, SOA is all about the enterprise, so that’s a pretty big disclaimer for any architect working behind a corporate firewall. But data from many services within an enterprise eventually finds its way to the user in one form or another, and maintaining two protocols — one for internal communication and the other for external publishing — defeats one of SOA’s main objectives: modularity and reusability of services. What happens if that tax calculation service needs to communicate with a RIA shopping cart next year, for instance?
The saying goes that if you live on the cutting edge, you risk getting cut. SOA architects should keep a careful eye on RIAs, lest they have to re-architect their brand new systems.
One of the old jokes about standards was a slight variation on the joke about New England weather [“If you don’t like it, wait a while, it will change.”]
If you don’t like a particular standard, wait a while, there will be another one coming along. Sometimes this is a game vendors play, and they are not always in the wrong. You cannot support every possible standard that comes down the pike.
But among the many standards worth tracking today is OSGi. It grew in part out of disaffection with heavyweight Java component containers. OSGi aims to solve many of the problems associated with complex Java JAR file assembly. It has the support of several notable Java vendors. But some people closely watch how much emphasis different vendors put on a standard like OSGi.
We recently wrote about Sun’s GlassFish, which it is trying to establish as a standard much as Apache server modules became standards. An interesting thread related to this appeared on our sister site TheServerSide.com.
Some of the thread was devoted to jokes about the idea of ‘lightweight’ servers – the thread participants are right; this light versus heavy metaphor can get overplayed. One threadster reduces it to its most absurd extension – that we are near the dawn of the age of ‘zero-gravity’ servers!
Our article on the new commercial GlassFish does not discuss OSGi. In our conversations Sun touched on OSGi – saying support was up and coming – but spent more time on JBI, a component architecture that does not necessarily exclude (or include) OSGi. In terms of OSGi support, Sun pointed in our discussion to its GlassFish 3 Prelude software as an example. Our ombudsman notes that this should have been included in the original article. Sun did proudly note its support for languages outside Java in the commercial GlassFish.
The most vivid representation of OSGi to date has been Eclipse. Some people are no doubt concerned that Sun’s heart is more with its competitive NetBeans architecture than Eclipse, and that this may influence its view on OSGi.