SOA Talk

September 20, 2012  6:03 PM

Goodbye to three-tier computing?

Profile: Jack Vaughan

Software in the original mainframe days was all glommed together. Why not? Who was looking? Sometimes, reluctantly, some structure came about. Even in the early mid-range days, code was built up into classes, objects and components that were often loosely strung together.

With standard Java and standard Java servers, a fairly strict and familiar three-tier architecture came about. The question to ask now is: Will it last forever? Like so many things, the fundamental tiers of computing do come up for reconsideration once in a while.

These breezes have been blowing subtly since people cast about for lighter versions of Enterprise Java Beans. More recently, Node.js has arisen as a JavaScript alternative to Java on the server side. Increasingly, the client is the object of interest.

Node.js and other browser-influenced technologies seem to encourage software architects to cast skyward their monolithic three-tier components. As these flying components drift back to earth, they may not settle into the same alignment. The sudden near-hegemony of mobile clients is pushing things ahead quickly. A variety of new architectures are brewing.

In some ways, there seems to be a growing reaction to the rule of Java and the server. That view emerges from a look through a reporter’s notebook. Java is not going away, but as James Strachan, now senior software consultant with JBoss, described it in an interview: “The server side is becoming thinner and thinner.” When we spoke with Strachan earlier this year at the CamelOne event, the topic of Node.js was on the docket, but Strachan was expansive.

He said, looking forward:

The server side might just be Amazon SimpleDB or MongoDB or something; there might not be much of a three-tier architecture anymore.

Meanwhile, with flair, he continued:


…the client side is becoming bigger and more and more complex; it’s real-time now, everyone’s doing Ajax, real-time updates, and people are doing lots of single-page applications – which is when one Web page starts up and the entire app is in there. There are lots of models, containers, relationships and persistence and “yada-yada.”


Strachan notes this is highly driven by mobile applications:


In many ways the browsers won. Almost every mobile platform has Web capabilities inside it – Android, iPhone, iOS all have Web browsers and so forth. So the Web has kind of won … most browsers use JavaScript and HTML 5. Silverlight’s dead, Flash is kind of dying … the browser is really where it’s at …  with HTML and JavaScript.


Are the new approaches overblown? Is real change far off? Do you see a shift in emphasis to the client? If so, do you think services or SOA have had a hand in breaking down the status quo? -Jack Vaughan




September 13, 2012  8:43 PM

FatFractal enters the BaaS fray


What has sometimes been described as mobile middleware has taken a new tack. Now, the idea of Backend as a Service (BaaS) has begun to take off in the mobile application development space. Proponents of BaaS say it helps developers easily build mobile apps, or any other applications connected to a cloud backend. Some of their views suggest a wholly new computer architecture is in the works.

A case in point is FatFractal, a San Francisco-based BaaS provider that launched just this week. The company describes its product as offering native code support for any connected device, along with an events model and declarative security. FatFractal also says it integrates all of those components as lightweight services.

While it may be the newest BaaS player, FatFractal joins a slew of companies already in the field. Its competitors include StackMob, Kinvey, Applicasa and Parse.

Central to FatFractal’s approach is a NoServer module, which takes JSON requests and handles them via a script execution engine and a Create, Read, Update and Delete (CRUD) engine.
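To make the pattern concrete, here is a minimal, in-memory sketch of what a BaaS-style CRUD engine fed by JSON requests might look like. This is purely conceptual: the class and field names (`CrudEngine`, the `"op"` field) are hypothetical, and FatFractal's actual NoServer internals are not public.

```python
import json

# Conceptual sketch of a BaaS-style CRUD engine driven by JSON requests.
# All names here are illustrative, not FatFractal's actual API.
class CrudEngine:
    def __init__(self):
        self.store = {}      # collection name -> {id: object}
        self.next_id = 1

    def handle_request(self, raw_request: str) -> str:
        """Dispatch a JSON request like {"op": "create", "collection": "notes", ...}."""
        req = json.loads(raw_request)
        op, coll = req["op"], req["collection"]
        objects = self.store.setdefault(coll, {})
        if op == "create":
            obj_id = str(self.next_id)
            self.next_id += 1
            objects[obj_id] = req["data"]
            return json.dumps({"id": obj_id, "data": req["data"]})
        if op == "read":
            return json.dumps(objects.get(req["id"]))
        if op == "update":
            objects[req["id"]].update(req["data"])
            return json.dumps(objects[req["id"]])
        if op == "delete":
            objects.pop(req["id"], None)
            return json.dumps({"deleted": req["id"]})
        return json.dumps({"error": "unknown op"})
```

The appeal for mobile developers is that the client only ever speaks JSON over HTTP; the scripting and security layers a real BaaS adds would wrap around a dispatcher like this one.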

FatFractal CMO David Lasner thinks the new approach is needed. “It’s just hard to do a backend in the cloud and make it work,” he said. “The nature of applications is changing and you’re getting thousands of applications that use a lot of data.” – Stephanie Mann

September 7, 2012  3:16 PM

Choice Hotels rethinks IT architecture – employs middleware, SOA, BPM


Hoteliers were among the first businesses to turn to information technology. Now, their once-new foundational computers have become legacy systems that can stand in the way of delivering new products and promotions. These are not always just mainframe systems – once-shiny, high-performance mid-range (or larger) Unix systems and even pre-SOA-era application servers can impede business flexibility just as easily.

In recent years, as some systems began to show their age, Choice Hotels International, Inc., which franchises over 6,000 hotels, opted to update with middleware systems from Oracle Corp. Choice Hotels’ move is well along. SOA is part of the journey.

“Several years ago, we did an overall assessment and decided there were too many point-to-point connections,” said Rain Fletcher, vice president of application development and architecture, Choice Hotels. “Maintenance was difficult, and our business needed more velocity in delivering new functionality.”

That is the background to Choice’s selection of Oracle Fusion Middleware, SOA Suite and BPM Suite, he said. Oracle application server software also provides the base for those higher-level stack elements. Fletcher described Choice as “a WebLogic shop,” referring to the former BEA, now Oracle, app server suite.

The legacy systems were becoming “a liability in our ability to execute,” said Fletcher. A re-thinking of IT architecture was needed, he said.

The need for quick technical flexibility is intensely apparent on the Choice Hotels’ website today. It is dotted with special book-early rate offers, gift card offerings, Privilege Point member specials, downloadable iPad and smart phone apps and more. It enables bookings on Comfort Inn, Quality Inn, Econo Lodge, Sleep Inn and other familiar hotel marquees. All this functionality must be fleet and flexible, and supported by backend systems and middleware.

A big part of Fletcher’s drive is to simplify and standardize where possible. “We wanted to default to one standard,” he said. But multiple systems are a fact of life that requires developers to be supple. What is needed, in Fletcher’s word, is “fungibility.”

“We have thirteen different systems. And I don’t want there [to be a need for] ‘tribal knowledge’ of any one of them,” he said.

How does that pan out in operations? Some complexity is unavoidable – but simplicity must be the goal. “Every application server type we have has a different patching policy and security profile,” he said. “I may have four – I don’t want any more,” he said.

SOA and the Oracle SOA Suite have been first steps in gaining the flexibility Fletcher’s organization is looking to achieve, with BPM and modeling deployments to come, he said. Developers go through intensive training in SOA. He said the effort is built around the concept of a “SOA Services Factory.”

“We started with service domain module creation, working with partners to create several high-level service domains,” said Fletcher. Looking forward, he expects to be mapping SOA services with well-defined business processes. The domains provide a framework for the services. As the services grow up they map naturally to how the business thinks, and what the business does, he said.


September 6, 2012  6:14 PM

Can stream-based data processing make Hadoop run faster?


The Apache Hadoop distributed file processing system has benefits and is gaining traction. However, it can have drawbacks. Some organizations find that starting up with Hadoop requires rethinking software architecture and that acquiring new data skills is necessary.

For some, a problem with Hadoop’s batch-processing model is that it assumes there will be downtime between bursts of data acquisition in which to run the batch. This is the case for many businesses that operate locally and have a large number of transactions during the day, but very few (if any) at night. If that nightly window is large enough to process the previous day’s accumulation of data, everything goes smoothly. For some businesses, though, that window of downtime is small or nonexistent, and even with Hadoop’s high-powered processing, they take in more data each day than they can process in 24 hours.

For organizations with small windows of acceptable downtime, an approach that adds components of stream-based data processing may help, writes GigaSpaces CTO Nati Shalom in a recent blog post on making Hadoop faster. By constantly processing incoming data into useful packets and removing static data that does not need to be processed (or reprocessed), organizations can significantly accelerate their big data batch processes. – James Denman
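The stream-then-batch idea can be sketched in a few lines: instead of saving every raw transaction for the nightly batch run, a stream stage folds incoming events into compact per-key aggregates as they arrive, so the batch job consumes a small aggregate set rather than the full raw feed. The names below are illustrative, not drawn from GigaSpaces’ actual products.

```python
from collections import defaultdict

# Illustrative stream-stage aggregator: folds each incoming event into
# running per-store totals so the nightly batch only sees compact aggregates.
class StreamAggregator:
    def __init__(self):
        self.totals = defaultdict(float)   # e.g. sales total per store
        self.counts = defaultdict(int)

    def ingest(self, event):
        """Fold one incoming event into the running aggregates."""
        key = event["store"]
        self.totals[key] += event["amount"]
        self.counts[key] += 1

    def drain(self):
        """Hand the compact aggregates to the batch job and reset."""
        snapshot = {k: {"total": self.totals[k], "count": self.counts[k]}
                    for k in self.totals}
        self.totals.clear()
        self.counts.clear()
        return snapshot
```

The batch window then scales with the number of distinct keys rather than the number of raw events, which is the crux of the speed-up Shalom describes.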

September 5, 2012  11:08 PM

Thomas Erl discusses upcoming SOA, Cloud and Service Technology Symposium


Later this month, experts and authors from around the globe will gather in London for the fifth annual SOA, Cloud and Service Technology Symposium. This year’s conference agenda reflects aspects of the progress of SOA – both subtle and profound.

In reviewing this year’s submissions, some vivid trends emerged, said Thomas Erl, prominent SOA author, educator and conference chair. “Many sessions are about the convergence of different areas,” said Erl, noting the original event covered SOA, then it covered SOA and cloud computing, and now it has broadened further.

“As you go through all the submissions, you kind of witness an evolution in the industry. It is a reflection as to where the industry itself is going,” he said. As the naming of the event suggests, Erl sees an emerging field that can be called “service technology.”

“In the early days of SOA, people associated SOA with Web services. There was a communications barrier [with people] who thought it was just a way of implementing Web services,” he said. “Now we are seeing many more sessions that look at how [cloud, SOA and services] are applied together, and what the implications are.”

The Symposium, set for Sept. 24-25 at Imperial College, is slated to cover a broad variety of SOA and cloud-related topics. Among scheduled sessions are “Lightweight BPM and SOA,” “Moving Applications to the Cloud: Migration Options,” and “The Rise of the Enterprise Service Bus.” Also on the agenda is a series of on-site training and certification workshops. Billed as “bootcamp-style training sessions,” the workshops will provide preparation for a number of industry-recognized certifications, including SOA architect and cloud technology professional programs.

A key aim of the conference is to offer SOA, cloud computing and service technology practitioners a look at real-world implementations and field-tested industry practices. However, the event will also cover emerging trends and innovations in the space.

August 30, 2012  2:37 PM

Connecting Hadoop distributions to ODBC


As more enterprises set their sights on Hadoop’s capabilities, new products aim to ease Hadoop integration. Progress DataDirect’s Connect XE for ODBC driver for Hadoop Hive is an example. It boasts scalable connectivity for multiple distributions of Hadoop.

Enterprises looking to carry out additional analysis of data contained in the Hadoop-based store need a reliable connection to their existing predictive analytic and business intelligence tools. That can prove challenging, especially when dealing with multiple versions of Hadoop—distributions include Apache Hadoop, MapR Apache Hadoop, Cloudera’s distribution of Apache Hadoop and others.

“If I’m an [independent software vendor] and I want to onboard Hadoop as a supportive platform, I can either write a bunch of custom code for each specific flavor of Hadoop that I want to talk to—which has massive cost to it, massive complexity and issues related to support—or I can try to piece together some support matrix with the existing technology that’s out there for connectivity,” said Michael Benedict, vice president and business line manager, Progress DataDirect.

The company’s newest driver provides enterprises with another option. “Customers can plug in our driver under their normal code maps [to] applications that already support ODBC today, and they are able to take advantage of Hadoop for all of their customers,” Benedict explained.
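The portability win Benedict describes can be sketched as follows: an application written against a standard database API keeps the same query code no matter which Hadoop distribution sits behind the ODBC driver; only the connection string changes. The connection-string keywords below (DSN, Host, Port) are illustrative, since each driver defines its own.

```python
# Illustrative only: the keyword names and driver name below are assumptions,
# not DataDirect's documented connection syntax.
def connection_string(dsn=None, host=None, port=10000, database="default"):
    """Build an ODBC-style connection string for a Hive driver."""
    if dsn:
        return f"DSN={dsn};Database={database}"
    return f"Driver={{Hive ODBC Driver}};Host={host};Port={port};Database={database}"

def top_rows(cursor, table, limit=10):
    """Analysis code stays identical whether the backend is Apache,
    Cloudera, MapR or Amazon EMR -- only the connection differs."""
    cursor.execute(f"SELECT * FROM {table} LIMIT {limit}")
    return cursor.fetchall()
```

The point is the separation of concerns: the analytics or BI tool calls `top_rows`-style logic through the standard cursor interface, while the distribution-specific details live entirely in the connection string.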

The driver offers support for several common Hadoop distribution frameworks, including Apache, Cloudera, MapR and Amazon EMR. At the same time, it provides platform support for Windows, Red Hat, Solaris, SUSE, AIX and HP-UX. According to Benedict, the release of the driver reflects a growing need to analyze and process big data.

“[Enterprises are] consuming, analyzing and taking action on a much larger set of data than they have in the past,” he explained. “The reason why that’s changed is that, while you could store that data in the past, you just couldn’t really do it cost effectively. Big data/Hadoop allows you to do it in a slightly more cost-effective manner. Plus you’ve got a lot of technology that’s being built around this to enable you to better monetize and take action on data.”

By offering one unified driver, Progress DataDirect says it is filling demand for better connectivity to the major distributions of Hadoop. The driver is set to ship at the end of October; preview access is now available on a limited basis to current customers. -Stephanie Mann

August 27, 2012  5:53 PM

Lightweight scripts bear down on Java ecosystem


In a recent report on the state of Java, IDC analyst Al Hilwa notes that the Java ecosystem is healthy and on a growing trajectory, with more programming languages than ever now hosted on the Java Virtual Machine (JVM).  Hilwa, program director for application development software at IDC, gives credit to Oracle for a mostly successful custodianship of Java, since its acquisition of Sun Microsystems two years ago.

There are some clouds on the horizon, as could be expected for a language and architecture that has been atop the heap of enterprise middleware for so many years. Writes Hilwa: “Java is under pressure from competing developer ecosystems, including the aggressively managed Microsoft platform and ecosystem and the broader Web ecosystem with its diverse technologies and lightweight scripting languages and frameworks.”

While looming lightweight languages, frameworks and runtimes do portend a new state of Java, Java’s ability to evolve and absorb new technologies has proved remarkable to date. There is reason to believe there is still more to come.

August 20, 2012  8:22 PM

Skills for big data: Hadoop, Pig, Cassandra and more


Q: What is a data scientist? A: It’s a DBA from California. The joke underscores the fact that the world of big data skills right now is pretty much topsy-turvy. If you would like to look at a short list of skills associated with big data initiatives, you are out of luck. Try a long list instead.

The skills list – courtesy of the IT skills specialists at Foote Partners, LLC – includes Apache Hadoop, MapReduce, HBase, Pig, Hive, Cassandra, MongoDB, CouchDB, XML, Membase, Java, .NET, Ruby, C++ and more.

Further, the ideal candidate needs to be familiar with sophisticated algorithms, analytics, ultra-high-speed computing and statistics – even artificial intelligence. The needs of big data, which arise in part from modern computing’s ability to produce more and more bits and bytes, mean that developers have to hone their skills significantly. Suddenly, SQL-savvy developers have to obtain NoSQL skills.
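As a taste of that shift: the MapReduce model at the heart of Hadoop asks developers to express logic as map, shuffle and reduce phases rather than as SQL. A minimal pure-Python simulation of the model (no Hadoop cluster involved; the classic word-count example) might look like this:

```python
from collections import defaultdict

# Pure-Python simulation of the MapReduce word-count pattern. A real Hadoop
# job distributes these phases across a cluster; this only shows the model.
def map_phase(lines):
    """Map: emit (word, 1) pairs for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key, as Hadoop does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big skills", "data skills pile up"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

For a SQL-trained developer, the adjustment is thinking in emitted key-value pairs and grouped reductions instead of declarative queries, which is part of why the skills transition is steep.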

New technology like Hadoop is so raw that the developer is often forced to create his or her own software tools, which is a skill in itself.  Writes the Foote crew:

Hadoop is an extremely complex system to master and requires intensive developer skills. There is a lack of an effective ecosystem and standards around this open source offering and generally poor tools available for using Hadoop.

Foote warns that there is only more of the same to come, especially as unstructured data from sources such as sensors and social media piles up in the in-bin. Note to the big data scientists of tomorrow: get ready for the deluge! – Jack Vaughan


August 13, 2012  5:57 PM

APIs in the news as App.net trawls for dollars


Summer is known for vacation and relaxation, as many TV commercials attest. It also can be a time of unrest and revolution, as U.S. and French history attest. Maybe the season explains the timing of some upheaval in the fledgling field of open APIs.

Recent weeks have seen clamor in the ranks of the OAuth API standardization effort, as well as a high-visibility launch of an alternative to Twitter APIs. In the first case, an OAuth originator took exception to changes proposed for Version 2.0. In the other, a West Coast start-up took on Twitter, promising a non-ad-supported social media platform based on an open Web API. A sidebar to all this is the earlier craigslist mini-brouhaha surrounding its attempts to close off its data listing URLs, which were being repurposed by Web API-wielding third parties.

Over the weekend, the potential Twitter alternative known as App.net garnered considerable attention by enlisting developers at as much as $100 a pop to sign up for its paid mobile app service. The company had well exceeded its $500,000 seed goal as of August 13. On one level, the move can be seen as an effort to fill the void caused by Twitter’s recent back-tracking on some of its API openness. On another level, it can be seen as an affront to Twitter’s growing reliance on advertising for revenue.

It was in the wake of Twitter’s efforts to ensure that its APIs maintain a “consistent set of products and tools” that App.net co-creator Dalton Caldwell blogged about what Twitter could have been. (He had earlier criticized Facebook.) He saw the Twitter API originally as a real-time protocol, one that became tainted by Twitter’s advertising model. Subsequently, App.net launched its online promotion, which seemed somewhat akin to crowd-funding undertakings such as Kickstarter.

Dalton Caldwell, who began his career at SourceForge, has seen the upside and downside of technology. His present company, Mixed Media Labs, has focused increasingly on its developer store, now pitched as a social platform, as backing ran out for its Picplz picture-sharing site, now shut down. In effect, he has ridden the swells of the open API trend and found a way to get mobile app developers to pay to be part of the effort.

These doings – both Twitter’s and Mixed Media’s – don’t much clarify the trajectory of that recently born technology known as the open, Web or public API.

An era of an open, programmable Web may come about if non-commercial standards can be agreed upon. OAuth 2.0 will provide a testing ground for that. But Caldwell’s App.net does not forgo commerce altogether – his business plan merely pledges to forgo advertising.

It is early for open APIs. Companies that use Web APIs as part of their business will no doubt take a one-step-forward/one-step-backward approach. They will be eyeing the open API effort but continuing to use Twitter APIs where appropriate. What do you think?

August 2, 2012  7:07 PM

Banking on ESBs for agility, more value-added services


The once staid and steady field of banking is looking for greater agility in rolling out new software and services. As a result, Enterprise Service Buses (ESBs) are being deployed as part of efforts to streamline operations. By way of example there is Federal Bank of Bombay, India.

The large commercial bank will be using the Fiorano ESB in an effort to modernize its operations, according to Fiorano. Before deploying the ESB, Federal Bank was already using over 30 retail-banking-related applications from various vendors, including Infosys’ core banking platform Finacle, running on a mixture of hardware including IBM AIX servers. The bank’s deployment of the Fiorano ESB is part of a larger plan to expand the number of value-added services available to its customers.

In a statement, K.P. Sunny, head of IT at Federal Bank, explained that the Fiorano ESB was chosen due to its “architectural simplicity which allows the Bank to put in place a flexible architecture that will scale linearly and allow business decisions to be speedily implemented at the IT level.”

The bank hopes the choice will result in savings on maintenance of its current integration code, as well as increased reliability and security. The ESB is expected to power the rollout of a variety of value-added services through multiple delivery channels, including ATMs, kiosks, hand-held devices, mobile and the Web. -Stephanie Mann
