Service-oriented Integration (SOI) – a logical extension of functional integration – is an approach in which applications integrate through service interactions in an SOA environment. Services are provided by a source application (the service provider) and consumed by target applications (the service consumers). Typically, SOI is implemented as systems that consume and provide XML-based Web services.
SOI addresses the issues of integrating heterogeneous and inflexible systems while overcoming the difficulties of functional integration in terms of the location and technology of the function. For example, functionality in a mainframe can be exposed as a web service implemented using the .NET framework.
In SOI, the service defines a contract – covering the technology, communication protocols, and message definitions – that all service consumers must conform to in order to communicate with the service. SOI enables loose coupling, raising flexibility and interoperability to new heights – to the extent that neither the service provider nor the consumers need be fixed, and a change in the service provider can be transparent to the service consumers.
In spite of its various benefits, XML-based Web services are not a panacea for all integration scenarios, as they involve considerable implementation effort and runtime overhead. For example, XML provides the maximum level of data independence, but the transformations it requires carry a performance cost.
Let us look at the W3C definition of Web services: “Web services architecture is an interoperability architecture that provides a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks”.
Web services and XML make the most sense for integration between unknown parties, applications running in heterogeneous environments, and applications that change frequently.
Even if an organization's applications run predominantly on a single platform – and hence do not face the cross-platform complexities that SOI chiefly addresses – SOI still makes the most sense for integration with the external world (web portals, other web sites, etc.), where flexibility and interoperability are most required.
If SOI is considered for integrating applications internal to the organization – as it provides the maximum flexibility and interoperability – use of the Enterprise Service Bus (ESB) concept and of commercial tools is strongly recommended.
For basic information on SOA, Web Services and ESB, refer to the earlier blog http://itknowledgeexchange.techtarget.com/enterprise-IT-tech-trends/essentials-of-soa-web-services-and-esb-in-the-integration-context-part-i/.
Traditional enterprise application integration (EAI) provided a hub-and-spoke architecture as a better solution than direct point-to-point connections. The ESB moved further to provide the concept of a bus – where the nodes have more intelligence – and offers better flexibility and scalability.
SOA offers great promise but also has great pitfalls. Making SOA work, avoiding the pitfalls and getting ROI, is hard. Usage of commercial tools simplifies SOA implementation enabling the focus to remain on business requirements – and not on implementation platforms and protocols.
Originally, an ESB product had a core asynchronous messaging backbone supplemented with intelligent transformation and routing to ensure messages are passed reliably. In other words, ESB was seen as a shared messaging layer for connecting applications and other services throughout an enterprise.
Today the ESB is seen as a collection of architectural patterns based on traditional enterprise application integration (EAI), message-oriented middleware, Web services, .NET and Java interoperability, host system integration, and interoperability with service registries and asset repositories. Commercial tools offer support for various types of integration – EAI, BPM, SOA, message-driven, event-driven and B2B – as well as adapters for different communication protocols, databases and standard products such as ERP and CRM.
Reasons for using ESB as the integration backbone include the following capabilities:
In an ESB, there is no direct connection between the consumer and provider. With an ESB, the infrastructure shields the consumer from the details of how to connect to the provider. While the service endpoints can have their own integration techniques, protocols, security models etc., ESB provides a simplified view to the service consumers. Thus an ESB allows the reach of an SOA to extend to non-SOA-enabled service providers. ESB also supports a variety of ways to get on and off the bus.
An ESB supports integration at various levels.
Message-oriented middleware (MOM), the key technology behind the ESB, offers flexibility in application development. MOM permits time-independent responses because it operates in an asynchronous mode.
According to the W3C, “Web services architecture is an interoperability architecture that provides a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks”.
The core technologies used for Web services are XML, SOAP, WSDL and UDDI.
Web services have the following key properties:
Web services alone cannot handle the complex requirements of SOA within an enterprise. That is where the ESB – the Enterprise Service Bus, seen as the universal integration backbone – comes in.
Service-oriented architecture (SOA) is an approach to defining flexible integration architectures based on the concept of a service. SOA brings the benefits of loose coupling and encapsulation to integration at an enterprise level. SOA aims to enable an organization to implement changing business processes quickly and to make extensive reuse of components.
Services are the building blocks of SOA. Services can be invoked independently by service consumers to perform simple functions, or can be a collection of functions that form a process. The key aspects of services are:
While SOA is quite useful within the enterprise, the real need for SOA arises in integration with the external world – B2B, B2C, etc. This is where Web services fit in: Web services are one of the key methods of enabling SOA.
With the increase in Web 2.0 applications, there are frequent situations where an application needs to insert or update data for only a subset of the columns of a given table. It becomes all the more difficult when the columns being inserted or updated are not known until application execution time. For example, a customer address change request could mean changing the entire address, just the apartment number, or any combination of columns (street name, city, zip code, apartment number and others).
Prior to DB2 version 10, the options for handling different combinations of inserts and updates included preparing a separate INSERT or UPDATE statement for every combination of columns being changed, or resubmitting the current (or default) values for the columns that had not actually changed.
DB2 10 for z/OS addresses this difficulty of supporting changes to only a subset of columns in a table by introducing the extended indicator variable support. Preparing separate INSERT or UPDATE statements for every combination of columns that are being inserted is no longer required.
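To see why this was painful, consider the pre-DB2 10 approach in which the statement text itself depends on which columns changed. The sketch below is a hypothetical Python illustration (the table name CUSTADDR, its columns and the ID key are invented for the example), not DB2 code:

```python
def build_update(table, changed_columns):
    """Build an UPDATE naming only the columns the request actually changed.

    Pre-DB2 10 style: every distinct combination of columns yields a
    distinct statement text, each needing its own prepare. With n
    updatable columns there are 2**n - 1 possible statements."""
    if not changed_columns:
        raise ValueError("no columns to update")
    set_clause = ", ".join(f"{col} = ?" for col in sorted(changed_columns))
    return f"UPDATE {table} SET {set_clause} WHERE ID = ?"

# An address change touching only the apartment number and zip code:
stmt = build_update("CUSTADDR", {"APT_NO", "ZIP"})
assert stmt == "UPDATE CUSTADDR SET APT_NO = ?, ZIP = ? WHERE ID = ?"
```

With extended indicator variables, a single statement naming every column replaces this whole family of statements.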
Using these extended indicator variables, we can indicate to DB2 that the value for the associated host variable is not supplied, and also specify how DB2 should handle the missing value. By specifying the extended indicator variable value of -5 (implying default), we indicate that the target column of the host variable is to be set to its defined DEFAULT value. By specifying the value of -7 (implying unassigned), we indicate that the host variable is UNASSIGNED and the target column is to be treated as if it were not included in the statement.
By enabling extended indicator variables, you neither need to resubmit the current value of a column nor know its default value. Extended indicator variables can be enabled at the package level, using the EXTENDEDINDICATOR option on the BIND PACKAGE (or REBIND PACKAGE) command, or at the statement level for dynamic SQL, using the WITH EXTENDED INDICATORS attribute on the PREPARE statement.
When the extended indicator variable is used as an input variable, its value is interpreted as follows:

- -5 (default): specifies that the DEFAULT value is to be used (extended indicator variables enabled); specifies a null value if extended indicator variables are not enabled
- -7 (unassigned): specifies that the UNASSIGNED value is to be used (extended indicator variables enabled); specifies a null value if extended indicator variables are not enabled
- -1, -2, -3, -4, or -6: specifies a null value
- 0 or a positive integer: specifies that the first host identifier provides the value of this host variable reference
When the extended indicator variable is used as an output variable, the values have the same meanings as for an ordinary indicator variable (a negative value indicates NULL; zero or a positive value indicates the specified value). As this new variable is intended to allow applications to skip sending data for any of the host variables specified in the statement, DB2 does not set output indicator variables to any of the special values.
The following restrictions apply when specifying the special extended indicator variable values: they can be specified only for host variables that appear in the SET assignment list of an UPDATE operation or in the values list of an INSERT operation.
For the INSERT statement, setting the extended indicator variable to -5 or -7 is the same, as the result is to insert a default value for any column that is missing an input value (that is, unassigned). With multi-row insert, the extended indicator variable values of default or unassigned can be used inside the indicator array.
For UPDATE or MERGE UPDATE, setting the extended indicator variable to -5 leads to the column being updated to the default value and setting the extended indicator variable to -7 leads to the update of the column not being applied.
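The UPDATE rules above can be sketched as a small Python model. This is only an illustration of the documented semantics (the column names and the DEFAULT value are invented), not DB2 code:

```python
def apply_update(row, new_values, indicators, defaults, extended=True):
    """Model DB2's treatment of extended indicator values on UPDATE:
    -7 skips the column, -5 sets it to its DEFAULT, any other negative
    value means NULL, and 0 or a positive value uses the supplied data.
    With extended indicators disabled, -5 and -7 degrade to ordinary
    indicator behaviour: any negative value means NULL."""
    updated = dict(row)
    for col, ind in indicators.items():
        if extended and ind == -7:
            continue                       # unassigned: column untouched
        if extended and ind == -5:
            updated[col] = defaults[col]   # set to the column DEFAULT
        elif ind < 0:
            updated[col] = None            # plain indicator: NULL
        else:
            updated[col] = new_values[col]
    return updated

row = {"NAME": "Josef", "COUNTRY": "Australia"}
result = apply_update(row,
                      {"NAME": "Michael", "COUNTRY": "Australia"},
                      {"NAME": -7, "COUNTRY": -5},
                      defaults={"COUNTRY": "Earth"})
assert result == {"NAME": "Josef", "COUNTRY": "Earth"}
```

Passing extended=False reproduces the EXTENDEDINDICATOR(NO) behaviour: every column with a negative indicator ends up NULL.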
It is important to understand the impact of these extended indicator variables on the UPDATE statement quite thoroughly, as they could result in behavior other than what the developer intended. IBM provides the following example to drive the point home.
```c
memset(&hv_indicators, 0, sizeof(hv_indicators));
hv_indicators.hvi_name    = -7;   /* skip update */
hv_indicators.hvi_country = -5;   /* use DEFAULT */
hv_indicators.hvi_city    = -7;   /* skip update */
hv_indicators.hvi_zip     = -7;   /* skip update */

UPDATE TRYINDVAR SET
       NAME    = :hv_name:hvi_name
      ,COUNTRY = :hv_country:hvi_country
      ,CITY    = :hv_city:hvi_city
      ,ZIP     = :hv_zip:hvi_zip;
```
While binding the DBRM into a package, the EXTENDEDINDICATOR(YES) option is required to enable the extended indicator variables.
After executing the above code, if we query the table the result is (‘Josef’, ‘Earth’, ‘Unknown’, ‘00000’). Because -7 is specified in their extended indicator variables, the NAME, CITY and ZIP columns are not updated at all (the host-variable value ‘Michael’ has no effect). Because -5 is specified in its extended indicator variable, the COUNTRY column is set to its DEFAULT value (‘Earth’) rather than the supplied value (‘Australia’).
If extended indicator variables are disabled for this package by rebinding with the EXTENDEDINDICATOR(NO) option, the results are different. Re-running the UPDATE and querying the table shows that all the columns were set to NULL. The reason is that disabling extended indicator variables causes these values to be treated as ordinary indicator variables, and any negative value (-5 or -7) implies NULL.
As the example shows, simply by enabling or disabling extended indicator variables and setting values of -5 or -7, the application developer can control which columns are actually updated – irrespective of where the function is called from and what values it receives.
Thus extended indicator variables enable applications to use a single SQL statement for handling different combinations of INSERTs and UPDATEs, without burdening the application with the need to fill in missing column data explicitly with current or default values. DB2 functions exploiting extended indicator variables can therefore be reused across the different channels accessing them, and can form the core services in an SOA environment.
Service-oriented architecture (SOA) – which facilitates code reuse, interoperability, service management and system integrity – is becoming the norm in today's IT development. In DB2 version 10, IBM has introduced quite a few application integration features aimed at further simplifying and improving the use of DB2's distributed functions through SOA.
Single Load module across environments using DSNULI
Prior to version 10, DB2 applications had to be link-edited with the appropriate language interface module – DSNRLI for RRSAF and stored procedures, DSNALI for the Call Attachment Facility (CAF), DSNELI for the TSO attachment facility, DSNCLI for the CICS attachment facility and DFSLI00 for the IMS attachment facility. Even when the same program logic was to run in different environments, multiple link-edits had to be done, which obviously resulted in usability issues.
DB2 version 10 provides the new Universal Language Interface module (DSNULI) to address this issue. You can now produce a single module (link-edited with DSNULI) that is executable in any of the following runtime environments: CAF, CICS, TSO, RRS (Resource Recovery Services) and MVS batch. DSNULI does not support dynamic loading or IMS applications.
DSNULI has nine entry points, associated with different attachment scenarios. It dynamically loads and branches to the appropriate language interface module based on the entry point used by the application (say, when an attachment facility entry point was used explicitly) or on the current environment. Naturally, this execution-time detection hinders performance, so there is a trade-off between ease of application deployment and application speed.
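The run-time dispatch can be pictured with a toy Python model (illustrative only – this is not how DSNULI is implemented): the first call resolves the environment to one of the language interface modules listed earlier, and the resolved choice is then reused.

```python
# Environment -> language interface module, per the list above
INTERFACES = {
    "TSO":  "DSNELI",
    "CAF":  "DSNALI",
    "RRS":  "DSNRLI",
    "CICS": "DSNCLI",
}

class UniversalInterface:
    """Toy model of run-time detection: resolve once, then reuse."""
    def __init__(self):
        self._module = None

    def call(self, environment):
        if self._module is None:      # detection cost paid on first call
            self._module = INTERFACES[environment]
        return self._module           # later calls reuse the choice

ui = UniversalInterface()
assert ui.call("TSO") == "DSNELI"
assert ui.call("TSO") == "DSNELI"     # cached on subsequent calls
```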
Enhanced performance and diagnostic monitoring support
DB2 10 for z/OS enhances performance monitoring support and monitoring support for problem determination – especially for distributed workloads. Using the Instrumentation Facility Interface (IFI), it captures and externalizes monitoring information for consumption by tooling.
It introduces a unique statement execution identifier (STMTID), defined at the DB2 for z/OS server, returned to the DRDA application requester and captured in IFCID records for both static and dynamic SQL. To support problem determination, the statement type (dynamic or static) and the new statement execution identifier (STMT_ID) are externalized in several existing messages (including those related to deadlock, timeout, and lock escalation). In these messages, the STMTID is associated with thread information that can be used to correlate the statement execution on the server with the client application on whose behalf the server is executing the statement.
DB2 version 10 also includes new trace records that provide access to performance monitoring statistics in real time and allow retrieval of monitoring data without requiring disk access. Some existing trace records that deal with statement-level information have been modified to capture the new statement ID and new statement-level performance metrics.
Java query performance improvements
With version 10, IBM Data Server Driver for JDBC and SQLJ type 2 connectivity to local DB2 for z/OS data servers uses more efficient methods for processing forward-only, read-only queries.
Elimination of DDF private protocol
From DB2 version 10 onwards, DDF private protocol is not supported. A REXX tool (DSNTP2DP) is provided to prepare a DB2 subsystem for conversion from the private protocol to DRDA.
DRDA support of Unicode encoding for system code pages
DB2 version 10 includes DRDA support of Unicode encoding, providing improved response time and lower processor usage for remote CLI and JDBC applications, as it removes the need to convert DRDA instance variables between EBCDIC and Unicode.
Support for 64-bit ODBC driver
In addition to the 31-bit ODBC driver, DB2 Version 10 provides a new 64-bit ODBC driver (XPLINK only) that allows 64-bit ODBC applications to take advantage of the expanded 16 million TB address space. The 64-bit ODBC driver runs in 64-bit addressing mode and reduces the virtual storage constraint, as it can access the user data above the 2 GB bar in the application address space.
One of the most important changes is the introduction of extended indicator variables, which allow inserts and updates of different combinations of column subsets – improving reusability. I shall elaborate on extended indicator variables in a later blog.
Clearly these enhancements are aimed at making DB2 – especially DB2 on z/OS – the database of choice in an SOA environment. More such changes are expected to follow in later versions, adapting DB2 to the needs of today.
Originally, COBOL didn't support multithreading, thereby heavily limiting the possibility of using COBOL subroutines as building blocks for web applications. With Enterprise COBOL, IBM started providing toleration-level support for POSIX threads and asynchronous signals.
With the THREAD compiler option, a COBOL program can run in multiple threads (i.e., it can be called in more than one thread in a single process) under batch, TSO, IMS or UNIX (but not CICS). A multithreaded program needs to be recursive (the RECURSIVE clause on PROGRAM-ID) and must also be compiled and linked with the RENT option.
It is important to note that COBOL doesn't manage the threads itself; rather, it expects the application server or the calling program (in Java, C/C++ or PL/I) to manage them. The threaded application must run within a single Language Environment enclave.
With the THREAD option, storage and control blocks are allocated per invocation rather than per program, making them thread-safe. Additional serialization logic is also generated automatically (which in turn can degrade performance).
A recursive call is one where a called program directly or indirectly invokes its caller. For example, program X calls program Y, program Y calls program Z, and program Z then calls program X (the original caller). The persistence of the data across such calls depends on whether it is in local storage or working storage.
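The distinction can be illustrated with a hypothetical Python sketch, where an ordinary local variable plays the role of LOCAL-STORAGE (fresh on every invocation) and a module-level object plays the role of WORKING-STORAGE (one copy shared by all invocations):

```python
call_count = {"n": 0}            # WORKING-STORAGE analogue: one shared copy

def recurse(n):
    """Directly recursive routine: each invocation gets fresh local data,
    while the shared dict accumulates across all active invocations."""
    local_count = 0              # LOCAL-STORAGE analogue: fresh per call
    call_count["n"] += 1         # visible to, and shared by, every call
    if n > 0:
        recurse(n - 1)
    local_count += 1
    return local_count

assert recurse(3) == 1           # the local copy never accumulates
assert call_count["n"] == 4      # the shared copy saw all four invocations
```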
If the THREAD compiler option is used, data defined in the LINKAGE SECTION is not accessible on subsequent invocations of the program. The address of a record in the LINKAGE SECTION must be re-established for each execution instance. Pre-initialization (if required) should be done using LE services (the CEEPIPI interface rather than COBOL-specific interfaces).
In a multithreaded environment, a program cannot CANCEL a program that is active on any thread (CANCEL results in a severity-3 LE condition). A multithreaded program can be ended by using GOBACK, EXIT PROGRAM, or STOP RUN.
Multithreaded COBOL programs can perform file operations on QSAM, VSAM and sequential files. Automatic serialization happens through the implicit lock on the file definition during input and output operations.
To avoid serialization problems when accessing a file from multiple threads, it is recommended that the data items associated with the file (such as file-status data items and key arguments) be defined in the LOCAL-STORAGE SECTION.
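The idea of keeping per-thread file state carries over to other languages; the sketch below uses Python's threading.local as a stand-in for a LOCAL-STORAGE data item (the status codes and records are invented for illustration):

```python
import threading

state = threading.local()        # each thread sees its own copy

def process(records, results):
    """Keep the 'file status' item per thread, so one thread's error
    cannot clobber another thread's status."""
    state.status = "00"          # per-thread status, like LOCAL-STORAGE
    for rec in records:
        if rec is None:
            state.status = "23"  # e.g. record not found, this thread only
    results.append(state.status)

results = []
t1 = threading.Thread(target=process, args=([1, 2, None], results))
t2 = threading.Thread(target=process, args=([1, 2, 3], results))
t1.start(); t2.start(); t1.join(); t2.join()
assert sorted(results) == ["00", "23"]   # each thread kept its own status
```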
Similarly, to avoid coding your own serialization logic (using POSIX APIs), IBM suggests the following usage patterns:
In a threaded application, the COBOL program can be interrupted by asynchronous signals, which the program should be able to tolerate. Alternatively using C/C++ functions, the interrupts can be disabled by setting the signal mask appropriately.
Other factors worth noting are:
Using the multithreading feature of COBOL, existing COBOL programs from legacy applications can be effectively utilized in web applications (directly!).