AS/400: Route web service request to application job

Tags:
AS/400
iSeries
Web services
How can I route web service requests to different application jobs? I have application batch jobs (listeners) running in different subsystems, with one subsystem per country on a single iSeries instance. Each country has its own database library, but all of them share a single program library, per the job description (JOBD) setup for each country. Currently these jobs run in a request-response cycle using MQ, with one MQ queue configured per country.

Now we are planning to switch to web services, because many systems need to access my application and many of them do not want to use MQ. For this, I am trying to identify a routing solution. If we configure the web service generically on iSeries, how can I implement the request-response cycle between the web service jobs (HTTP jobs?) and my application jobs? In particular, the method has to serve a large data buffer (up to 1 MB) passed through the request/response, and we have to handle heavy traffic. Can anyone share proven solutions for this scenario?

I have one idea, but I am a little concerned about its limitations and not sure whether better approaches are available. My idea is to create keyed data queues (iSeries queues, not MQ), one per country. A Java program running as part of the web service (QHTTPSVR or any web service job) on iSeries WAS would generate a unique key and put the request into the country's queue based on the country code received in the request. My application job driver would pick the entry from the queue, process it, and put the response back with the same key value; the Java program would then pick up the entry by that key and send the response to the client.

But I am worried about data queue size limitations (an entry can hold a max of 64 KB), since my request and response buffers can be up to 1 MB (though not always) and the request volume is high. I could split each request or response into multiple entries with an end tag and process them that way, but I am unsure of the efficiency, performance, and reliability. Does anyone know of critical failure situations we would have to handle in this approach?
We could also configure a different service for each country, with HTTP server jobs configured per country with the required library list, so the server job itself runs the application instead of talking to another job. We are not inclined toward this approach for two reasons: 1) we are worried about whether our COBOL is thread safe, and 2) we would like to keep the application jobs decoupled for easier maintenance/debugging, load balancing, etc. Please correct me or share your ideas. Thanks!
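The keyed round trip proposed in the question can be sketched in plain Java. The class below is only a simulation: in-memory queues and a map stand in for the per-country keyed data queues, and all names here (KeyedRouterSketch, serveOne, the "NL" country code) are illustrative assumptions, not an existing API. On the system itself the Java side would talk to real data queues (e.g. via the jt400 toolbox) rather than in-process collections.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

// Simulation of the proposed routing: one "request queue" per country,
// a unique correlation key per request, and a response matched on that key.
// In-memory stand-ins only; on IBM i these would be keyed data queues.
public class KeyedRouterSketch {
    // one request queue per country, mirroring the one-MQ-queue-per-country setup
    private final Map<String, BlockingQueue<String[]>> requestQueues = new ConcurrentHashMap<>();
    // responses matched back to waiting web-service threads by correlation key
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    public BlockingQueue<String[]> queueFor(String country) {
        return requestQueues.computeIfAbsent(country, c -> new LinkedBlockingQueue<>());
    }

    // web-service side: generate a unique key, enqueue, wait for the reply
    public String send(String country, String payload, long timeoutMs) throws Exception {
        String key = UUID.randomUUID().toString();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(key, reply);
        queueFor(country).put(new String[] { key, payload });
        try {
            return reply.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(key); // always clean up, even on timeout
        }
    }

    // application-job side: take one request, process it, answer on the same key
    public void serveOne(String country) throws InterruptedException {
        String[] req = queueFor(country).take();
        String response = "processed:" + req[1];   // stand-in for the COBOL job
        CompletableFuture<String> reply = pending.get(req[0]);
        if (reply != null) reply.complete(response);
    }

    public static void main(String[] args) throws Exception {
        KeyedRouterSketch router = new KeyedRouterSketch();
        ExecutorService worker = Executors.newSingleThreadExecutor();
        worker.submit(() -> { router.serveOne("NL"); return null; });
        System.out.println(router.send("NL", "order-123", 5000));
        worker.shutdown();
    }
}
```

Note the timeout and cleanup on the sending side: with a real keyed data queue, a web-service thread that waits forever on a key whose application job died would leak, so some bound on the wait is needed regardless of transport.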
Discuss This Question: 9 Replies

  • pdraebel
    In order to alleviate the response and request sizes you could store those in User Spaces and pass the address pointers of the user spaces over via the Data Queue. The programs that process the data from the user spaces would also need to clean up the user spaces.
  • sim400
Hi, I have never worked with user spaces, but I just read about them. I will have hundreds of requests coming in at a time, and it can grow to thousands. But a user space can hold 16 MB max, right? Should I create a user space for each request? What advantage does that add over keyed data queues when I have to load and pick multiple requests? Can I have the address pointer start reading from a random position in the user space rather than just the start of it?
  • pdraebel
Data queues have the size limitation; using user spaces, that limitation is overcome. They are just a means of transmitting request data and responses that exceed the data queue size limits. Keeping too much data in a data queue is also dangerous, as the queue can become damaged and the data lost. Processing of the user space can start at the beginning, but also at any point in the space: just add an offset to the pointer and the start is at another position. I was thinking of one user space per request and one per response.
  • Splat
    Data queues can be created with SIZE(*MAX2GB).
  • sim400

    Hi pdraebel, Thanks for your response.

I am worried about size again. If a user space can hold only 16 MB, should I be creating one user space per request (not per service)? Assume I have a product detail update service; I might get 500 requests at a time to update 500 products. Should I be creating 500 user spaces for the requests and another 500 for the responses? That becomes too tedious. How can I handle all of them in a single user space, considering each request/response can range from 32 KB up to 1 MB?


  • TheRealRaven
    Given element sizes ranging from 32k to 1MB, neither user space nor data queue is reasonable. It's possible that one or the other would be used to hold an index to the elements, but the elements themselves will need to be in something more appropriate to the expected sizes.

    You might, for example, use a database table and store the request elements in a CLOB column. That would mostly eliminate the need for a separate index.

    Or you might store each request in a separate streamfile. Generating a new streamfile is simple enough, but you might need to use a data queue for a convenient index.
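A rough sketch of that stream-file variant in plain Java: each payload lands in its own file, and only a small key-plus-path index entry travels on the queue, so entry-size limits never come into play. The in-memory BlockingQueue stands in for the data queue, and the directory and file-naming scheme are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Each request body goes into its own stream file; the queue carries only
// the tiny index entry (correlation key + file path).
public class StreamFileIndexSketch {
    private final Path dir;
    private final BlockingQueue<String[]> index = new LinkedBlockingQueue<>();

    public StreamFileIndexSketch(Path dir) { this.dir = dir; }

    // producer: write the (possibly 1 MB) payload to a new stream file,
    // then queue the small index entry
    public String submit(String country, byte[] payload) throws IOException, InterruptedException {
        String key = UUID.randomUUID().toString();
        Path file = dir.resolve(country + "-" + key + ".req");
        Files.write(file, payload);
        index.put(new String[] { key, file.toString() });
        return key;
    }

    // consumer: pick up the index entry, read the payload, delete the file
    public byte[] takeOne() throws IOException, InterruptedException {
        String[] entry = index.take();
        Path file = Paths.get(entry[1]);
        byte[] payload = Files.readAllBytes(file);
        Files.delete(file); // the consumer is responsible for cleanup
        return payload;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("reqs");
        StreamFileIndexSketch s = new StreamFileIndexSketch(tmp);
        s.submit("NL", "big payload".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(s.takeOne(), StandardCharsets.UTF_8));
    }
}
```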

    In theory, a data queue (or even a user index) could be used. However, size restrictions on entry lengths mean that larger entries would need to be broken into pieces when written and recombined when processed.
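The split-and-recombine this implies might look like the following plain-Java sketch. The 64 KB entry limit comes from the question; the 5-byte header layout (sequence number plus last-piece flag) is an assumption for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Split a large buffer into entries that fit a data queue's maximum entry
// length; each piece carries a sequence number and a last-piece flag so
// the reader can recombine them in order.
public class ChunkedEntries {
    static final int MAX_ENTRY = 64 * 1024;      // assumed data queue entry limit
    static final int HEADER = 5;                 // 4-byte sequence + 1-byte last flag
    static final int CHUNK = MAX_ENTRY - HEADER;

    static List<byte[]> split(byte[] payload) {
        List<byte[]> entries = new ArrayList<>();
        int pieces = Math.max(1, (payload.length + CHUNK - 1) / CHUNK);
        for (int i = 0; i < pieces; i++) {
            int from = i * CHUNK;
            int to = Math.min(payload.length, from + CHUNK);
            byte[] entry = new byte[HEADER + (to - from)];
            entry[0] = (byte) (i >>> 24); entry[1] = (byte) (i >>> 16);
            entry[2] = (byte) (i >>> 8);  entry[3] = (byte) i;
            entry[4] = (byte) (i == pieces - 1 ? 1 : 0);   // end-of-message tag
            System.arraycopy(payload, from, entry, HEADER, to - from);
            entries.add(entry);
        }
        return entries;
    }

    static byte[] recombine(List<byte[]> entries) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] entry : entries)
            out.write(entry, HEADER, entry.length - HEADER);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] oneMb = new byte[1024 * 1024];
        Arrays.fill(oneMb, (byte) 'x');
        List<byte[]> entries = split(oneMb);
        System.out.println(entries.size());                      // 17 pieces for 1 MB
        System.out.println(Arrays.equals(oneMb, recombine(entries)));
    }
}
```

Note that this sketch assumes the pieces come back in order; on a shared keyed queue, the key would have to include the sequence number (or the reader would have to sort on it) so interleaved messages from concurrent requests cannot mix.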
  • TheRealRaven
    Are you intending to have a single listener for each country/subsystem? I'd be likely to have the app increase the number of listeners as demand rose.

"Does anyone know if we have to handle any critical failure situations in this approach?"

    A data queue is fast and efficient. However, it achieves much of that by leaving out bits of overhead that other objects enforce. Lock checking/enforcement, for example, is about as minimal as it gets. Some of the 'minimal overhead' characteristics can open vulnerabilities that you might need to find ways around.

One issue that developers of data queue apps often run into is object damage. Some circumstances can leave a data queue damaged, and the only thing that can be done is to delete and recreate the queue. Any unprocessed entries are lost. No recovery possible... unless you journal the data queue.

    Of course, in that case you start to lose some of the speed/efficiency inherent in their use. The actual recovery from journal entries also requires work that increases the creation project.

    Also, once an entry is received from a data queue, it is automatically deleted from the queue. If any failure happens in the receiving program before some logging action or whatever can be done, the entry is lost.

    But there are often points in any client-server process where a request might be lost. It's a failure possibility that always must be considered.

    There are one or two other potential problems with data queues, but it's not clear if they'd be relevant in your case. Overall, data queues are excellent objects when used in the right apps. They just don't seem right as the primary data transport object here.
  • pdraebel
As TheRealRaven said: data queues are not the most suitable carrier for the data itself. They are better used for signalling that some action has to be taken. How the request data and results are transmitted from one program to another still needs to be decided.
  • sim400

    Thanks Raven for details.

Yes, we have a couple of listeners running under each subsystem when the subsystem starts, and the number increases based on the requests coming in. I had thought of using a file to carry the request and response, but I set that idea aside because of its I/O performance compared to a data queue. Wouldn't it be the same with a stream file? I have to consider performance from every angle for my application's users.

What is the advantage of using a user index when I don't need the request to remain in the queue or index once I retrieve it? Also, it can't hold entries longer than 2 KB, right?

I understand the risk of losing a request for some unexpected reason; I guess that risk exists in any method we use.

If I instead run my listeners with the required library list directly under the web service job (e.g., QHTTPSVR), I can just pass this data as a parameter in a program-to-program call. If I follow this approach, do you know of any memory or resource management process to keep the job's performance good even after it has processed many requests while staying active to handle more? How would you recommend this over my current setup, where application listeners are configured to end themselves after processing X requests, with a new listener coming up under the subsystem?

