Batch Jobs in AS/400

Hi Group,
We have an application developed in RPG/400, COBOL/400, CL/400 & ILE. There are a few batch jobs that are supposed to run at the end of the day. The execution time for these batch jobs is very high.
What measures can be taken to reduce the down time? 
Suppose we want to move a few jobs to interactive mode; how do we identify those jobs?

Answer Wiki


Your statements are confusing.
You want to reduce down time and yet you are asking if you can run some interactively.
I am assuming when you say reduce down time, that would be the period of time all users must be off the system.
Along with the suggestions already made, you can look into running some jobs simultaneously from different job queues.
Also look at moving memory to a different pool at the start of the job and resetting it at the end of the job.
The main thing, I believe, is to do a good analysis of exactly what is running and how long each step is taking. Then determine why jobs run so long and see what you can do to improve performance with coding changes. This may be more logical files, or a host of other possibilities.
You also want to determine which of these jobs must have the users off the system. You may find that only a small portion of processing is required on a dedicated system.
Bottom line is you have to understand what is happening and why.
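
The parallel-queue and memory-pool suggestions above could be sketched in CL/400 roughly as follows; the job queue, library, program names, and pool sizes are all illustrative, not taken from the question:

```
/* Run independent end-of-day steps in parallel by submitting them  */
/* to two different job queues (each feeding an active subsystem).  */
SBMJOB CMD(CALL PGM(APPLIB/EODSTEP1)) JOB(EODSTEP1) JOBQ(QGPL/BATCHQ1)
SBMJOB CMD(CALL PGM(APPLIB/EODSTEP2)) JOB(EODSTEP2) JOBQ(QGPL/BATCHQ2)

/* Give the batch pool more memory for the night run, then restore */
/* the original size when the last step finishes (sizes in KB).    */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(800000)
/* ... end-of-day processing ... */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(400000)
```

This only helps if the steps are genuinely independent (no file or data-area dependencies between them) and the subsystem's job queue entries allow more than one active job.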

Discuss This Question: 9  Replies

  • slack400
    You'll need to identify whether these jobs are run manually by users or submitted via a scheduler service like IBM Job Scheduler or Robot/SCHEDULE. To move a QBATCH job to QINTER you can issue a TFRJOB command, but you can also just elevate the job priority setting to 10 instead of 50 (for example); that will give the job higher priority in QBATCH. The more old-school solution could be to end the QINTER subsystem, locking out users to free up the server during your closing processes and keep users off the system until processing is over. I'd also encourage you to evaluate the data retention on your server. Most of the processing headaches I hear about come from systems that never archive or delete old data. If a transaction file can be purged and reorganized, reducing your processing time, you may get better results than trying to lock resources away from the users during a close.
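A hedged sketch of the commands mentioned above; the job number, user, and file names are hypothetical:

```
/* Raise a long-running batch job's run priority from another session */
CHGJOB JOB(123456/EODUSER/EODCLOSE) RUNPTY(10)

/* Keep interactive users off during the close by ending QINTER */
ENDSBS SBS(QINTER) OPTION(*IMMED)

/* After archiving/purging old transactions, reorganize the file to */
/* remove deleted records and shrink the data to be processed.      */
RGZPFM FILE(APPLIB/TRANSPF)
```

Note that RGZPFM needs an exclusive lock on the file, so it belongs in a window when the application is idle anyway.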
  • MDratwa
    You can run the CHGJOB command within the CL and change the priority (between 20-50), TIMESLICE, and PURGE (use *NO to save the job info going back and forth to the hard drive). I do not recommend a priority of 10 because that is equal to QCTL. If you want to stop users adding to files, you can add a data area to control access to files within the apps. Change the data area from a space to "X" at the start of the job and back to a space when finished, and have the app check this data area when users enter the app to stop them. You can submit the job with a job priority higher than other jobs in the same queue to bump it in front of others already there. Be careful about adding too much priority or timeslice, because this will affect the performance of your system.
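The data-area lockout described above might look like this in CL/400; the library, data-area, and message text are illustrative assumptions, not from the original post:

```
/* One-time setup: a 1-character control data area */
CRTDTAARA DTAARA(APPLIB/EODLOCK) TYPE(*CHAR) LEN(1) VALUE(' ')

/* Start of the end-of-day CL: block new application entry */
CHGDTAARA DTAARA(APPLIB/EODLOCK) VALUE('X')
/* ... end-of-day processing ... */
CHGDTAARA DTAARA(APPLIB/EODLOCK) VALUE(' ')

/* In the application's entry CL, check the flag before continuing */
DCL        VAR(&LOCK) TYPE(*CHAR) LEN(1)
RTVDTAARA  DTAARA(APPLIB/EODLOCK) RTNVAR(&LOCK)
IF         COND(&LOCK *EQ 'X') THEN(DO)
   SNDPGMMSG MSG('End-of-day processing is running - please try later')
   RETURN
ENDDO
```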
  • pdraebel
    Changing priorities and runtime characteristics can help some, but I think reviewing the way your end-of-day processing is set up (programming) should give far better results. Some things like triggers on files can cause file processing to take far longer than processing without triggers, but you will have to review your processes.
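One quick check along those lines (file name hypothetical): DSPFD with TYPE(*TRG) lists any trigger programs attached to a file; each trigger fires on every insert, update, or delete, which can multiply batch run time.

```
/* List trigger programs attached to the transaction file */
DSPFD FILE(APPLIB/TRANSPF) TYPE(*TRG)
```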
  • DanTheDane
    I suggest you upgrade to a Power system (I assume you are running an older system) as soon as possible, and spend your programming efforts on new applications that your company can benefit from. If you are not on a Power system yet, you will have to move to one some day. Look at the situation as a business challenge, and you will probably come to the conclusion that moving to Power now is the best solution. DanF
  • TomLiotta
    Changing priorities and runtime characteristics can help some... As CharlieBrowne mentioned, in light of the "reduce the down time" part of the question, there wouldn't seem to be any advantage to moving into an interactive subsystem nor even to changing priorities or other job attributes. "Down time" somewhat implies lack of competition for resources from interactive jobs. If other jobs aren't taking resources away, then there aren't any resources to add that will help. Timeslice for batch is already much higher by default than for interactive for example. If "down time" is the actual problem, then improving batch programming would be a better way to go. But without knowing where the time is spent in the batch jobs, it's guesswork on what might be changed. Tom
  • mvrkrishna
    Hi Group, Thanks for your response. Sorry if I have created any confusion. In the application, most of the programs are written in COBOL/400. All the batch jobs are submitted manually (no job scheduler is used). I request the group to give me a few suggestions so that the processing time can be reduced. At the application level, what could be done to reduce the processing time? Regards, M
  • Lovemyi
    If you have Performance Tools on that machine you can run the job with the performance monitor started and then analyse the results to see why the batch job is taking so long, but you would probably already have done that if you knew how to manage the performance tools. You could look at the logical files that your program is using and see if they have the correct keys for your program, or create new logical views with better key management, or change the program to run more efficiently by changing to SQL statements to reduce a lot of unnecessary I/O. Streamlining the I/O in a program is the only truly effective way of reducing run time, unless you can give the program more power by switching to new hardware or enabling more CPUs and memory on that machine. Lovemyi
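A sketch of the logical-file idea above; the DDS source, field, and file names are illustrative. A logical file keyed in the program's processing order lets the batch program read by key instead of building a temporary sort each night:

```
/* DDS for a logical file over TRANSPF, keyed by date and account */
     A          R TRANSR                    PFILE(APPLIB/TRANSPF)
     A          K TRNDATE
     A          K ACCTNO

/* Create it once; the access path is then maintained by the system */
CRTLF FILE(APPLIB/TRANSL1) SRCFILE(APPLIB/QDDSSRC) SRCMBR(TRANSL1)
```

Keep in mind each extra access path also adds maintenance overhead on inserts and updates, so only build the keys the batch programs actually use.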
  • Aringarosa
    First of all you should improve the run schedule: sometimes we have jobs which aren't running at the same time as others when they could be. In second place, after confirming that all the dependencies are correct, you should talk to the development team and ask them to improve the jobs' run attributes.
  • dino007
    You mentioned that the jobs are submitted manually. Could you use WRKJOBSCDE and F6 to add a new scheduled job? Then you could schedule it to run during 'off hours'. The job would run every day (or whatever frequency) at a specific time. Parameters could be preloaded. We have a menu option that submits a weekly batch job for early in the morning, so that reports are printed after hours but are ready to be distributed in the morning.
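WRKJOBSCDE with F6 builds a schedule entry like the one below (ADDJOBSCDE is the command behind it); the job name, program, queue, and time are illustrative:

```
/* Run the end-of-day job unattended every night at 23:00 */
ADDJOBSCDE JOB(EODCLOSE) CMD(CALL PGM(APPLIB/EODCLOSE)) +
           FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(230000) JOBQ(QGPL/QBATCH)
```

FRQ(*WEEKLY) with SCDDAY(*ALL) gives a daily run; any parameters the operators key in today would need to be preloaded or retrieved from a data area, since nobody is at a screen when the entry fires.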
