Reducing the AS/400 Job completion time

Can anyone help me understand the different ways of reducing AS/400 job completion time?

Answer Wiki


Not too sure what you are requesting here; can you be more specific?
Do you want to improve job performance through the system or through the program?

Improved Question:
Basically, I wanted to know: is there any way I can set priorities or make my job run faster than its normal completion time?

For example, I know from past experience that a job runs 3 hours. Is there any way I can reduce its run time to less than 3 hours, either before submitting the job or while the job is active?


To reduce the run time, reduce its resource requirements or increase the resources available to it. This requires knowing, or learning, what the job uses or needs. We have no idea what your job might need. It would help if we had an idea what the job does. Otherwise, this is just random thoughts.

If the job becomes I/O bound, then review how I/O is managed. Ensure proper indexing, for example, rather than creating temporary indexes. Try sequential-only processing wherever it fits, and reorg files to the order you’ll be reading them. And maybe reorg simply to compress excessive deleted records out. If conflicts exist with other jobs, eliminate them as far as possible.
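As a sketch of the I/O-side fixes above, in CL (the file, library, and program names here are made up for illustration):

```
/* Read CUSTMAST front to back with sequential-only blocking; a large */
/* block count reduces the number of physical I/O operations.         */
OVRDBF  FILE(CUSTMAST) SEQONLY(*YES 1000)
CALL    PGM(MYLIB/BATCHPGM)
DLTOVR  FILE(CUSTMAST)

/* Reorganize the file to squeeze out deleted records and put it in   */
/* the order the job will read it.                                    */
RGZPFM  FILE(MYLIB/CUSTMAST) KEYFILE(*FILE)
```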

If the job is memory constrained, then take control of your memory pools. Review everything you have that runs in *BASE, and reroute it to shared pools instead. (This includes IBM tasks.) Begin organizing work into a set of pools. Once you have a basic organization, start the performance adjuster. (Running it without initial organization might actually make things a little worse.) Provide memory to your job when it runs by reducing use of memory elsewhere.
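A hedged sketch of the pool-related commands involved (the pool size, subsystem name, and routing entry number are examples only):

```
/* Enlarge a shared pool and set its activity level.                        */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(500000) ACTLVL(20)

/* Point a subsystem's routing entry at that pool so its work leaves *BASE. */
/* Pool ID 2 here assumes *SHRPOOL1 is defined as pool 2 in the subsystem   */
/* description.                                                             */
CHGRTGE SBSD(MYLIB/BATCHSBS) SEQNBR(9999) POOLID(2)

/* Start the automatic performance adjuster once pools are organized.       */
CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')
```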

If the job is CPU constrained, then raise its run priority. Or reduce its competition.
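For a CPU-constrained job, the change is a single CL command; the job name below is hypothetical:

```
/* A lower number means higher priority; interactive work defaults */
/* to 20, batch to 50.                                             */
CHGJOB JOB(123456/QUSER/BATCHJOB) RUNPTY(35)
```

The same change can be made from WRKACTJOB while the job is active, or set permanently in the class object the job's subsystem routing entry uses.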

If the job is performing serial operations that can be run in parallel, break the job into parts. Submit parallel sections to run in other jobs alongside the main job.
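For example, if the job processes records by key range, the ranges can be submitted as separate jobs. The program name, parameters, and job queue below are illustrative, and the job queue's subsystem entry must allow more than one active job:

```
/* Each piece works a different key range in parallel. */
SBMJOB CMD(CALL PGM(MYLIB/BATCHPGM) PARM('A' 'M')) JOB(PART1) JOBQ(MYLIB/PARJOBQ)
SBMJOB CMD(CALL PGM(MYLIB/BATCHPGM) PARM('N' 'Z')) JOB(PART2) JOBQ(MYLIB/PARJOBQ)
```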

If the job is accessing many objects at different times and uses private authorities, then run the job under an *ALLOBJ profile. Eliminate authority lookups. If the job is making many dynamic calls to programs that open and close files, especially if the files are accessed multiple times, then rework them so that they’re left open. Consider activating the programs into a named activation group or *CALLER and leave the programs active. If there are date “calculations” such as multiplication by (10000.01) that happen a million times, then replace them with decent conversions.
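Keeping a frequently called program active is a compile-time choice; this CRTBNDRPG example uses made-up names:

```
/* A named activation group keeps the program, its static storage, and */
/* its open files active between calls instead of tearing them down.   */
CRTBNDRPG PGM(MYLIB/DATECONV) SRCFILE(MYLIB/QRPGLESRC)
          DFTACTGRP(*NO) ACTGRP(BATCHAG)
```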

If the job is performing extensive SQL operations, then look closely at how the SQL statements are structured, how they’re executed and what they run over.
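One common win is a permanent index over the selection columns, so the optimizer stops building a temporary index on every run (the table and column names below are placeholders):

```
-- Permanent index over the columns the job selects and orders by.
CREATE INDEX MYLIB.ORDHIST_IX1
    ON MYLIB.ORDHIST (CUSTNO, ORDDATE);
```

PRTSQLINF OBJ(MYLIB/BATCHPGM) OBJTYPE(*PGM) prints the access plans the optimizer chose, which shows whether an index is actually being used.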

There are far too many possibilities to list.

Regardless, make certain that the work being performed happens inside an appropriate system. Do you have enough memory? Is your DASD capacity strained? Are you using the appropriate processor? Are you running programs that actually need to be run and are they performing operations that need to be done?
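A quick way to answer those questions is the standard status displays:

```
WRKSYSSTS   /* pool sizes, activity levels, page-fault rates */
WRKDSKSTS   /* DASD percent-used and arm utilization         */
WRKACTJOB   /* CPU use per job and each job's run priority   */
```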


Discuss This Question: 8 Replies

  • philpl1jb
    Lower the run priority number (a lower number means higher priority) and increase the time slice -- probably won't do much.
  • jinteik
    If you have many processes on the queue and you would like to make them run faster, use NWAY.
  • Powerprogram
    Thanks, Jason and Philip! Jason, could you please elaborate on using "NWAY"? I am not familiar with this.
  • philpl1jb
    As Tom said, if the job uses SQL, creating the correct indexes can make a major impact on performance. Phil
  • Powerprogram
    This helps me Tom. Thanks a lot!
  • Abigail
    First, if the problem is your configuration (memory, storage), there may not be much you can do without an upgrade. When possible, schedule long-running processes for non-peak hours. You may want to change the pools from *FIXED to *CALC with system value QPFRADJ at 3. You can always create jobs to change these back after non-peak hours, but you may want to leave them set and see if performance improves; just monitor your faulting levels. You can also change the class the job(s) run under (priority and time slice). Take a look at the program(s) and make sure they are optimally coded. It may only be the program(s) that need to be changed.
  • Kirubaharan89
    I have a job that runs every day as a scheduled job. The job takes just 3 to 7 minutes to complete its run. If I submit the job manually, it takes more than 2 hours. I want to know the reason. Can anyone help me with this?

    Note: both jobs have run priority 50 and time slice 5000, but "used CPU%" for the scheduled job stays under 100. When I submit the same job manually, "used CPU%" starts increasing and crosses 100%.
  • ToddN2000
    Kirubaharan89: Running manually, it may be conflicting with file usage by interactive users if it runs during business hours. There could also be record locks it has to deal with, for example. It would also be fighting for CPU with any async jobs that run all day until they are ended. If you schedule it to run off hours, there are less likely to be any bottlenecks in the processing.
