iSeries QBATCH – 3 options

65 pts.
Tags: iSeries, QBATCH
Our site uses just the one batch job subsystem and generally this is fine, but at certain times of the year two departments compete to run quite hefty reports. The users don't like it when their jobs get stuck in the queue, as some of these jobs take a while to run. QBATCH is set to *NOMAX, but the job queue releases only one job at a time. I know I could set up a new subsystem and even assign dedicated memory (not that I've done it before...), but I'm wondering if it would be enough to simply create a second job queue feeding into QBATCH for the other department. I'm guessing that would mean no impact on the main dept when only their jobs are running, but when both are busy, at least two jobs could run simultaneously. I know it will be slower, but does anyone have a view on whether it's worth a try? We have a 525, not the biggest machine in the world, but reasonably capable.

Software/Hardware used:
iSeries, AS/400
ASKED: September 13, 2012  8:14 PM

Answer Wiki


You said QBATCH is set to *NOMAX; I am assuming that is for the Subsystem itself.

The Subsystem has JOBQs associated with it. You can set an activity level (Max Active) for each associated JOBQ. Then you can go a step further and set a “Max by Priority” level.

You can think of the Priority levels as you would think of additional Job Queues.
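
As a rough illustration, both limits sit on the job queue entry and can be adjusted with CHGJOBQE. The names below are the usual shipped defaults (QSYS/QBATCH subsystem description, QGPL/QBATCH job queue) and may differ on your system:

  CHGJOBQE SBSD(QSYS/QBATCH) JOBQ(QGPL/QBATCH) MAXACT(2) MAXPTY5(1)

That would let the queue release up to two jobs at once, but only one of them at priority 5, so a second "lane" can be carved out of a single queue without creating anything new.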

How is the Job Queue determined when a job is submitted? Can the user control that or will you have to change a bunch of programs?
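
For reference, the queue is normally taken from the JOBQ parameter on SBMJOB, which defaults to *JOBD, i.e. whatever queue the submitter's job description names, so users usually don't pick it themselves (MYRPT and DEPTREPORT below are just placeholder names):

  SBMJOB CMD(CALL PGM(MYRPT)) JOB(DEPTREPORT) JOBQ(*JOBD)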

You can write a program that monitors the Job Queue and changes priorities or moves jobs to a different job queue.
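
A monitor like that would mostly end up issuing CHGJOB against jobs still waiting on the queue; done by hand it might look like this (the qualified job name and queue are made up, and CHGJOB can only re-route a job while it is still sitting on a job queue):

  CHGJOB JOB(123456/DEPTB/BIGREPORT) JOBQ(QGPL/DEPTBJOBQ) JOBPTY(3)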

You need to look at the whole picture and determine what is easiest to maintain and control.

 

Discuss This Question: 3 Replies

 
  • TomLiotta
    You can add a second job queue entry with ADDJOBQE. You might want the very long jobs submitted to one of the queues and the regular jobs to go to the other. Job descriptions or the SBMJOB command would normally be used to set the queue. That's a fairly simple fix (a command-level sketch follows these replies). Post back here if more info is needed. -- Tom
    125,585 points
  • BritishGent
    Thanks for the answers. No, the users won't be choosing which jobq to use; that will be determined by the job description associated with their department/profiles. I was thinking I could set the maximum jobs each queue will release into QBATCH at any one time to one, so both queues together would mean at most two jobs being processed simultaneously in QBATCH. I have no idea how much this would slow down each job, or whether things would work better with two separate subsystems. It's still a single processor doing the work, and a separate subsystem just means less storage for QBATCH - ? From what I've read, separate subsystems just give you fine-tuning control over time slices, etc. and the ability to start and stop subsystems independently, should you need it. It doesn't mean jobs overall will finish any faster. But I'm happy to be put right! (That's why I'm on this forum.)
    65 points
  • TomLiotta
    "It doesn't mean jobs overall will finish any faster." Although that depends on what the jobs are doing, i.e. what interference exists between them, it generally should mean that they'll finish sooner -- overall.

    If the jobs are updating the same files, they can block each other at points. If they require exclusive use of the same objects, they can also block each other. But if they're 'normal' database processing jobs working with different sets of files, they should interleave their use of the CPU, the disk controllers and other resources, and 'overall' they should finish sooner.

    Think of one job that normally takes an hour to run and six other jobs that take 10 minutes each. Run in series, the jobs would take two hours overall to complete. Run in parallel (e.g., the one long job alongside the six jobs run in sequence), they ought to all finish in perhaps an hour and 10 minutes, maybe less, maybe a little more.

    It's still up to you to determine the sequence in which jobs should run. If a job requires another job to finish first, you have to manage the scheduling. But if there are no dependencies between jobs, things will generally go better if you have them all running at the same time, no matter how many CPUs you have.

    The vast majority of jobs use very little of the CPU. Other system resources get used between short bursts of CPU activity. With multiple jobs, the CPU gets passed from job to job every time a job has to wait on a different resource, such as records being read from or written to disk or to a printer file.

    That's a simplified view of things, but it can be used as a basis for understanding what goes on with subsystems and the management of work.

    Tom
    125,585 points
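
Pulling the ADDJOBQE suggestion and the one-job-per-queue idea together, a command-level sketch might look roughly like the following. All of the DEPTB* names are invented for illustration; QGPL/QBATCH is the queue normally attached to the QBATCH subsystem, but check WRKSBSD before changing anything:

  CRTJOBQ JOBQ(QGPL/DEPTBJOBQ) TEXT('Dept B batch reports')
  ADDJOBQE SBSD(QSYS/QBATCH) JOBQ(QGPL/DEPTBJOBQ) MAXACT(1) SEQNBR(20)
  CHGJOBQE SBSD(QSYS/QBATCH) JOBQ(QGPL/QBATCH) MAXACT(1)
  CHGJOBD JOBD(QGPL/DEPTBJOBD) JOBQ(QGPL/DEPTBJOBQ)

With both entries at MAXACT(1), the subsystem takes at most one job from each queue, so two jobs run concurrently when both departments are busy, and nothing changes for the main department when only its queue has work.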
