How can I split a job by a set number of records?

Tags:
Application development
AS/400
CLP
DataCenter
IBM DB2
RPG
RPGLE
I have a problem. My transaction file has about 20,000 records and is used as an input/update file. The day-end job that processes it takes a long time to run, and I want to reduce that time. How can I split the job into 4 jobs, each processing only 5,000 records? How can I arrange it as follows?

job 1 processes records 1 to 5000
job 2 processes records 5001 to 10000
job 3 processes records 10001 to 15000
job 4 processes records 15001 to 20000

If you have another idea, please help me.

Answer Wiki


Hi,

Try OPNQRYF.
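(OPNQRYF selects on field values rather than relative record numbers, so a split along these lines would divide the file on a key-field range. A minimal sketch, assuming a hypothetical transaction file TRANSPF with a numeric field TRNNBR and a processing program TRNPGM:)

```
/* Job 1 of 4: open TRANSPF selecting only this job's key range.  */
/* File, field, and program names here are placeholders.          */
OVRDBF     FILE(TRANSPF) SHARE(*YES)
OPNQRYF    FILE((TRANSPF)) +
           QRYSLT('TRNNBR >= 1 *AND TRNNBR <= 5000')
CALL       PGM(TRNPGM)
CLOF       OPNID(TRANSPF)
DLTOVR     FILE(TRANSPF)
```

The other three jobs would be identical except for the range in QRYSLT.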

Nikhil

==============================================================

The best answer can’t be determined without knowing why 20000 transactions take too long. E.g., it may be because the transactions are applied against very large files with many logicals that must be updated. Breaking the file up might not make much difference if index maintenance is taking the time. A description of the files — physicals and related logicals — is a first step there.
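(As a first step, DSPDBR lists the logical files and access paths built over a physical file; each one must be maintained on every update. MYLIB/TRANSPF below is a placeholder name:)

```
/* List the files and access paths dependent on the physical file */
DSPDBR     FILE(MYLIB/TRANSPF)
```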

A basic system description (model, memory, disk arms, OS VRM) would be useful too. If memory is a serious problem, running four jobs instead of one might make things worse.

In fact, we don’t even know what “too long” means. Maybe the job takes 30 minutes but the desire is to bring the window down to 15 minutes so a staff member can leave 15 minutes earlier.

Tom

Discuss This Question: 7  Replies

 
  • Jadima
    There are several possibilities:
    1) Do a CPYF from record 1 to 5000 into a file in QTEMP, then OVRDBF to this QTEMP file in your program. In the next job, do the same but from record 5001 to 10000, and so on.
    2) Use OPNQRYF to do the same thing as described above.
    3) If your application runs that long for only 20,000 records, then maybe there is something wrong in the logic, or the process is too hard to perform. Split the program into modules (ILE) that perform the I/O, and review the logic stream. Maybe you will find out why it takes so long.
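    (Option 1 in the reply above can be sketched in CL. File and program names are placeholders, and each of the four jobs would use its own FROMRCD/TORCD range:)

    ```
    /* Job 1 of 4: copy this job's slice of the transaction file   */
    /* into QTEMP, then override so the program reads the copy.    */
    CPYF       FROMFILE(MYLIB/TRANSPF) TOFILE(QTEMP/TRANSPF) +
               FROMRCD(1) TORCD(5000) CRTFILE(*YES)
    OVRDBF     FILE(TRANSPF) TOFILE(QTEMP/TRANSPF)
    CALL       PGM(TRNPGM)
    DLTOVR     FILE(TRANSPF)
    ```

    (Note that updates made to the QTEMP copy would need to be copied back to the original file if the program updates records in place.)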
  • Michlw
    An RPGLE example that processes a file by relative record number using the RECNO keyword. The *INZSR subroutine positions at relative record 100 with CHAIN, and the main loop stops once the record number passes 103:

         Finput     if   e             disk    recno(rcd#)
         D rcd#            s              9p 0

          /free
             // Position was set in *INZSR, which runs at program start
             dow '1';
                read input;
                if %eof(input) or rcd# > 103;
                   leave;
                endif;
                // ... process the record here ...
             enddo;
             *inlr = *on;
             return;

             // Runs once at program start: position to relative record 100
             begsr *inzsr;
                rcd# = 100;
                chain rcd# input;
             endsr;
          /end-free
  • JBurelle
    Maybe we are looking at this from the wrong angle. Are you looking to speed up your end-of-day job? Are users accessing this file during end-of-day? Does it run interactively or in batch? Is there enough memory allocated to the subsystem that runs this job? How big is your machine? 20,000 records isn't a huge file; is it a big record? Tell us more about the job itself.
  • HUGOsarm
    From my point of view you only need to fix your RPG program and create a CL to run it 4 times. Add one parameter to your RPG program (its value tells which run this is: '1', '2', '3' or '4') and include some logic to position to relative record 1, 5001, 10001 or 15001. Something like this:

    The SETLL operation positions the file at the next record with the given relative record number. The file must be a full procedural file (identified by an F in position 18 of the file description specifications). You can put these instructions in the *INZSR subroutine so they run only once, when the RPG program starts:

         C     *INZSR        BEGSR
         C     Runtime       IFEQ      '1'
         C     1             SETLL     recordformat
         C                   ELSE
         C     Runtime       IFEQ      '2'
         C     5001          SETLL     recordformat
         C                   ELSE
         C     Runtime       IFEQ      '3'
         C     10001         SETLL     recordformat
         C                   ELSE
         C     Runtime       IFEQ      '4'
         C     15001         SETLL     recordformat
         C                   ENDIF
         C                   ENDIF
         C                   ENDIF
         C                   ENDIF
         C                   ENDSR

    Now your program starts reading the file by relative record depending on the parameter, and after that uses its normal read of recordformat. (Each job also needs a test to stop once it has read past the last record of its own range.)

    After fixing the RPG, write a CL that runs it four times, for example:

         SBMJOB CMD(CALL PGM(yourprogram) PARM('1')) JOBQ(QGPL/QS36EVOKE)
         SBMJOB CMD(CALL PGM(yourprogram) PARM('2')) JOBQ(QGPL/QS36EVOKE)
         SBMJOB CMD(CALL PGM(yourprogram) PARM('3')) JOBQ(QGPL/QS36EVOKE)
         SBMJOB CMD(CALL PGM(yourprogram) PARM('4')) JOBQ(QGPL/QS36EVOKE)

    QS36EVOKE is a standard job queue that allows many batch jobs to run at the same time. I hope this can help you.
  • Jwebb901
    Don't bother splitting up your file; you should look elsewhere to solve your problem. 20,000 records isn't a big file, so you must have another bottleneck you're not aware of.
  • Maverick64
    I agree with several others that you could be creating a bigger problem rather than speeding anything up. Knowing details about the machine (number of processors, disk, IOPs, etc.) and what you are trying to accomplish (just trying to speed up one job?), as well as what the current program does, would be helpful. Also, have you watched the job run and looked at the call stack? Many programs have inefficient coding that, once fixed, can speed things up quite a bit (arrays vs. data structures, chaining for each record rather than at a higher level, indexed access as opposed to record by record).
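    (To watch the job and inspect its call stack as suggested above, the running job can be found and displayed from CL. A sketch; the subsystem and job identifier are placeholders:)

    ```
    /* Find the active batch job, then display its program (call) stack */
    WRKACTJOB  SBS(QBATCH)
    DSPJOB     JOB(123456/QUSER/DAYEND) OPTION(*PGMSTK)
    ```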
  • 200573
    Yeah, I agree with everyone here. Check whether the program is creating this file every day with new records or, if the records exist, just updating the file. Check that routine if you haven't checked it already.
