How to make a batch job run faster (other than changing the run priority): code changes?

Tags:
Application development
AS/400
Billing and customer care
Billing Support Systems
DataCenter
Development
Performance/Load
RPG
RPG ILE
RPGLE
Software
Hi, I have a batch RPG program used to do some updates and reporting; currently the job takes around 1 sec to process a record. We want to reduce this time further. Changing the run priority may make the job run a bit faster, but are there any other parameters that can be used? I am also looking at other approaches, such as best RPG coding practices for performance. Certain code sections have a SETLL and a consecutive READE statement (using the same key); will using a single CHAIN operation have a performance benefit?
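For reference, the two patterns I am comparing look roughly like this (just a sketch; the file and field names are made up):

     FCUSTFILE  IF   E           K DISK
     D custNo          S              7P 0
      /free
        // Pattern A: SETLL positions to the key, READE then reads the
        // first record with an equal key -- two I/O operations.
        setll custNo CUSTFILE;
        reade custNo CUSTFILE;

        // Pattern B: CHAIN positions and reads in one operation, so it
        // saves a call when the record data is actually needed.
        chain custNo CUSTFILE;
        if %found(CUSTFILE);
          // ... process the record ...
        endif;
        *inlr = *on;
      /end-free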
ASKED: August 10, 2006  6:46 AM
UPDATED: December 15, 2009  12:23 AM

Answer Wiki


One thing you might want to look at is the access path you are using. For example, if you are reading a non-keyed physical file, you might get better performance by reading a keyed logical file, or by using embedded SQL to retrieve your data set.
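As a rough illustration (compiled as SQLRPGLE; all names here are invented), reading the set with embedded SQL might look like:

     D custId          S              7P 0
     D balance         S             11P 2
      /free
        // Declare and open a cursor over just the rows the job needs,
        // in the order it needs them.
        exec sql DECLARE c1 CURSOR FOR
                   SELECT CUSTID, BALANCE
                     FROM CUSTMAST
                    WHERE STATUS = 'A'
                    ORDER BY CUSTID;
        exec sql OPEN c1;
        exec sql FETCH c1 INTO :custId, :balance;
        dow sqlcod = 0;
          // ... update/report logic ...
          exec sql FETCH c1 INTO :custId, :balance;
        enddo;
        exec sql CLOSE c1;
        *inlr = *on;
      /end-free

The WHERE and ORDER BY clauses are the point: the database hands back only the rows the job actually needs, already in processing order.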

Discuss This Question: 10  Replies

 
  • MODMOD
    I've spent a good portion of the last year working to reduce disk and processor usage on our system. I've managed to drop our average processor usage by around 50% overall, but it's a big job. Sometimes cutting overhead is a simple process, but mostly it requires a very intimate knowledge of how the data and programs work, as well as the available logicals and options. I'm guessing you're dealing with a large file, and the 1 sec per record processing is beginning to add up. The easiest and quickest solution (although rarely the best) is to archive unnecessary data from any of the files accessed. A big question is how you're looking at the data. Do you have just a couple of updates and a lot to report? Are you looping each record through the update logic regardless of whether an update is necessary? Do you have a way to identify updates and separate the update process from the reporting process? If so, you might consider limiting your program to just doing the necessary updates, and write a separate process to do the report (Query or SQL). Of course, I'm not an RPG programmer, so I don't know how efficient RPG is at processing large reports against huge files. I primarily deal in SQL, so I know that many times I can do exactly what one of our RPG processes is doing to generate a report using SQL, but take 5 seconds instead of 5 minutes. My guess is this is my lack of understanding of RPG talking, and that you could tune the RPG for similar results, but I would have no idea where to tell you to begin there.
  • Kevin999
    You can look into different commands or even different languages to do the same thing; that may save you a bit. Another way of speeding up the job may be to put it in a subsystem (SBS) that has the memory behind it to run faster. Also, separate the update processes and reporting processes into individual programs within the job. Good luck.
  • Coop4bama
    I believe the greatest amount of efficiency can be gained by modifying the RPG program that actually does the updates/reporting. Tweaking system performance, job priority, and other non-RPG parameters may help, but not much will improve if the program is not written correctly. I would look at whether records can be retrieved in blocks by an embedded SQL command; these records would then be placed in a multiple-occurrence data structure in memory. Also, records from some files might be read into arrays one time. Then, instead of doing a CHAIN for each primary record to these files, you can perform a lookup in the arrays that are already loaded into memory. Any time you can reduce the number of times the program has to access the disk, you greatly reduce the processing time.
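    A minimal sketch of the array approach (file and field names are invented, and the reference file is assumed small enough to fit in memory):

     FITEMREF   IF   E             DISK
     D itemKeys        S              5P 0 DIM(9999)
     D itemDescs       S             30A   DIM(9999)
     D refCount        S             10I 0 INZ(0)
     D wantedItem      S              5P 0
     D i               S             10I 0
      /free
        // Load the small reference file into memory once at startup.
        read ITEMREF;
        dow not %eof(ITEMREF) and refCount < %elem(itemKeys);
          refCount += 1;
          itemKeys(refCount) = ITEMNO;    // key field (assumed name)
          itemDescs(refCount) = ITEMDS;   // description (assumed name)
          read ITEMREF;
        enddo;

        // Later, for each primary record, instead of CHAIN ITEMREF:
        i = %lookup(wantedItem : itemKeys : 1 : refCount);
        if i > 0;
          // itemDescs(i) is the match -- no disk access required.
        endif;
      /end-free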
  • Snapper70
    There may be file optimizations that could be done as well. How many logical files do you have? Are you updating key fields for any of these? Are they unique keys, or can you defer updating those logicals? How many records are in the file, and how many of those are getting updated? Look at the job information while it is running and figure out how many I/Os go to each of the files involved. Anything surprising, like 10x as many accesses to a table/reference file? 1 record per second sounds a little slow, but that may be just the records in the master file, with reference files being repeatedly scanned; if those are small, they could be stored in a table, or at least SORT the file so that processing tends to access disk more efficiently.
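    For example (the job name below is a placeholder), the open files and per-file I/O counts of a running job can be displayed like this:

     /* Find the job in its subsystem, then display its open files    */
     /* and I/O counts (job number, user, and name are placeholders). */
     WRKACTJOB SBS(QBATCH)
     WRKJOB    JOB(123456/BATCHUSR/UPDJOB) OPTION(*OPNF)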
  • Godzilla
    Also, make sure all tables you are updating have the appropriate indexes. Good luck.
  • Mistoffeles
    It comes down to the old adage: divide and conquer. Separate the processes of storing, querying, processing, updating, and reporting on data. Putting together what others have already mentioned, with a few additional bits, what you should do is: consider whether using a database, with proper SQL queries, would make retrieval of the records you wish to change, and only those records, much quicker. Consider whether you need to use RPG and why, or whether you could make use of the database's built-in programming capability to make the updates you need. This could speed things up greatly, as modern databases with properly set up indexes offer much better data-processing throughput than old, report-oriented procedural languages, processing thousands of transactions per second across multiple massive, related tables at the high end. Modern databases also possess greatly enhanced report-generation capabilities, including advanced sorting and grouping and much more.
  • MrVegas
    Many years ago, IBM added an optimization feature to the compilers. In short, an optimized program is more efficient (runs faster). The default when a program is compiled is NOT to optimize (optimizing takes longer to compile). Recompile the RPG program and optimize it to the maximum level. You did not say if the program is ILE or OPM; there are more optimization levels for ILE. In any case, with OPM RPG the optimize option is on the GENERATION OPTIONS keyword (GENOPT); with ILE RPG it is on the OPTIMIZATION LEVEL keyword (OPTIMIZE). Next, file processing in a program can sometimes be made faster with a similar technique in Control Language: before the RPG program is called, use the OVRDBF command for each file being processed. Two parameters in particular may speed up file processing (use the F1 key on each to read IBM's comments on faster file processing):
      SEQONLY - limit the file to sequential-only processing
      NBRRCDS - the number of records retrieved at once
    Using either of these features means you don't have to modify the RPG code.
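    A sketch of those commands in CL (library, program, file names, and blocking values are placeholders):

     /* OPM RPG: optimize via the GENOPT keyword.                     */
     CRTRPGPGM  PGM(MYLIB/UPDJOB) SRCFILE(MYLIB/QRPGSRC) +
                  GENOPT(*OPTIMIZE)

     /* ILE RPG: optimize via the OPTIMIZE keyword.                   */
     CRTBNDRPG  PGM(MYLIB/UPDJOB) SRCFILE(MYLIB/QRPGLESRC) +
                  OPTIMIZE(*FULL)

     /* Raise the blocking on each file before calling the program.   */
     OVRDBF     FILE(CUSTMAST) SEQONLY(*YES 1000)
     OVRDBF     FILE(CUSTHIST) NBRRCDS(1000)
     CALL       PGM(MYLIB/UPDJOB)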
  • JohnComp
    Since you are already using CHAIN and SETLL, it is obvious you already have the indexes built that you need. However, you are using the READE operation, which is one of the most error-prone in the language (I have seen it fail multiple times, mainly with binary key values). And if you are using both a CHAIN and a READE structure to the same file in the program, you really need to rethink your logic flow. While some of the responses suggest using SQL, if you are not familiar with embedded SQL and you are trying to speed up a program under the normal business time frame to get "something done", now is not the time to try to learn it. It is a wonderful tool to learn and use when appropriate, but just like any other tool, it has its advantages and its drawbacks. My suggestion would also be to separate the update and reporting processes to begin with; this allows you to rerun reports without affecting the data values. Then replace the READE logic with READ logic followed by a direct comparison of the key values. Then, when doing the report phase, set the blocking factor using OVRDBF to approximately twice the number of records you anticipate reporting on for a single value of the key fields. For instance, suppose you are reporting on open AR where you have multiple bills for a single customer: what is the average number of bills per customer? Set your blocking factor on the override to twice that value. Will you get too many records on some disk hits? Yes, most of the time; but you will hit the disk only once for each customer, and disk access is still the slowest process you will be doing.
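    A sketch of that READ-and-compare loop (file and field names are invented; CUSTNO is assumed to be the key field of the file):

     FARDETAIL  IF   E           K DISK
     D wantedCust      S              7P 0
      /free
        // Position once, then use READ (not READE) and compare the
        // key ourselves, stopping as soon as the customer changes.
        setll wantedCust ARDETAIL;
        read ARDETAIL;
        dow not %eof(ARDETAIL) and CUSTNO = wantedCust;
          // ... report this bill ...
          read ARDETAIL;
        enddo;
      /end-free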
  • JonP
    As far as I can see, nobody has yet mentioned the easiest change of all: add BLOCK(*YES) to the F-spec for the file in question. Normally RPG suppresses blocking when a CHAIN or SETLL is used on a file. But if you _know_ that the CHAIN/SETLL is only being used to position to the beginning of a group of records that you will then retrieve sequentially, adding the BLOCK keyword forces the compiler to override the default behaviour and apply blocking to the file. The other suggestions you have had (the number-of-records override, for example) should also help. CHAIN is a better choice than SETLL _if_ you want the data from the record retrieved (as opposed to using it simply to position the file). Some recent tests that we ran indicated CHAIN was actually faster than SETLL in some cases even if you didn't need the data, but I wasn't able to quantify the exact conditions.
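    For example (file and key names invented), the change is a single keyword plus the positioning logic it protects:

     FCUSTHIST  IF   E           K DISK    BLOCK(*YES)
     D wantedCust      S              7P 0
      /free
        // BLOCK(*YES) keeps blocking on even though SETLL is used,
        // because the SETLL here only positions the file; everything
        // after it is a sequential read.
        setll wantedCust CUSTHIST;
        read CUSTHIST;
        dow not %eof(CUSTHIST);
          // ... sequential processing ...
          read CUSTHIST;
        enddo;
      /end-free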
  • TomLiotta
    First, find out what takes so long. That takes some analysis, and analysis means the environment must be in a state that can report meaningful numbers. That means you need to be sure your subsystems have rational memory pools assigned, so that they are not all stepping on each other and obscuring results. So, make sure you have memory pools created that let you segregate the different elements of your workload, then assign those pools to the appropriate subsystems and assign your system's workload into those pools. This includes all of those routing entries and prestart jobs from IBM that are currently routing everything into *BASE. The first thing is to get *BASE back to doing what it's intended to do, instead of the more likely situation where you have a few dozen or more important jobs (TCP/IP itself, TCP/IP servers, communications tasks, host servers) all clobbering *BASE at the same time. Once you have the system's work management decently aligned, start tracking your problem job and note what resources it's waiting on. Once it's known what the job spends its time doing, we can suggest changes. BTW: "...if you are using both a CHAIN and a READE structure to the same file in the program, you really need to rethink your logic flow." There is no reason to avoid using CHAIN and READE together; they can in fact work better than SETLL and READE. Tom
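    For illustration, a minimal sketch of that pool setup in CL (names and sizes are placeholders, not tuning recommendations):

     /* Give batch work its own shared pool so its behaviour can be   */
     /* observed, and tuned, in isolation.                            */
     CHGSHRPOOL POOL(*SHRPOOL2) SIZE(512000) ACTLVL(10)

     /* A subsystem whose pool 1 maps to that shared pool.            */
     CRTSBSD    SBSD(MYLIB/BATCHSBS) POOLS((1 *SHRPOOL2))
     CRTJOBQ    JOBQ(MYLIB/BATCHQ)
     ADDJOBQE   SBSD(MYLIB/BATCHSBS) JOBQ(MYLIB/BATCHQ) MAXACT(1)
     ADDRTGE    SBSD(MYLIB/BATCHSBS) SEQNBR(10) CMPVAL(*ANY) +
                  PGM(QSYS/QCMD) POOLID(1)
     STRSBS     SBSD(MYLIB/BATCHSBS)

     /* Then watch faulting and paging per pool while the job runs.   */
     WRKSYSSTS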
