Record Blocking Factor

I have found several different equations for calculating the correct blocking factor when using NBRRCDS in a CL program to optimize I/O. One stated (128K)/(record length), another (128K)/(record length + 1), another (1024*256)/(record length), and still another (131072)/(record length + 1), with different variations in the record length and the extra one byte. If the total record length of my file is 104 (per DSPFD), how would I determine the correct and most efficient blocking factor? Does the iSeries block in 256K increments? We also have a batch job whose nightly runtime is slowly growing without a comparable increase in file size. Running out of ideas.

Answer Wiki


Use 32K as the block size. Calculate the number of records as follows: 32K / record length = records per block. Discard the decimal portion of the result.
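As a quick sketch of the arithmetic above (assuming the 32K figure means 32,768 bytes; `blocking_factor` is an illustrative name, not an OS/400 API):

```python
def blocking_factor(block_size_bytes, record_length):
    """Records per block: divide block size by record length, discard the decimal."""
    return block_size_bytes // record_length  # integer division truncates

# For the questioner's 104-byte record with a 32K (32,768-byte) block:
print(blocking_factor(32768, 104))  # -> 315
```

The result would then feed the NBRRCDS parameter of the override, e.g. `OVRDBF FILE(MYFILE) NBRRCDS(315)`.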

Discuss This Question: 4  Replies

  • Fanbot
    P.S. Forgot to mention: in the event that the file definition changes, a hard-coded calculation will degrade performance. The best approach is to retrieve the file definition prior to execution and use the retrieved record length for the calculation.
  • Rchevalier
    The 32K size was increased to 128K several releases back. Divide it by the record length and round down. If you want the factor to adjust as the file changes, retrieve the record length and recalculate as stated previously.
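To illustrate what the 128K buffer yields for the 104-byte record from the question, alongside the "+1 byte" variant the questioner mentioned (a sketch assuming a 131,072-byte buffer):

```python
buffer_size = 128 * 1024   # 131,072 bytes, the larger buffer mentioned above
record_length = 104        # total record length from DSPFD in the question

print(buffer_size // record_length)        # 128K / reclen       -> 1260
print(buffer_size // (record_length + 1))  # 128K / (reclen + 1) -> 1248
```

Either way, rounding down is what matters: the block must hold only whole records.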
  • Ddomic
    Check the IBM Software Technical Document: "Blocking, Sequential Only, and the Effect on a Program, Block, Length".
  • BoomerSooner
    You say the file's size has been constant but the job's processing has slowed down. So what else has changed on your system? Are any other new batch jobs or system activities running at the same time? Does the problem job have a lower priority than other jobs? Is it in a memory pool that is now too small? And when did you last reorganize the file? Any of these could affect the job -- especially if you're not reusing deleted records in the file but are still deleting records. The total number of active records won't change much, but the total file size will, and deleted records can slow the process. If a job's behavior changes and nothing has been done with/to the job itself, look at something besides that particular job.
