Straight from the help text for QPFRADJ:
A change to this system value takes effect immediately.
But this might or might not be good for your system. Has the system been configured for performance adjustments yet? If not, the result might be nothing more than extra CPU cycles spent calculating adjustments, with no significant benefit.
The basic problem started with an AS/400 OS upgrade from V5R4 to V7R1. Since the upgrade we have seen performance degrade. As a suggestion we set QPFRADJ to automatic, and it went worse than what we had. Things that are running slow: 1) delete and insert on a huge partitioned file, 2) copy file commands in some of the jobs, again on huge files, 3) QM queries submitted through STRQMQRY.
In addition to that, we had DB2 Multisystem and DB2 SMP not installed, which we have not implemented, so we need to understand the possibility of performance improvement. Thanks; looking forward to more updates.
…and it went worse than what we had.
It doesn’t surprise me. The underlying principles of how QPFRADJ works make it easy to see why that happens, but the details aren’t relevant here. Because it happened, though, it tells us that your system needs appropriate basic tuning to match your workload. You should find someone who can do the initial configuration, then try QPFRADJ again.
…we had DB2 Multisystem and DB2 SMP not installed…
Can you clarify that? Was it previously on V5R4? Or has it only been acquired and not yet used?
What cume level is now installed on i 7.1?
TL11270 is the cume level installed on the system. Yes, the mentioned DB2 products were installed earlier on V5R4. About the initial configuration: can you give a hint about some of the basic configuration that needs to be monitored before going to the QPFRADJ parameter?
Thanks for your help; looking forward to your answer on the same.
If you can give hint to some of the basic configuration …
Simplest starting point is the WRKSBS command. Look for active subsystems that run jobs in system pool 2 (*BASE). Common ones are QSERVER, QSYSWRK and QUSRWRK.
If active jobs are running in *BASE, QPFRADJ can have a more difficult time making adjustments. For most work, *BASE shouldn’t be used. Most IBM-supplied jobs run in *BASE, but there’s no absolute reason to leave them there except that many admins are nervous about modifying IBM settings.
But the entire system is initially IBM-supplied. Every tiny detail that is created or changed from the moment a system is first powered up is a change from the initial IBM-supplied state. I’m not clear why there is resistance to changing work management items other than that few understand how work management works and what QPFRADJ does.
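As a sketch of that first check (the subsystem name shown is one of the common IBM-supplied ones; your active subsystems may differ):

```cl
WRKSBS                   /* Note which subsystems show jobs in system pool 2 (*BASE) */
WRKACTJOB SBS(QUSRWRK)   /* See which active jobs in that subsystem use the pool     */
WRKSYSSTS                /* Watch pool sizes and fault rates over an interval        */
```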
A fundamental principle of memory pool adjustments is that (a) memory is always moved out of *BASE into other pools and (b) memory only moves out of other pools into *BASE. I.e., it never moves directly from pool X to pool Y; it must move with *BASE as an intermediate.
From that principle, it might be easy to imagine what happens when there are active jobs running in *BASE. And when the jobs are higher priority such as communications and server jobs, the memory is not only practically guaranteed to be in use, it also causes regular paging interruptions in those jobs whenever a memory page eventually does get moved.
Almost none of that happens, though, when everything is in the basic IBM-supplied condition. That’s generally because, in that configuration, most active jobs simply run in *BASE and no other pools are requesting or releasing memory. In such cases, the QPFRADJ system processes don’t have much they can do. Mostly they just burn CPU cycles to find nothing to do.
So, first steps ought to be to configure two, three or four new shared pools and assign them to subsystems such as QSERVER, QSYSWRK, QUSRWRK, QCMN, etc., as secondary pools. Then start changing routing entries and prestart job entries to begin directing work into them.
I usually start with three new pools. One gets TCP/IP and its server jobs and general CMN jobs. A second gets host server jobs. The third gets my batch jobs. The objective is to have practically no explicitly assigned work in *BASE. (Subsystem monitors, QTSEPOOL jobs and some other work may continue to make use of *BASE.)
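A minimal CL sketch of that setup follows. It assumes *SHRPOOL1 through *SHRPOOL3 are currently unused; the sizes, activity levels, and the routing/prestart entries shown are placeholders to adapt, not values to copy:

```cl
/* Define three shared pools (sizes in KB and activity levels are examples only) */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(500000) ACTLVL(50)   /* TCP/IP and general CMN jobs */
CHGSHRPOOL POOL(*SHRPOOL2) SIZE(500000) ACTLVL(50)   /* host server jobs            */
CHGSHRPOOL POOL(*SHRPOOL3) SIZE(500000) ACTLVL(20)   /* batch jobs                  */

/* Attach the shared pools to subsystem descriptions as secondary pools */
CHGSBSD SBSD(QSYS/QSYSWRK) POOLS((1 *BASE) (2 *SHRPOOL1))
CHGSBSD SBSD(QSYS/QUSRWRK) POOLS((1 *BASE) (2 *SHRPOOL2))

/* Direct work into pool 2 of each subsystem via routing and prestart job entries */
CHGRTGE SBSD(QSYS/QSYSWRK) SEQNBR(9999) POOLID(2)
CHGPJE  SBSD(QSYS/QUSRWRK) PGM(QSYS/QZDASOINIT) POOLID(2)
```

The POOLID values refer to the pool’s position in that subsystem’s POOLS list, not to the system pool number.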
Once that’s done, let QPFRADJ do its work and start monitoring performance. One thing to watch is whether *BASE ever has spare, unused memory assigned to it. The amount of spare memory is an indication of whether the system’s workload is memory starved. Until that’s determined, there’s no really good way to know. There should be enough spare memory in *BASE to react to surges in demand, but not so much that it is wasted.
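For the monitoring itself, two commands cover most of it (shown here as a sketch; watch them over representative workload periods, not a single snapshot):

```cl
WRKSHRPOOL                    /* Review shared pool sizes as QPFRADJ adjusts them       */
WRKSYSSTS ASTLVL(*INTERMED)   /* Watch DB/non-DB faults per pool and spare *BASE memory */
```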
After work is separated into a basic breakdown, the specific activities in the various pools can be watched. If one pool is showing excessive paging or faulting, the breakdown helps determine which jobs are causing a problem. The same determination cannot be made when everything is running on top of everything else in *BASE; it could be any one or any combination of jobs. If a problem area is seen, an additional shared pool (or more) might be configured for that subsystem, and that workload might be further subdivided to narrow the problem scope again.
As far as work management and performance is concerned, you can’t really measure anything if everything is in the same pool. If you can’t effectively measure, you can’t do anything but guess. If the system functions have nothing to work with, their work is almost totally wasted.
Now, if additional shared pools have been configured and work is routed into them but *BASE is still filled with the usual QSERVER, etc., active jobs, there’s a potential for things to be worse. That’s not certain; it’s just not a ‘surprise’ if it happens. That can be when it’s hardest for QPFRADJ to be effective.
The standard IBM default configuration more or less handles everything equally. No one but you knows if your batch jobs, your server jobs or any other jobs need extra resources while others can get by with less. The default handles them the same primarily based on priority.
Although the IBM defaults are a reasonable starting point, it should never be forgotten that IBM sells memory, processor upgrades and services. More than a few processor upgrades are bought when it might just be memory (and management) that’s needed. And memory purchases can happen when little more than automated management is needed. IBM is happy to sell both. That’s part of their business. To their credit, they also sell a system that has everything needed to tell you when anything extra really is needed. Relatively few customers take advantage, choosing instead to go with more and faster hardware.
In short, configure to enable measurement. Then go in the direction that measurements indicate. That might be more memory or processor upgrades or more disk arms. But it might also be that different work scheduling or other “soft” changes can make better use of what is already there. Without something to measure, it’s guesswork.
Not sure what happened to my last comment from last night. I’ll wait another day before seeing if I can recreate the whole thing. — Tom
I am seeing major improvement in our processes after the two DB2 product installations. But still some jobs are running slow; most probably we need to follow some code-level enhancements to match the V7R1 features.
In addition to that, the difference I see in the jobs for which the copy command is running slow is that the wait file parameter is set to *IMMED. Can you please shed some light on the impact of this parameter in the case of multiple jobs accessing a file, with no exclusive lock applied on the file by the job that is running the copy command?
Thanks for your help on the same; looking forward to your response.
…the difference I see in the jobs for which the copy command is running slow is that the wait file parameter is set to *IMMED…
What “Copy” command? The CPYF command doesn’t have a “Wait file parameter”.
Can you please shed some light on the impact of this parameter in the case of multiple jobs accessing a file, with no exclusive lock applied on the file by the job that is running the copy command?
A file copy involves two files — the file copied from and the file copied to. Which file are you asking about?
The copy file command does not have a wait file parameter, but the files engaged in these copy commands have WAITFILE(*IMMED) in their file definitions (through CRTPF). The file for which the copy is running slow is a huge file: a daily file of around 0.5 million rows gets copied into a 2-billion-row file every day. Both files have the wait file parameter set to *IMMED. Similar copy commands on other files of similar volume are running fine, and they have 30 as the wait file value. This file mostly has a *SHRRD lock while being copied into as the TOFILE, so I just wanted to know the impact of wait file on the copy command.
We can continue the discussion that was missed, based on the cume level number on the system and the performance parameters that need to be set before setting QPFRADJ.
Thanks for your help and support; looking forward to your response.
I’m not aware of any reason that WAITFILE() would slow a CPYF operation. If it were not *IMMED, it might result in a slightly slower start to the operation if a wait was necessary. Once the copy starts, however, there should be no additional effect.
Any wait should happen when the file and drive(s) are being checked for availability. With *IMMED, no wait is allowed. If there is no availability, an error is signaled immediately.
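To illustrate where WAITFILE comes into play (the file and library names here are hypothetical), the attribute is set when the file is created, and a single job can override it without changing the file itself:

```cl
CRTPF FILE(MYLIB/DAILY) RCDLEN(100) WAITFILE(*IMMED)  /* fail at once if allocation can't be granted */
OVRDBF FILE(DAILY) WAITFILE(30)                       /* this job alone waits up to 30 seconds       */
CPYF FROMFILE(MYLIB/DAILY) TOFILE(MYLIB/HISTORY) MBROPT(*ADD)
```

Either way, the wait applies only while the files are being allocated at open time; once CPYF is moving records, WAITFILE has no further effect.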
I still don’t see my earlier comment, so I’ll add a small part.
If you can give hint to some of the basic configuration …
Simplest starting point is the WRKSBS command. Look for active subsystems that run jobs in system pool 2 (*BASE). Common ones are QSERVER, QSYSWRK and QUSRWRK. Remember that this relates to QPFRADJ.
In general, set work management configurations to route work into pools other than *BASE if you want QPFRADJ to have a positive effect and you want to make decisions based on performance metrics.