Performance Management helps you maintain peak performance, as well as plan for future capacity. The server routinely generates a monthly report that helps you analyze application performance, capacity planning, and system performance. Upon review of the results, you can work with your IT team to adjust your systems for peak efficiency.
For more information, see the IBM website.
Systems are mostly shipped by IBM in default configurations. IBM cannot know what your applications need in terms of performance, so the defaults are completely general: they favor neither database, interactive, computation, server, nor any other kind of work.
However, the OS/400 line (including i5/OS and later) comes with all of the elements needed to carve work management areas out of the full system image and to tune each area separately for specific types of workloads. The container type for these areas is the subsystem description object.
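As a sketch, carving out such an area means creating a subsystem description with its own pool assignments. The library and subsystem names below (MYLIB, BATCHSBS) and the choice of *SHRPOOL3 are illustrative, not anything IBM ships:

```
/* Create a subsystem description whose second pool is a shared pool */
CRTSBSD SBSD(MYLIB/BATCHSBS) +
        POOLS((1 *BASE) (2 *SHRPOOL3)) +
        TEXT('Batch work, tuned separately')

/* Start the subsystem when its entries are in place */
STRSBS SBSD(MYLIB/BATCHSBS)
```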
The need for performance tuning arises when an application area -- interactive user response time, server response time, batch job completion time, whatever -- does not deliver results as quickly as needed or interferes too much with other, unrelated areas. Tuning is highly individual to each site, since each has its own kinds of programs and database structures, and each business works in its own way.
Work management involves routing different types of jobs into subsystems that have been tuned to handle that type of work. The starting point would be to create a set of 'shared memory pools' with the WRKSHRPOOL command (or the related APIs) and assign different ones to different subsystems. Perhaps a minimum of three shared pools is needed. (Pools for *INTERACT and *SPOOL already exist. Review the QSPL *SBSD to see a default secondary pool assignment.) It's usually easiest to assign these as additional pools rather than as the only pools for each subsystem. Then go through every routing entry and prestart job entry in each subsystem so that work routes into the separated shared memory pools instead of into *BASE, which is the default.
Note that this applies to <b>every</b> entry, including those from IBM for TCP/IP and the host and TCP/IP servers. Breaking things out of *BASE is the first necessary step in system performance tuning.
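The mechanics might look roughly like this, assuming a subsystem MYLIB/BATCHSBS whose second pool is *SHRPOOL3 (all object names, sizes, and the program name are illustrative):

```
/* Give the shared pool some memory (KB) and an activity level */
CHGSHRPOOL POOL(*SHRPOOL3) SIZE(512000) ACTLVL(25)

/* Point a routing entry at subsystem pool 2 (the shared pool) */
/* instead of pool 1 (*BASE)                                   */
CHGRTGE SBSD(MYLIB/BATCHSBS) SEQNBR(9999) POOLID(2)

/* Prestart job entries have the same POOLID parameter */
CHGPJE SBSD(MYLIB/BATCHSBS) PGM(MYLIB/MYSVRPGM) POOLID(2)
```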
The system performance adjuster works well only when work is kept out of *BASE. This follows from two strict principles: memory is always moved out of *BASE to the *SHRPOOLx pool that needs it, and memory is always returned to *BASE when a *SHRPOOLx releases it; memory never moves directly from one *SHRPOOLx to another. If work is running in *BASE, the memory may become allocated to that work and become unavailable for redistribution.
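Automatic adjustment is controlled by the QPFRADJ system value; once work is out of *BASE, a typical setting might be:

```
/* QPFRADJ: 0 = no adjustment, 1 = adjust at IPL,        */
/* 2 = adjust at IPL and dynamically, 3 = dynamic only   */
CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')
```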
When numerous work units are running in *BASE, it is practically impossible to determine what is using it. Performance measurements for different workloads become possible when workloads are isolated in their own pools. You can then see separated values for Waits, Faults, and other indicators, and you can determine which jobs are causing them. If ODBC or numerous other server jobs are in *BASE, there's no easy way to see what work is stepping on other work.
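Once workloads are isolated, the standard commands show per-pool and per-subsystem figures. For example (BATCHSBS is an illustrative subsystem name, not an IBM-supplied one):

```
/* Per-pool faults, pages, and activity levels */
WRKSYSSTS ASTLVL(*INTERMED)

/* Active jobs in a single subsystem */
WRKACTJOB SBS(BATCHSBS)

/* Shared pool sizes and tuning values */
WRKSHRPOOL
```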
Once all is neatly separated, it becomes a matter of detailed tuning. That would be better covered under a specific question.
The need for tuning is simply that it can often be better to improve how a system runs than to continue upgrading with faster processors and more memory. You avoid ending up with a poorly performing but very large and expensive system. The downside is that it takes a little study and some work.