find physical files with large record counts that are unused

AS/400 database
CL programming
IBM iSeries
Physical File
How can I capture the number of records (not the file size) for numerous physical files via a CL command, so that I can download the information and find the ones with the largest number of records? I need to free up disk space, so I want to find files that have records but have not been accessed in the last year or so. I need to capture all of the files into one master file that I can then analyze for each file's details, such as the number of records, the creation date, the last update date, etc.

Software/Hardware used:
AS/400 iSeries, RPG, CL

Answer Wiki


Use DSPFD (Display File Description) with output directed to an outfile.

Then you can run IBM Query across the file and generate reports and sort them how you like.
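A minimal sketch of that command (the library and outfile names here are placeholders, not from the question):

      DSPFD FILE(YOURLIB/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
            FILEATR(*PF) OUTFILE(QTEMP/MBRLIST)

The resulting outfile is built in the model format QAFDMBR, which includes fields such as MBNRCD (current record count) and MBNDTR (deleted record count) that Query or SQL can sort and total.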


Discuss This Question: 6  Replies

  • TomLiotta
    A DSPFD command that might be useful could be (the library and outfile names here are only examples):
          DSPFD FILE(YOURLIB/*ALL)
                TYPE( *MBR )
                OUTPUT( *OUTFILE )
                FILEATR( *PF )
                OUTFILE(QTEMP/MBRLIST)
    After that basic file is built, an SQL statement could narrow the result down to a list of all physical file members that are not source file members. One or two additional SQL DELETE statements could then remove all members that are not large enough to make a significant difference, either because of their total size or because they don't have many deleted records. That part is fairly easy. The tricky parts come next, especially if you have very large files and limited spare DASD space, or if your large files have excessive logical files built over them. Tom
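    A hedged sketch of such a statement, assuming the DSPFD outfile was written to QTEMP/MBRLIST in the QAFDMBR member format (verify the exact field names against that format on your release):

          SELECT MBLIB, MBFILE, MBNAME, MBNRCD, MBNDTR
            FROM QTEMP/MBRLIST
           ORDER BY MBNDTR DESC, MBNRCD DESC

    Adding a WHERE clause on the format's source-file indicator field would exclude source members from the list.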
  • philpl1jb
    You will want to check deleted records as well as current records; they both take space. You may also discover a lot of obsolete members. To repack a file and remove its deleted records, use RGZPFM. Phil
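    For example, to reorganize one file (the names are placeholders):

          RGZPFM FILE(YOURLIB/YOURFILE) MBR(*FIRST)

    Note that RGZPFM typically needs an exclusive lock on the member while it repacks, so it is usually scheduled for off-hours.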
  • TomLiotta
    ...find the ones with the largest number of records? Finding the largest number of records isn't really what you want. You want to find files with the largest amount of unused space, and these might be files with the fewest current records: a file might have ten current records but unused space left over from a million deleted records. That's what Phil is getting at. In an important sense, that means you might want to look for the largest files rather than the files with the most records.

    Further, if you have SQL tables that include LOBs or large variable-length fields, it can be much more complicated. Even with REUSEDLT(*YES), the auxiliary space will not reuse deleted space unless a new row has less overflow than the deleted row. This can lead to significant fragmentation in the auxiliary space. It will not be apparent from looking at the counts of current and deleted records; it will only be apparent by comparing the size of the file against the total size of all of the rows. Tom
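    As a sketch of the REUSEDLT point, a file can be changed so that future inserts reuse deleted-record space (names are placeholders):

          CHGPF FILE(YOURLIB/YOURFILE) REUSEDLT(*YES)

    This only affects space reuse going forward; it does not return space already allocated to deleted records, which is what RGZPFM is for.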
  • Lovemyi
    Or you can run the disk information collection and the resulting library reports to get that in a report sorted by library size, library name, or other sort values. Type GO DISKTASKS and run option 1 to collect the information. This can be scheduled to run once a week, preferably during a quiet time such as Sunday. Option 2 then lets you select a variety of reports, such as summary only or library detail by size, name, or other sort orders, with parameters such as including detail, listing only the library size, or listing only libraries bigger than xxxMB. There are lots of options. You can also select by owner or by date, and all the reports show the last-used date as well as the creation date, etc. Lovemyi
  • hafwhit
    Is your system ASP usage slowly increasing, or did you see a large jump? It may not be your physical files causing the problem. Have you checked on spool files? There could be someone running a lot of reports and not printing or deleting them. It could be the IFS as well. I agree with Lovemyi about running the report from the DISKTASKS menu; it will give you a breakdown of how the system is being used. Here is a percentage breakdown from one of my systems. You will get more information than this, but it might give you an idea of what the report provides.
                                                    % of
    Description                                     Disk
    User libraries                                 37.66
    User directories                                8.74
    Folders and documents                           1.87
    QSYS                                            2.58
    Other IBM libraries                             5.75
    Licensed Internal Code                          1.58
    Temporary space                                 3.96
    Unused space                                   37.19
    Internal objects                                 .47
    Objects not in a library                         .00
    TOTAL                                          99.80
  • philpl1jb
    I cannot imagine deleting records except in a coherent way based on the application. Surely you would not delete unpaid balances and their supporting documents. Typically, data for certain subsystems is kept for two or three years, depending on the laws and the customs of your organization. Phil
