Performance improvement for building access paths for LFs / copying data to PFs

Tags:
AS/400
AS/400 objects
Physical File
Dear Experts,

I have an installation process that installs objects (including PFs and LFs) into the client environment. Our existing process works in this manner:

1. Install the physical file.
2. Copy the data into the physical file from the archive.
3. Copy the logical files.

Because all of these steps run in sequence, there is considerable downtime, and I need to modify the process to reduce it. I have two approaches in mind:

1. Install the physical file.
2. Build the logical files.
3. Copy the data.

Or:

1. Install the physical file.
2. Submit a job to copy the data.
3. Install the logical files.

Please suggest the better way to achieve this. It would be great if you could suggest a new approach that reduces the process time.

Thanks,
Pradeep

Software/Hardware used:
AS400


Discuss This Question: 17 Replies

  • TomLiotta
    1. Install Physical File. 2. Submit Job for Copying Data. 3. Install Logical Files.

    What does "install" mean? Are you restoring? Are you running SQL CREATE TABLE or CREATE VIEW statements? Are you compiling DDS? Also, what will you copy data from if it's just being "installed"? How much data is being copied into how many files? How many LFs are there? Are there any multiple-member files? How are all of the files indexed? Are the indexes shared by any of the files? And if so, what order are you installing them in?

    Before any good answer can be given, the actual process needs to be described.
  • deepu9321
    Tom, please consider the example below. We have a product installed in the customer environment. When we provide a service pack to the customer, the process is:

    1. Archive the objects being installed (objects from the environment library to the archive library).
    2. Install the new objects (objects are restored from the install library to the environment library).
    3. Restore the data from the archive (copy data from the archive library to the environment libraries).

    Objects can be *PGM, *FILE, *DTAQ, etc. As for size, I have scenarios where some customers have considerably large data and more than 15 logical files. I need the process change to allow for huge database changes and the logical files involved.

    Please let me know if you require any other details. Regards, Pradeep.
  • CharlieBrowne
    The first step is to determine how long each step currently takes. Then it is easier to make adjustments and determine how any change affects the overall time.

    Can you restore the PFs before restoring all the other objects? If so, you can start the CPYF process while the rest of the restore is going on. You should also copy the data before building your LFs. Here are some options:

    o Have the attributes of the LFs (or maybe just selected LFs) be MAINT(*DLY).
    o Multithread your jobs. Have a job for the CPYF of each of the PFs, then either submit a job to build the LFs for that PF or just do it in the same CL. (See the sketch below.)
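    Roughly what that multithreaded option could look like in CL, as a sketch only. The library and file names (ARCLIB, ENVLIB, CUSTPF, CUSTLF1, QDDSSRC) are placeholders, and a real driver would loop over the installed file list:

        /* Submit one batch job per PF so the copies run in parallel.   */
        /* QSYSNOMAX allows an unlimited number of active jobs.         */
        SBMJOB CMD(CPYF FROMFILE(ARCLIB/CUSTPF) TOFILE(ENVLIB/CUSTPF) +
                     MBROPT(*REPLACE) FMTOPT(*MAP *DROP)) +
               JOB(CPYCUSTPF) JOBQ(QSYS/QSYSNOMAX)

        /* Create the LF with delayed maintenance so the copy above     */
        /* does not pay the index-update cost row by row.               */
        CRTLF FILE(ENVLIB/CUSTLF1) SRCFILE(ENVLIB/QDDSSRC) +
              SRCMBR(CUSTLF1) MAINT(*DLY)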
  • CharlieBrowne
    Deep, I just saw your new post. Instead of using CPYF, consider using CHGPF; that merges the time of the CPYF and the building of the LFs into one step.
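    For reference, a sketch of that CHGPF form, with placeholder names again. CHGPF applies the new DDS, keeps the existing data, and takes care of the dependent LFs in the same operation:

        /* One step: apply the new DDS, keep the data, and handle the   */
        /* dependent logical files.                                     */
        CHGPF FILE(ENVLIB/CUSTPF) SRCFILE(ENVLIB/QDDSSRC) SRCMBR(CUSTPF)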
  • deepu9321
    Charlie, your first answer seems similar to my current process. Physical files are installed first, and then logical files, modules, service programs, and programs. Will it matter if I multithread jobs for the CPYF alone? (While the LFs are being installed in sequence, I think it will throw an error if a member is locked.) And will it be fine to use MAINT(*DLY) on the CRTLF command?

    As for your second suggestion, it might require a lot of changes to the installation process and to the installation driver file structure as well (since we do not record the corresponding PF for each LF, it would require changes at the promotion level to populate our audit file with all the required details). That is one I will probably not get approval to proceed with. Also, I will not be able to use the CHGPF command, as we supply sources as part of a different package.

    Regards, Pradeep.
  • CharlieBrowne
    Deep, you really want to wait to create the LFs until after the CPYF is done. I do not think performance is as bad as it used to be, but it is much better to wait. If you use MAINT(*DLY) you should be OK.

    You might also run a test using CHGPF to see what difference that makes for performance. If so, you would only need to supply the source for the PF, and anything in that source a client can find anyway by looking at the file definition. If you have comments in the source that you do not want them to see, consider creating a special source PF. If you are using DDS, copy only the records that do not have an * in position 7. (See the sketch below.)
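    One possible way to build that special source PF, sketched with placeholder library and member names, is CPYF with INCCHAR selecting only the records that do not have an * in position 7 of the source data:

        /* Copy the DDS source, dropping comment lines (an * in         */
        /* position 7 of the source data field SRCDTA).                 */
        CPYF FROMFILE(DEVLIB/QDDSSRC) TOFILE(SHIPLIB/QDDSSRC) +
             FROMMBR(CUSTPF) TOMBR(CUSTPF) MBROPT(*REPLACE) +
             INCCHAR(SRCDTA 7 *NE '*')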
  • deepu9321
    Charlie, thanks for your comments. I will try the CHGPF command to check the performance improvement, and I should also try job multithreading, which seems close to my existing process. I will give these a try and get back to you. Thanks, Pradeep.
  • philpl1jb
    *DLY - the logical file's access path will be updated or rebuilt each time the logical file is used. This may slow down processes like interactive screens, with a potential impact every time an access path is used. Phil
  • CharlieBrowne
    *DLY - If the access path is built, nothing happens. If it is not built, the first time a program tries to access it, it will be built.
  • philpl1jb
    *IMMED would add/update the keys when records are added/updated. *DLY would check and update or rebuild the keys when the access path is used by a job. If the data in the file is static, the difference between these would be relatively minor.
  • TomLiotta
    Source for PFs and LFs doesn't need to be hidden, since all attributes are visible anyway. (Comments might be an issue, but they aren't likely to contain secrets.)

    However, LFs could be restored or created along with the PFs, and data can be copied into the PFs without any performance issues. That can be done by having no members in the LFs. ADDLFM can be submitted to QSYSNOMAX to add members for the LFs after the CPYF finishes for each file, and the CPYFs themselves can be done in parallel also. (But by doing those things, you're using a big chunk of the system's database processing capability. If you have large files and many files, you'll be using all DASD arms at once. Do you want all customer jobs to grind to a halt while your upgrade runs?)

    Any LF indexes should be analyzed to ensure that index sharing is optimized, of course. With as many as 15 LFs over single PFs, it's unusual if no sharing is possible.

    Regardless, essentially none of that should be done at all unless there are actual changes to the database definitions. And then it should only be done for the specific PFs or LFs that have the changes. Since it's a customer system, only minimal customer CPU, memory, and DASD should be used.

    I've written customer upgrades where single tables range up to 100s of GBs, so it can be an issue. Other than simply taking time, the 'timing' of actions can be an issue in itself.

    A busy (or just slow) customer system takes time to get work done. If members are being rebuilt, each member's access path should be tested for validity before any program is allowed to process that file. One way to do that is to loop through a list of the PFs and issue OPNDBF for each one that is keyed, and then issue CLOF without actually reading any records. By opening the file, the program will wait until the access path finishes being created by DB2. (The system's DB2 processes are not instantaneous.) If it's already finished, the CLOF happens immediately, and the next keyed PF is processed. That should be done after the CPYFs are run and before any LF access paths are built.

    For LF access paths, those can be tested after they're built by retrieving the database relations over each PF, then running the QUSRMBRD API with format MBRD0300. This provides the status of the access path. If the status shows that the AP is not valid or is held, the process can wait a few seconds and try again. (The number of seconds and the number of retries is a matter of choice, as is the action to take if neither indication clears up.)

    A customer's database cross-reference is not something that you want to disturb. It's also not something that you can guarantee will be in a valid state when your upgrade process starts. You might run RCLDBXREF *CHECK before even starting the upgrade and look for a CPF32AC message. If the error message appears, you should notify the customer and abort the upgrade.

    In short, there's really no way we can tell you how to run your upgrade. We can discuss things, but we don't have experience with your product or with your customers' systems. We don't know exactly how your current process is written, so we don't know how much effort you'd need to put into making changes. Any number of things can be done. We don't know if they'll help or if they'll cause more trouble than they're worth. By the sound of it, the whole database needs to be reengineered. If a design has trouble, it's hard to know good ways to handle it.

    Tom
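    A rough CL sketch of the cross-reference check and the access-path wait described above, with the same placeholder names as before and the per-file loop elided:

        /* Before the upgrade: verify the database cross-reference.     */
        RCLDBXREF OPTION(*CHECK) LIB(ENVLIB)
        MONMSG MSGID(CPF32AC) EXEC(DO)
            /* Cross-reference in error: notify the customer and abort. */
            SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
                      MSGDTA('DB cross-reference in error - upgrade aborted') +
                      MSGTYPE(*ESCAPE)
        ENDDO

        /* After each CPYF: open the keyed PF for input so this job     */
        /* waits until DB2 finishes building the access path, then      */
        /* close it without reading anything.                           */
        OPNDBF FILE(ENVLIB/CUSTPF) OPTION(*INP)
        CLOF OPNID(CUSTPF)

        /* Then add the LF members in parallel batch jobs.              */
        SBMJOB CMD(ADDLFM FILE(ENVLIB/CUSTLF1) MBR(CUSTLF1)) +
               JOB(ADDCUSTLF1) JOBQ(QSYS/QSYSNOMAX)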
  • CharlieBrowne
    *IMMED would update the indexes while the CPYF is running. *DLY would wait until a program actually uses that view. So the installation would complete faster with *DLY, but you take the hit when someone first tries to use the file.
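    If that first-use hit matters, one compromise (my assumption, not something stated above) is to flip the maintenance attribute around the copy, so the rebuild happens inside the install window rather than at first use. Placeholder names as before:

        /* Defer index maintenance while the data is loaded.            */
        CHGLF FILE(ENVLIB/CUSTLF1) MAINT(*DLY)

        CPYF FROMFILE(ARCLIB/CUSTPF) TOFILE(ENVLIB/CUSTPF) +
             MBROPT(*REPLACE) FMTOPT(*MAP *DROP)

        /* Restore immediate maintenance; an invalidated access path    */
        /* gets rebuilt here, during the install, not at first use.     */
        CHGLF FILE(ENVLIB/CUSTLF1) MAINT(*IMMED)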
  • philpl1jb
    Right, Charlie, you'll take a hit the first time the file is used. If it's a dynamic file with records being added between uses, you will take a hit every time someone uses the file.
  • TomLiotta
    I added a fairly detailed comment a while ago, but it's not showing up yet. As is often the case, there's no obvious reason for a delay, i.e., no links, (no profanity,) etc. Wait until morning ITKE time to see it, I suppose. -- Tom
  • deepu9321
    Tom, thanks for your detailed comment. There is nothing we want to hide from the customer in the PF/LF sources, but in our existing process we provide the sources and the objects on two different DVDs.

    The customer system will be down while our upgrade is running, and I want this change mainly to reduce that downtime for the customer. Our upgrade process installs only the file changes that are part of a service pack; but when a customer upgrades from one release/version to a new release/version, it installs and copies the data for all database files. Recently we had a customer upgrading from release/version 3.1.8 to 11.5, which involved a lot of database changes (almost all files) and functional changes.

    I will give your proposed approach a test run and analyze the risks and benefits.

    Regards, Pradeep.
  • TomLiotta
    "Customer system will be down when our upgrade is running."

    It's not clear what that means. If the system is down, does that mean subsystems will be ended? What system objects will be available, e.g., which subsystems? Running through multi-threaded job queues means a batch subsystem must be active to run the jobs.

    Tom
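    As a small guard for that, assuming the parallel jobs go through QSYSNOMAX under the QSYSWRK subsystem (substitute whatever subsystem and job queue the real process uses):

        /* Make sure the batch subsystem is active before submitting    */
        /* the parallel jobs; ignore "subsystem already active".        */
        STRSBS SBSD(QSYS/QSYSWRK)
        MONMSG MSGID(CPF1010)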
  • deepu9321
    Tom, in our application we create a subsystem under which all application-related jobs run. During the upgrade, all of these jobs/subsystems are ended; all the other subsystems remain active. Regards, Pradeep.
