How to define a job to a file so that when the file is updated, the job is submitted

180 pts.
Tags:
AS/400 FTP
FTP
FTP Server
How do I define a job to a file so that when the file is updated, the job is submitted? The file must first capture all records before the job is submitted. An FTP job will populate the file.

Software/Hardware used:
iSeries, CLP & RPGLE

Answer Wiki


Call a subroutine in the module to do the update, then fire the program (CALL) if an update occurs, based on the status indicator of the write or update. Basically you are making a control point in the program where all updates occur. (Anyone have a job for a rusty AS/400 programmer? Let me know. Thanks!)

Discuss This Question: 14  Replies

  • TomLiotta
    Same file every time? Streamfile or database file? Tom
    125,585 pointsBadges:
    report
  • aderene
    Hi TomLiotta. Thanks for taking the time to address my question. Same file every time, and it's a data file populated via FTP. The FTP job that populates this file is a daily job. After processing the data, this file is then cleared.
    180 pointsBadges:
    report
  • TomLiotta
    I don't have a good answer. I'm not sure one really exists. But I can supply a variety of comments that might help define the scope.

    The obvious solution is to have the FTP session submit a command before it ends. That's not always easy to get trading partners to do. For a database file, triggers may sometimes be used, but the events don't always match up with what you need. An *INSERT trigger, for example, would fire before (or after) every individual row, and it won't tell you when the file is eventually closed. Similarly, an audit journal entry (e.g., T/ZC -- object change) or even an FTP exit program doesn't fire on the close of the file. Nor does the QIBM_QDB_OPEN exit program. The exit programs also have the slight disadvantage of firing for all FTP or file-open events; you'd want extra coding to ensure that whatever action you take in the program is conditioned for the desired file and for FTP only, and otherwise results in just a quick return to the exit point.

    One possibility could be a QIBM_QDB_OPEN exit program that (1) verifies the name of the file and library, (2) checks the job name to ensure that it's running in an FTP server job, then (3) does a SBMJOB of the actual worker program and returns. The worker program might start by attempting to lock the file in question as *EXCL. As long as a lock is unsuccessful, you might assume that the file is still being written to. (The lock _might_ stay held in FTP until that FTP instance ends.) You might need to experiment with WAIT() times to learn the timing behavior in your environment. Almost any of the possible ways to monitor -- audit journal, FTP exit program, DB_OPEN or trigger -- would need to coordinate with a separate 'worker' job due to the lag before any file close occurs. Someone else might have a clever approach.

    Also, you might consider restricting your worker job to a single instance at a time. There might be various ways that multiple FTPs get initiated one after the other -- network service drops, perhaps. You don't want two different worker jobs interfering with each other. You could submit workers to either a single-threaded or multi-threaded job queue. Single-threaded would mean that the worker would want to mark the current task as being done so that the next (unnecessary) worker would know that it should simply end. Multi-threaded would mean that the worker could set a lock on some object; perhaps it could lock the worker *PGM object itself. If another worker tried to start, it'd run into the existing lock and know that the task was already being handled.

    Right now, nothing else that might be useful is coming to mind. I hope I haven't made it more confusing. If specific questions on details of any of the areas I mentioned are needed, ask away. Tom
    125,585 pointsBadges:
    report
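
A minimal CL sketch of the lock-and-wait worker Tom describes. All names (MYLIB/FTPFILE, MYLIB/PROCESS) and the 30-second retry delay are assumptions, not anything from the original question:

```
/* Hypothetical worker: waits until FTP has released the file,   */
/* then processes it. MYLIB/FTPFILE and MYLIB/PROCESS are        */
/* assumed names.                                                */
PGM

RETRY:
  /* Try for an exclusive lock; while FTP is still writing, the  */
  /* allocation fails with CPF1002, so wait and try again.       */
  ALCOBJ OBJ((MYLIB/FTPFILE *FILE *EXCL)) WAIT(30)
  MONMSG MSGID(CPF1002) EXEC(DO)
    DLYJOB DLY(30)
    GOTO CMDLBL(RETRY)
  ENDDO

  /* Lock obtained: the file is closed, so it is safe to process */
  CALL PGM(MYLIB/PROCESS)
  DLCOBJ OBJ((MYLIB/FTPFILE *FILE *EXCL))
ENDPGM
```

As Tom notes, the lock might be held until the FTP session ends rather than at file close, so the WAIT() and delay values need experimentation.
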
  • aderene
    Thank you for your response. I will give some of your suggestions a try.
    180 pointsBadges:
    report
  • BigKat
    If the records being FTP'd are individual transaction records, the AFTER INSERT trigger would work, allowing you to submit a job to process the record and then delete it (since you said it is cleared after use). If the FTP'd records are a batch, though, you will have to come up with a method like Tom discussed to determine when it is done.
    8,210 pointsBadges:
    report
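
Attaching the AFTER INSERT trigger BigKat mentions would look something like this (file and trigger program names are assumptions):

```
/* Hypothetical: fire MYLIB/INSTRG after each row inserted into  */
/* the FTP target file.                                          */
ADDPFTRG FILE(MYLIB/FTPFILE) TRGTIME(*AFTER) TRGEVENT(*INSERT) +
         PGM(MYLIB/INSTRG)
```
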
  • aderene
    Thanks, BigKat, for taking the time to look over my question. I had thought of using a trigger, but I rejected it because all the records that are FTPed into the database file need to be there before a processing job is submitted to process all the records at one time. I have an approach, but it is not classy at all: 1. Have the FTP job load the database file. 2. Schedule a job to process the just-populated file. I tested this approach and it does work. I was just wondering if it could be done from the file perspective.
    180 pointsBadges:
    report
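
aderene's two-step fallback can be set up with a job schedule entry; a sketch, assuming the FTP reliably finishes before 06:00 and MYLIB/PROCESS is the processing program (names and time are assumptions):

```
/* Run the processing program daily at 06:00 */
ADDJOBSCDE JOB(PROCFTP) CMD(CALL PGM(MYLIB/PROCESS)) +
           FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(0600)
```

The weakness, of course, is the fixed time: if the FTP runs late, the scheduled job processes an empty or partial file.
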
  • Littlepd
    IFSTOOL is a library containing some ILE-RPG facilities for the Integrated File System (IFS). This library is available as a download from Easy400.net. Command MONIFS is part of the IFSTOOL library. Command IFSTOOL/MONIFS starts, in database file IFSTOOL/MONIFS, a non-ending trace of IFS events. By adding triggers (command ADDPFTRG) to this table, you may obtain the same level of control/automation on the IFS that you may achieve on files of the library system. IFSTOOL
    1,130 pointsBadges:
    report
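
Following Littlepd's description, once the MONIFS trace is running, reacting to IFS events would be a matter of a trigger on the trace table. The trigger program name is an assumption:

```
/* Hypothetical: react to IFS events recorded by IFSTOOL/MONIFS  */
ADDPFTRG FILE(IFSTOOL/MONIFS) TRGTIME(*AFTER) TRGEVENT(*INSERT) +
         PGM(MYLIB/IFSEVTTRG)
```
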
  • BigKat
    Aderene, would it be possible for you to have the FTP'd data contain a final record that can be recognized by the trigger program, such as all character fields = '*' and all numeric = 0? The AFTER INSERT trigger program could just look at the inserted data, and if it found that record, submit the processing job. The problem is you are dependent on being able to make the FTP jobs add the final record (which could be a problem if the senders are external to your organization).
    8,210 pointsBadges:
    report
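
A hedged CL sketch of BigKat's sentinel idea: a trigger program that checks each inserted record and submits the processing job when the agreed end-of-data marker arrives. The buffer positions follow the standard external trigger buffer layout; the sentinel test (first byte '*') and MYLIB/PROCESS are assumptions:

```
/* Hypothetical AFTER *INSERT trigger program (CL version)       */
PGM PARM(&TRGBUF &TRGLEN)
  DCL VAR(&TRGBUF) TYPE(*CHAR) LEN(32767)
  DCL VAR(&TRGLEN) TYPE(*CHAR) LEN(4)
  DCL VAR(&NEWOFS) TYPE(*INT)  LEN(4)
  DCL VAR(&POS)    TYPE(*INT)  LEN(4)

  /* New-record offset is binary(4) at positions 65-68 of the    */
  /* trigger buffer; it is 0-based, %SST is 1-based.             */
  CHGVAR VAR(&NEWOFS) VALUE(%BIN(&TRGBUF 65 4))
  CHGVAR VAR(&POS) VALUE(&NEWOFS + 1)

  /* Sentinel check: first byte of the new record image is '*'   */
  IF COND(%SST(&TRGBUF &POS 1) *EQ '*') THEN(DO)
    SBMJOB CMD(CALL PGM(MYLIB/PROCESS)) JOB(PROCFTP)
  ENDDO
ENDPGM
```
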
  • aderene
    Thanks for the suggestion BigKat, but the FTPing organization is external (B of A). It would take an act of God to get them to put a delimiter at the end of their FTP file.
    180 pointsBadges:
    report
  • aderene
    To littlepd, thanks for your idea. I am going to try it this morning.
    180 pointsBadges:
    report
  • BigKat
    Well, good luck Aderene! Sorry I couldn't help more.
    8,210 pointsBadges:
    report
  • TomLiotta
    Long shot -- It might be worth investigating the job call stack during a few different FTP sessions. Maybe one program in the stack becomes active when the session begins the transfer and leaves the stack when the transfer completes. Your trigger might look to see if that program is in the stack above your trigger program. If it is, then maybe you could send a scope message for that *PGM boundary. When that program ends, your scope message would go into effect. I've used scope messages for useful effects, but haven't investigated this possibility. If it works, be aware that it might not work after an upgrade. Tom
    125,585 pointsBadges:
    report
  • TomLiotta
    And be sure only to send one scope message. Your trigger would need to work out a way to remember that it had already been done. I sometimes use an extra activation group to store such data in; they go away automatically when the job ends and your scope-program probably should RCLACTGRP to delete it. Tom
    125,585 pointsBadges:
    report
  • PGMBOB
    Perhaps a looping program to find when any records have been added: RTVMBRD with NBRCURRCD, then test for an exclusive file lock. If you can't lock it exclusively, another job might still be adding records. Good luck!
    1,085 pointsBadges:
    report
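
A sketch of PGMBOB's polling idea: watch the record count until it stops growing, then confirm with an exclusive lock before processing. File name, program name and the 60-second delay are assumptions:

```
/* Hypothetical poller combining RTVMBRD and ALCOBJ              */
PGM
  DCL VAR(&PRVRCD) TYPE(*DEC) LEN(10 0) VALUE(-1)
  DCL VAR(&NBRRCD) TYPE(*DEC) LEN(10 0)

POLL:
  /* Wait until the record count stops changing                  */
  RTVMBRD FILE(MYLIB/FTPFILE) NBRCURRCD(&NBRRCD)
  IF COND(&NBRRCD *NE &PRVRCD) THEN(DO)
    CHGVAR VAR(&PRVRCD) VALUE(&NBRRCD)
    DLYJOB DLY(60)
    GOTO CMDLBL(POLL)
  ENDDO

  /* Count is stable; make sure nothing still holds the file     */
  ALCOBJ OBJ((MYLIB/FTPFILE *FILE *EXCL)) WAIT(0)
  MONMSG MSGID(CPF1002) EXEC(DO)
    DLYJOB DLY(60)
    GOTO CMDLBL(POLL)
  ENDDO

  CALL PGM(MYLIB/PROCESS)
  DLCOBJ OBJ((MYLIB/FTPFILE *FILE *EXCL))
ENDPGM
```

A stable count alone isn't proof the transfer is done (the sender could simply pause), which is why the lock check is kept as a second gate.
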
