How to delete duplicate records from a physical file via an RPGLE program?

Profile: PreetMaan
Physical File
Hi All, I have a requirement: I have to delete all the duplicate records from the physical file Customer via an RPGLE program. Please help me. How can I achieve this? Regards, Manpreet

4 Replies to this discussion

  • azohawk

    Why RPGLE? I think SQL will do this much more easily.

    If you have to use RPGLE, you are going to need to read the records in sorted order (via a keyed logical file). Read the first record and save all of its fields into work fields, then read the second record. If its fields match all of the saved fields, delete the second record; if not, move its fields into the work fields. Read the next record and repeat the process until all of the records are processed. That is a lot of coding, since you have to compare every field, unless you can get by with comparing just the key fields.

    I'm not an SQL expert, but I believe there is an SQL statement that will remove duplicate records quickly and efficiently.
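    A minimal free-format RPGLE sketch of that read-and-compare loop, assuming a keyed logical file CUSTOMERL (record format CUSTR) over CUSTOMER and comparing only the key field CUSTNO; all of those names are made up for illustration:

    ```rpgle
    **free
    // Read the file in keyed order; delete any record whose key
    // matches the previous record's key.
    // CUSTOMERL, CUSTR and CUSTNO are assumed names.
    dcl-f customerl usage(*update : *delete) keyed;

    dcl-s prvKey like(custno);
    dcl-s first ind inz(*on);

    read customerl;
    dow not %eof(customerl);
       if first;
          first = *off;
       elseif custno = prvKey;
          delete custr;           // duplicate of the previous record
       endif;
       prvKey = custno;
       read customerl;
    enddo;

    *inlr = *on;
    ```

    To compare every field rather than just the key, the comparison would have to name each field in turn, which is the "lot of coding" mentioned above.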

  • Kaisersosa
    Found this a while ago and saved it. I have not tried it.

    Delete duplicate records, keeping the first one. Maybe a mistake in your application introduced duplicate keys into a file that you don't need. Here is a short SQL statement that will delete the duplicates. In this example, FILEA = file name, LIBA = library name, and INVNO = key field.

    DELETE
    FROM liba/filea F1
    WHERE RRN(F1) >
          (SELECT MIN(RRN(F2))
           FROM liba/filea F2
           WHERE F2.invno = F1.invno)

    You can use an input primary file in keyed sequence and make a level break on the keys. If the level-break indicator is on, it is a single (new-key) record; if it is off, it is the second or duplicate record. Good luck.
  • JohnD2
    When doing this in RPGLE, after sorting you can define an externally described data structure based on the file you are processing. This can be used for the comparison instead of comparing every individual field. You would have a field, say NEWDATA, that is the length of this data structure. Move it to another field called OLDDATA, and when you read the next record, compare NEWDATA to OLDDATA. If they are the same, delete the second record. I hope this makes sense.
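    A sketch of that data-structure comparison in free-format RPGLE. Since a data structure can be used as a fixed-length character field, one comparison covers every field in the record. The file CUSTOMERL, record format CUSTR, and physical file CUSTOMER are illustrative names:

    ```rpgle
    **free
    // Compare whole records via externally described data structures
    // instead of field by field. All file/format names are assumed.
    dcl-f customerl usage(*update : *delete) keyed;

    dcl-ds newData extname('CUSTOMER' : *input) qualified inz;  // current record
    dcl-ds oldData likeds(newData) inz;                          // previous record

    read customerl newData;
    dow not %eof(customerl);
       if newData = oldData;         // every field identical
          delete custr;              // delete the duplicate
       else;
          oldData = newData;         // remember this record
       endif;
       read customerl newData;
    enddo;

    *inlr = *on;
    ```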
