RPGLE record count

Tags:
RPG
Is there a way in RPGLE to determine if a file has 2 or more records with the same key without actually reading the records? I know we could use embedded SQL to count the number of records, but I don't think that would be very efficient in this case. Thanks for any help.

Answer Wiki


You could copy that file to a file in QTEMP using CPYF with the INCREL parameter to select only the records that match your key value.

Then execute a RTVMBRD over the temporary file, which returns the current number of records in the NBRCURRCD parameter as a 10,0 decimal value. A sketch follows.

Hope this helps.
Rex
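
For illustration, a minimal CL sketch of that approach, assuming placeholder names MYLIB/MYFILE and QTEMP/MYTEMP, a key field KEY1, and a test value 'VALUE'; RTVMBRD is a CL command, so this has to run in a CL program:

    PGM
    DCL        VAR(&NBRCURRCD) TYPE(*DEC) LEN(10 0)
    /* Copy only the records whose key matches the value */
    CPYF       FROMFILE(MYLIB/MYFILE) TOFILE(QTEMP/MYTEMP) +
                 MBROPT(*REPLACE) CRTFILE(*YES) +
                 INCREL((*IF KEY1 *EQ 'VALUE'))
    /* Retrieve the current record count of the copy */
    RTVMBRD    FILE(QTEMP/MYTEMP) NBRCURRCD(&NBRCURRCD)
    /* &NBRCURRCD of 2 or more means the key has duplicates */
    ENDPGM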

Discuss This Question: 13 Replies

 
  • WaltZ400
    Your request is a little vague. Is the file you are examining a keyed file? If it is keyed by what you are searching for, I have found that a SETLL by key and a READE loop works a lot faster than doing a CPYF and then having to call a CL program to check the record count (as the RTVMBRD command only works in a CL program). A sketch of the loop follows this reply.
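    For illustration, a minimal RPGLE sketch of that SETLL/READE approach, assuming placeholder names MYFILE (keyed file), MYFILER (record format), and key fields key1/key2; since the question only asks whether two or more records match, the loop can stop after the second hit:

        FMYFILE    IF   E           K DISK
        D count           S             10I 0
         /free
           setll (key1 : key2) MYFILER;       // position to the key
           reade (key1 : key2) MYFILER;       // read first match, if any
           dow not %eof(MYFILE) and count < 2;
              count += 1;
              reade (key1 : key2) MYFILER;    // next record, same key
           enddo;
           if count >= 2;
              // two or more records share this key
           endif;
           *inlr = *on;
         /end-free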
  • Stepheno
    I agree the CPYF would be more time consuming than doing the SETLL and READE loop. The file is keyed on the fields I'm searching for. I also knew that I could do a SETLL and then READE until I determined the number of records with my key. I was looking for something that wouldn't require me to actually read the records into memory. Is there perhaps some API that efficiently returns the number of records with the key I'm searching for?
  • Recuyer
    With SQL, you can do something like: SELECT key1, key2, COUNT(*) FROM yourfile GROUP BY key1, key2 HAVING COUNT(*) >= 2
  • WaltZ400
    Most of the database-type APIs I have found do nothing more than retrieve info about a file, similar to what DSPFFD, DSPFD, etc. do in CL. APIs are nice, but sometimes we have to fall back on the old tried-and-true RPGLE operations. Is there a reason you don't want to read the records into memory? If you don't want to overlay a record already read, try creating an externally described data structure using the EXTNAME(filename) keyword and an OCCURS(2). Always have the occurrence set to 1 when doing normal processing on the file; when you want to count records, set the occurrence to 2, count your records, and then set the occurrence back to 1. Your original record will be in memory again. If the reason is that you don't want a locked record that would cause a problem somewhere else, do a READE(N) when counting records. This will read the record without locking it. A sketch follows this reply.
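    For illustration, a sketch of that multiple-occurrence data structure trick, under the same placeholder names as above; the file is opened for update here so the READE(N) no-lock extender matters:

        FMYFILE    UF   E           K DISK
        D SaveRec       E DS                  EXTNAME(MYFILE)
        D                                     OCCURS(2)
        D count           S             10I 0
         /free
           %occur(SaveRec) = 1;     // normal processing fills occurrence 1
           // ... normal record processing here ...
           %occur(SaveRec) = 2;     // counting reads land in occurrence 2
           count = 0;
           setll (key1 : key2) MYFILER;
           reade(n) (key1 : key2) MYFILER;    // (N) = read without locking
           dow not %eof(MYFILE) and count < 2;
              count += 1;
              reade(n) (key1 : key2) MYFILER;
           enddo;
           %occur(SaveRec) = 1;     // original field values are back
         /end-free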
  • Stepheno
    Thanks for all the replies. The main reason we didn't want to have to read in all the records was to avoid the processing overhead of reading the data into memory. The only info we need from those records is whether there is more than one record with the same key. If there is, we set some values in another file based on that info. Again, thanks for everyone's input; I think we'll stick with the tried-and-true SETLL/READE loop combo.
  • EXPERTJohnBrandt
    You need to use the Retrieve Member Description (QUSRMBRD) API with format MBRD0200, which will tell you how many records match by key (key1 matches, key1/key2 matches). This applies only to the entire file; it is not based on a specific key value, only on the key list of the file. A calling sketch follows this reply. John Brandt iStudio400.com Home of iS/ODBC - MSSQL access FROM iSeries
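    For reference, a hedged sketch of calling QUSRMBRD from RPGLE; note that, per the next reply, MBRD0200 reports member-level statistics (for example, the current number of records, at decimal offset 140 per the documented MBRD0200 layout) rather than counts for a specific key value. QTEMP/MYTEMP is a placeholder tying back to the CPYF answer above:

        D RcvVar          DS           144
        D  BytesRtn               1      4B 0
        D  BytesAvl               5      8B 0
        D  NbrCurRcd            141    144B 0
        D ApiError        DS            16
        D  AeBytesPrv             1      4B 0 INZ(0)
        D QUSRMBRD        PR                  EXTPGM('QUSRMBRD')
        D  Receiver                   144A
        D  RcvLen                      10I 0 CONST
        D  Format                       8A   CONST
        D  QFileName                   20A   CONST
        D  Member                      10A   CONST
        D  OvrProc                      1A   CONST
        D  ErrorCode                   16A
         /free
           // Qualified name: 10-char file name + 10-char library
           QUSRMBRD(RcvVar : %size(RcvVar) : 'MBRD0200' :
                    'MYTEMP    QTEMP     ' : '*FIRST' : '0' : ApiError);
           // NbrCurRcd now holds the member's current record count
         /end-free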
  • Daveslateree
    Hi, the only way is via SQL: SELECT COUNT(*) INTO :count FROM file WHERE key1 = :variable1 AND key2 = :variable2. The overhead of doing this is probably greater than a SETLL followed by 2 x READE (an embedded version is sketched after this reply). Dave.
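    For illustration, a minimal embedded-SQL RPGLE sketch of that statement, under the same placeholder names; on releases where EXEC SQL is not allowed inside /free, the C+/EXEC SQL form shown in a later reply applies:

        D count           S             10I 0
        D variable1       S             10A
        D variable2       S             10A
         /free
           exec sql
              select count(*) into :count
                from MYFILE
               where key1 = :variable1
                 and key2 = :variable2;
           if sqlcod = 0 and count >= 2;
              // two or more records share this key
           endif;
         /end-free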
  • Rchevalier
    John, the API will return the number of unique keys for an access path, but I don't see where you can specify the actual key values. So it might return 20 distinct key values but not tell you how many fit the key list of key1 = 'iseries' and key2 = 'rpg'. I think the best way is to just set up a loop to read the file twice or until EOF is reached. Count the number of records read and check the value after the loop. If it's 2, then perform the other operations. Reading the records into memory will have a negligible effect on performance time. If that's really not an option, the SQL solution given above is the next best performer, and may even be comparable to the RPG solution given the correct indexing. To really speed it up, create an encoded vector index (EVI) with the same keys as the logical file. An EVI is essentially a table of how many rows fit each key value, just what is wanted. The optimizer uses it to determine access plans; since the answer is already provided in the index, there is no database work for the optimizer to do, just report back the value. The DDL is sketched after this reply. HTH, Rick
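    For illustration, a sketch of the one-time DDL for such an index, assuming placeholder names MYLIB/MYFILE_EVI; it can be run from STRSQL or RUNSQLSTM:

        CREATE ENCODED VECTOR INDEX MYLIB/MYFILE_EVI
            ON MYLIB/MYFILE (KEY1, KEY2)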
  • Password98
    Why not use SQL? It's great for doing stuff like that. I don't have access to an iSeries, so this comes from memory (note that the WHERE clause has to come before GROUP BY, and the cursor selects COUNT(*) so the fetch target matches the select list):

        c/exec sql
        c+ declare c1 cursor for
        c+    select count(*) from yourfile
        c+    where a = :valuefor_a and b = :valuefor_b
        c+    group by a, b
        c+    having count(*) = 2
        c/end-exec
        c/exec sql
        c+ open c1
        c/end-exec
        c/exec sql
        c+ fetch from c1 into :recordnumber
        c/end-exec
         /free
           if sqlcod = 100;
              // not true
           else;
              // it's true: there are 2 records with these
              // values in fields a and b
           endif;
         /end-free
        c/exec sql
        c+ close c1
        c/end-exec

    The more recent the OS/400 and DB2 version, the more efficient it gets!
  • Stepheno
    SQL is more efficient than RPG when you are dealing with larger sets of data. When you only need the count of a couple of records, RPG is more efficient. At least that's what the performance comparisons I've seen have shown. So in this case, since I only care if there is more than one record with the same key, RPG will be quicker.
  • Rchevalier
    Stepheno, I know a lot of people who would disagree with you. With SQL it's all in the setup. If you don't have the indexing to support what you are doing, it will perform poorly. If you do, it is very fast and getting faster all the time. If you have version 5 of iSeries Access, create a sample SQL statement and use the analyzer function to see how long it will take to execute. The analyzer will also suggest any indexing that might improve performance; times are shown in the section on the right. I routinely run these types of queries over files with around 400 million records, and response time is measured in milliseconds. Unless you are going to perform this operation numerous times (like 1 million or more), you are probably splitting hairs with the RPG versus SQL choice. Many times it comes down to preference. Rick
  • MichelleDavidson
    The editors of Search400.com asked a few users to take a look at this thread and send in their recommendations. Here's what Barney Lipps from Cincinnati said: I would use the following SQL statement: SELECT (field name), COUNT(*) FROM (file name) GROUP BY (field name) HAVING COUNT(*) > 1
  • MichelleDavidson
    Gary Senkbeil also wrote the Search400.com editors with a suggestion: If I understand your question correctly, you only need to know if there are duplicate keys in the file. That being the case, you can simply do the following: 1. Display the file description (DSPFD); near the bottom of the listing you will find the number of access path entries. 2. Compare that number to the number of unique partial key values using all key fields. The difference is the number of duplicate keys.
