Is RPG quicker than SQL to update ALL records

Tags:
AS/400
iSeries
RPG
SQL
We update a few fields in every record of a file with over 13 million records, masking a copy of production data in a performance test environment. The job runs on a two-processor 595 box under IBM i 7.1 and takes about 36 hours. If I wrote an RPG program, I would open the file non-keyed to process it sequentially and, I assume, it would update all the records in each I/O page, making the I/O more efficient. Can I make SQL do the same, or do I need to write an RPG program to see if it runs quicker? The record length is 980 and it's a DDS file. Before anyone suggests converting to SQL: I can't, as it's a package and I am limited in what I can do.
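
For comparison, the pure SQL version of the job is a single unqualified UPDATE. A minimal sketch, with hypothetical library, file, and field names:

    -- Hypothetical names throughout (MYLIB, MYFILE, CUSTNAME, CUSTSSN).
    -- With no WHERE clause every row is updated in one statement, and
    -- the database engine does its own blocking and paging.
    UPDATE MYLIB.MYFILE
       SET CUSTNAME = 'XXXXXXXXXX',
           CUSTSSN  = '000000000'

If the file is journaled, running the statement with commitment control off (for example, RUNSQLSTM with COMMIT(*NONE)) avoids building one enormous commit cycle.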


Software/Hardware used:
two processor 595 box under OS7.1

Answer Wiki

Check out the post on this site from earlier. It addresses your question.

Discuss This Question: 11 Replies

 
  • BigKat
    Would need to see the update SQL to tell whether something can be coded to perform better.
  • Splat
    Use RPG & the cycle.

    Define the file as update primary, set a record-identifying indicator, condition the calculations on that indicator, and condition the output specifications on the same indicator.

     * The file is update primary (U/P), externally described (E); with no
     * record address type K it is read in arrival sequence, which suits a
     * straight sequential pass.
    FFile      Upe  e             Disk

     * Record identifying indicator 01 comes on for every record read.
    IRecord        01

     * Calculations, conditioned on 01, change the fields.
    C   01              Eval      field1 = 'A'
    C   01              Eval      field2 = 'B'

     * Detail output, conditioned on 01, rewrites the updated record.
    ORecord    D    01
    O                       field1
    O                       field2

  • GregManzo
    Wow! RPG II code. I thought I was the last one left who used the cycle. Regardless of cycle or explicit reads, the main speed advantage comes from I/O blocking - either the default blocking or larger blocks forced with OVRDBF NBRRCDS() (see the sketch below).
    From the bench tests I've seen, SQL is usually faster only if you can re-create the files as SQL tables; otherwise you are better off with RPG and blocking. Anything you do in memory is insignificant next to the number of times you have to go out to disk.
    How often do you need to run this 36-hour update? If you are trying to make a copy of production data, it may be faster to update only the records that have changed. We wrote something to read the journal of a file in library A and apply those changes to a file in library B. Depending on I/O volumes and time of day it was typically 60 to 100 times faster. The catch is that the file needs a unique key so you can be sure you are updating the right record.
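
    For reference, the override GregManzo mentions might look like the sketch below (hypothetical file and program names). One caveat: record blocking generally applies to input-only or output-only opens, so an update-capable open like the cycle program above may not see the full benefit.

        /* Hypothetical names (MYFILE, MASKPGM). NBRRCDS is only a */
        /* request; the system caps the actual block size.         */
        OVRDBF     FILE(MYFILE) NBRRCDS(1000)
        CALL       PGM(MASKPGM)
        DLTOVR     FILE(MYFILE)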
  • azohawk

    My first significant exposure to SQL was a program that generated 3 (minimally meaningful) reports and took 12 minutes to run every day (there was a lot of data). A lot of business decisions were based on these reports. We entered the fourth quarter of the year and found out these reports were not giving us enough information, as we had barely enough business that quarter to keep the lights on. I was asked to redo the report.

    I rewrote the report to deposit information in time buckets (1-week-out projection, 2 weeks, 1 month, 2 months, quarterly, and the like). I used RPG to create the new report, using a lot of arrays and multiple-occurrence data structures. It ran in 90 seconds and produced a single, more meaningful report. The execs loved the new format.

    Other times I have found SQL to be faster. My point: which is faster? It depends.

    Greg has a good point: if this takes 36 hours, there might be a better way to do it. In addition to his suggestion, would a copy file command help?
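
    For reference, the copy command azohawk is alluding to might be a straight CPYF of the production member into the test library before masking - a sketch with hypothetical names:

        /* Hypothetical library/file names. MBROPT(*REPLACE) clears */
        /* and reloads the target member in one fast, blocked copy. */
        CPYF       FROMFILE(PRODLIB/MYFILE) TOFILE(TESTLIB/MYFILE) +
                     MBROPT(*REPLACE)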

  • Splat
    GregManzo, not quite RPG II (note the external definition & the EVAL), but it does hark back. Still one of the fastest ways to run through masses of data, though.
  • WoodEngineer
    At a user group meeting an IBM expert told us that SQL (OPNQRYF at the time) would always be faster than an RPG program. She said even if the RPG was reading a file as input primary and selecting a record based on testing one field, the RPG would be slower.

    There may be something else going on in the update program. 36 hours to update 13 million records is much longer than one would expect; with the horsepower you have, I would expect something less than 36 minutes.

    Is the program doing a whole lot of additional I/O with other files to determine what to update in the main file?

    While the program is running, check whether the process is incurring a lot of "lock waits" when trying to get a lock on a record.

    The default record wait is often set at 60 seconds, sometimes at 120 seconds. It would not take many lock waits to lengthen the process significantly.
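
    One way to check while the job is running is the work-with-object-locks command - a sketch with hypothetical library and file names:

        /* Shows which jobs hold, or are waiting for, locks on the */
        /* file's members.                                         */
        WRKOBJLCK  OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) MBR(*ALL)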
  • Splat
    WoodEngineer, more than once I've had an IBM expert tell me something that was contradicted by experience.
  • ToddN2000
    I agree that 36 hours is too long to run; there must be a performance problem. Without seeing the original code structure it's hard to say what to try. As an example, we had a user querying a DB2 database whose process took around 40 minutes to complete. I created a new LF over the data based on his key needs, and the job now runs in about 20 seconds. I think the keys and/or indexes of the data tables are the biggest determining factor.
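
    For reference, the SQL equivalent of the keyed logical file ToddN2000 describes is an index - a sketch with hypothetical library, table, and column names:

        -- Hypothetical names. An SQL index gives the query optimizer
        -- the same access path a keyed logical file would.
        CREATE INDEX MYLIB.MYFILE_IX1
            ON MYLIB.MYFILE (CUSTNO, ORDDATE)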
  • azohawk
    To follow up on ToddN2000's comment, look at the index advisor in Navigator (i Navigator, IBM Navigator for i, etc.)
  • WoodEngineer
    Agreed, Splat. Not all IBM experts are created equal.
  • ToddN2000
    And when those so-called experts leave IBM they must go to work for the cell phone companies... Same carrier, two cities 10 miles apart, and two different stories... I guess getting a third opinion may be a good idea.
