The question doesn’t really make sense since they do two different things. A WRITE is going to create a new record, and an UPDATE is going to replace an existing record.
Generally, an UPDATE will be done against a keyed record. If the key is unique, then a WRITE will fail due to an attempt to create a duplicate key.
A WRITE will commonly cause a repositioning of the file pointer to end-of-file or possibly to the first (or perhaps nearest) deleted record space. An UPDATE should keep the file pointer at essentially the same location.
A WRITE will cause all fields to be transferred from memory to the I/O buffer. An UPDATE can be done against only selected fields.
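To make the distinction concrete, here is a minimal free-format RPG sketch. The file CUSTMAST, record format CUSTREC, and fields CUSTNO/CUSTNAME are all made-up names for illustration:

```
       // Hypothetical keyed file CUSTMAST, record format CUSTREC,
       // unique key CUSTNO (all names invented for this example)
       dcl-f CUSTMAST disk usage(*update:*output) keyed;

       // WRITE adds a new record; on a unique-keyed file it fails
       // with a duplicate-key error if CUSTNO already exists.
       CUSTNO = 1001;
       write CUSTREC;

       // UPDATE replaces a record that was just read (and locked)
       // by a CHAIN or READ, and can rewrite only selected fields.
       chain 1001 CUSTMAST;
       if %found(CUSTMAST);
         update CUSTREC;                        // all fields
         // update CUSTREC %fields(CUSTNAME);   // selected fields only
       endif;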
There are too many differences to make a useful comparison. The two combinations would rarely, if ever, be used for the same functions.
Here is a little more detail: the data from a third party can be placed in a staging file or loaded directly into the final destination file. Either way, the data will need to be "massaged". If the data goes into the staging file first, then the data is manipulated in various fashions during the read of the staging file and the write to the main file. If the data is loaded directly into the final destination file, we would read each record, perform the same data manipulation, and update the record instead. That is why I wanted to understand whether a read/write is faster than a read/update. These files will contain millions of records, and some of the files will be keyed unique. There are multiple files to be received whose data needs to be manipulated.
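The two approaches described above can be sketched as two separate RPG programs. All file and format names (STAGING/STAGEREC, TARGET/TARGREC) are hypothetical:

```
       // Sketch A: staging file -> final file (read / WRITE)
       dcl-f STAGING usage(*input);
       dcl-f TARGET  usage(*output);

       read STAGEREC;
       dow not %eof(STAGING);
         // ... massage the staged fields into the target fields ...
         write TARGREC;
         read STAGEREC;
       enddo;
       *inlr = *on;
```

```
       // Sketch B: data already loaded in the final file (read / UPDATE)
       dcl-f TARGET usage(*update);

       read TARGREC;
       dow not %eof(TARGET);
         // ... massage the fields in place ...
         update TARGREC;
         read TARGREC;
       enddo;
       *inlr = *on;
```

In sketch B, READ on an update-capable file locks each record until the UPDATE releases it, which is part of the extra cost being discussed here.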
I’ve been thinking this over and haven’t settled on a good answer.
Is this a migration effort or part of an ongoing process?
My opinion is to go with a staging file even if there would be an advantage to direct read/update (and I'm not convinced there is).
However, that might be influenced by knowing more about why this is being done.
This is an ongoing process with multiple files, and time is critical, especially on some of the larger files with millions of records. The goal is to convert one database with one set of files and fields to another database with different files and fields.
If this is a read from one file followed by a write and/or update to another, a read/write combination would be faster: a read/update in this scenario implies retrieving a record from the target file, which requires additional time. The downside of the read/write is the possibility of duplicated data in the target file.
The fastest method I know of to move data through an RPG program is as follows:
     FInput     IPE  E             DISK
     FOutput    O    E             DISK
     OOutputFmt D        01
If needed, and assuming the attributes are the same, the input fields can be renamed in the I specs to the name of the output field.
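For instance, assuming the input record format is InputFmt and its field OLDFLD must feed the output field NEWFLD (hypothetical names; column alignment is approximate), the I-spec rename would look something like:

```
     FInput     IPE  E             DISK
     IInputFmt
     I              OLDFLD                        NEWFLD
```

The program cycle then reads Input, the I specs map OLDFLD into NEWFLD, and the D output at indicator 01 writes the record with no explicit calculation specs at all.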