We probably need a little more info to fully answer this question, but here are some notes/suggestions:
1. If you are thinking of querying recent records based on the “processed flag”, the flag should be an integer or date/time value that changes for every processed record, preferably in a non-null column. If you use some form of two-valued column (true/false, Y/N, 0/1), Oracle’s query optimizer will quickly decide an index on it isn’t selective enough to be useful, and your query for recent records will become a full table scan. Simple solutions:
a) Make it a date/time field, set to null for new records and then set to the date/time processed. This works, but I prefer my indexed columns not to contain null values (Oracle b-tree indexes do not store entirely-null keys, so a “WHERE … IS NULL” query cannot use the index), so I would probably use:
b) Make it an integer field, initially set to zero, and then set to a non-zero sequence number when it is processed.
(n.b. Both of these have advantages and disadvantages. You could also use the date/time field and set the “new” records to a fixed old date, like “1/1/1900”, to avoid the null problem. I am not thrilled with that, because the query to get the new records then has to be hard-coded for an exact match with this arbitrary date.)
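To make the difference concrete, here is a sketch of the queries under each option; the table, column, and sequence names (my_queue, processed_flag, my_seq, etc.) are illustrative, not from the original question:

```sql
-- Option (a): nullable date column. "IS NULL" cannot use a normal
-- b-tree index, so this tends toward a full table scan.
select * from my_queue where processed_date is null;

-- Option (b): non-null integer flag, default 0 for new records.
-- An equality predicate on the indexed column stays index-friendly.
select * from my_queue where processed_flag = 0;

-- When a record is processed, stamp it with a sequence value:
update my_queue set processed_flag = my_seq.nextval where id = :id;
```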
2. You said “desirable to process them in the order” they were inserted. Is this merely desirable, or a requirement?
If you <b>have </b>to process the records in the order they were inserted, then you will probably have to implement the (non-lazy) approach of a table reader that dispatches the records in order to the other processes. You would need to do this if one of the records in a batch could be an update to an earlier record in the same time window – running chunks of records in parallel could result in the “update” being run prior to the “original” data.
If it is merely that you want to have them run more or less in the time order that they were received, that is much simpler. Include a field containing a 0/1 flag to indicate the record has been “taken”, and a date/time field containing the date added to the table (note that this would need to be separate from the “date processed” field in 1 above).
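Putting the fields from points 1 and 2 together, the table might look something like this; all names are hypothetical:

```sql
-- Illustrative queue-table layout combining the fields discussed above.
create table my_queue (
  id             number        primary key,
  payload        varchar2(4000),
  processed_flag number        default 0 not null,  -- sequence value once processed (point 1)
  taken_flag     number        default 0 not null,  -- 1 once a worker has claimed the row
  date_inserted  date          default sysdate not null,  -- drives the processing order
  processed_date date                                -- optional audit column, separate from date_inserted
);
```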
cursor get_rec is
  select <primary key fields> from <table>
   where Processed_Flag = 0 and Taken_Flag = 0
   order by Date_Inserted;

open get_rec;
fetch get_rec into <fields>;
if get_rec%found then
  update <table> set Taken_Flag = 1
   where <primary key fields = cursor fields>
     and Processed_Flag = 0 and Taken_Flag = 0;
  -- If SQL%ROWCOUNT = 1, this process now “owns” the record; a competing
  -- thread’s update will match zero rows and it should skip the record.
  if SQL%ROWCOUNT = 1 then
    <process record here>
    update <table> set Processed_Flag = MySequence.NextVal
     where <primary key fields = cursor fields> and Processed_Flag = 0;
  end if;
end if;
close get_rec;
Obviously, there is a lot missing from the above – error handling and a way to start/stop the loop, for example. But this could be a framework for each of the individual processes on the different systems.
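One way to sketch that missing loop, as a polling wrapper around the claim-and-process step above; the stop_requested flag and logging step are hypothetical placeholders, and the sleep call assumes DBMS_LOCK.SLEEP is available (in 18c and later, DBMS_SESSION.SLEEP is the documented equivalent):

```sql
loop
  exit when stop_requested;          -- e.g. a flag read from a control table
  begin
    <claim and process one record as above>
  exception
    when others then
      <log the error>;               -- don't let one bad record kill the loop
  end;
  dbms_lock.sleep(1);                -- pause between polls
end loop;
```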
Note that this will “tend” to fetch records in order, but there is no guarantee.
I use a separate Processed_Flag (non-zero after processing is done) and Taken_Flag (selected for processing) to allow for recovery should a process fail and leave a record in an intermediate state. If you use the same flag for both purposes, you need to beware that a process failure after a record is “taken” could be mishandled as though the processing had completed. In this case, simply resetting the Taken_Flag to zero would enable re-processing of the “lost” record(s). You could also set the Taken_Flag to the Oracle process SID, which could help diagnosis if/when problems occur.
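A sketch of those last two ideas, again with hypothetical names; note that if you stamp the SID into Taken_Flag, the claiming update and its “unclaimed” predicate must test for zero rather than for the literal 1:

```sql
-- Claim the row with the session id instead of a bare 1 (diagnostic aid):
update my_queue
   set taken_flag = sys_context('userenv', 'sid')
 where id = :id and processed_flag = 0 and taken_flag = 0;

-- Recovery: release rows that were claimed but never processed,
-- e.g. after a worker crash. The one-hour cutoff is an arbitrary example.
update my_queue
   set taken_flag = 0
 where taken_flag <> 0
   and processed_flag = 0
   and date_inserted < sysdate - 1/24;
```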