It is kind of hard to say without more information, but a few thoughts:
> if you are using an interpreted language, it might be faster in a compiled language.
> better, more efficient coding – tighter loops, remove unneeded processing
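On the "tighter loops" point, one common source of unneeded processing is invariant work repeated on every pass. A minimal Python sketch (the record format and field names are made up for illustration):

```python
import re

# Hypothetical records: "name,STATUS"
records = [f"user{i},ACTIVE" for i in range(100_000)]

def slow(records):
    out = []
    for rec in records:
        # Anti-pattern: the pattern is rebuilt on every single iteration
        pattern = re.compile(r",ACTIVE$")
        if pattern.search(rec):
            out.append(rec.split(",")[0])
    return out

# Tighter loop: the invariant work is hoisted out and done once
ACTIVE_PATTERN = re.compile(r",ACTIVE$")

def fast(records):
    out = []
    for rec in records:
        if ACTIVE_PATTERN.search(rec):
            out.append(rec.split(",")[0])
    return out
```

Both produce the same output; the second just stops paying the compile cost once per record.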
> does your operating system allow checkpoints? You might consider taking checkpoints so that, if the job fails, you could restart it from the checkpoint rather than from the beginning.
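Even without OS-level checkpoint support, the same idea can be sketched by hand: after each record, persist how far you got, and on restart skip what is already done. A toy sketch, assuming a hypothetical checkpoint file name and an in-memory record list:

```python
import os

CHECKPOINT = "job.ckpt"  # hypothetical checkpoint file

def load_checkpoint():
    # Index of the last record already processed, or -1 on a fresh start
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return int(f.read().strip())
    return -1

def run(records, process):
    start = load_checkpoint() + 1      # resume after the last committed record
    for i in range(start, len(records)):
        process(records[i])
        with open(CHECKPOINT, "w") as f:
            f.write(str(i))            # commit progress after each record
    os.remove(CHECKPOINT)              # clean finish: next run starts fresh
```

A real job would checkpoint every N records rather than every record, since the extra write per record is itself unneeded processing.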
> ensure you are not re-reading the same records. (I have seen a program that read the first record and processed it; then read the first and second records and processed the second; then read the first, second, and third records and processed the third; and so on.)
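That re-reading pattern is worth counting out, because it turns a linear job into a quadratic one. A small Python sketch that tallies the reads under each approach (the file handling is illustrative, not any particular program):

```python
def reads_with_rereading(filename):
    # Anti-pattern from the anecdote: to process record n, the program
    # re-reads the file from the top.  Record n costs n reads, so N
    # records cost N*(N+1)/2 reads in total.
    reads = 0
    total = sum(1 for _ in open(filename))
    for target in range(total):
        with open(filename) as f:
            for i, _line in enumerate(f):
                reads += 1
                if i == target:
                    break            # "processed" record number target
    return reads

def reads_sequential(filename):
    # One sequential pass: N records cost exactly N reads.
    reads = 0
    with open(filename) as f:
        for _line in f:
            reads += 1
    return reads
```

For 100 records that is 5,050 reads versus 100; at a million records the gap is what makes a job that "should take minutes" run for days.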
> have you tested this with a hundred records? a thousand? 10,000? Does the run time scale in proportion to the number of records? I had one programmer who was “only reading,” but the more records the program read, the slower it got. Well, it turned out that he was also doing an INSERT into a DB2 table and later going back to SELECT from that table. As the table grew from zero rows to hundreds of thousands, the program slowed down. (The DB2 optimizer was counting on zero rows in that table.)
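The shape of that slowdown is easy to model. If the access path the optimizer chose amounts to a full scan (reasonable for the zero-row table it planned against), then scanning the table once per record after inserting into it grows quadratically. A toy sketch, with a Python list standing in for the DB2 table:

```python
def row_scans(n_records):
    # Each iteration INSERTs one row, then SELECTs against the table.
    # With no useful access path, the SELECT scans every row so far.
    table = []
    scans = 0
    for rec in range(n_records):
        table.append(rec)        # INSERT
        for _row in table:       # SELECT degenerates to a full scan
            scans += 1
    return scans
```

So 1,000 records cost 500,500 row scans and 10,000 records cost about 50 million, which is exactly the "the more it read, the slower it got" symptom: per-record time grows with the table, not with the input.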
Just a few random thoughts.