As I see it, you've basically got two options here.
1. Add a new column to the table, create a sequence, and use a cursor to update each row, setting the new column to the next sequence value.
2. Create a sequence and use a cursor to update each row, replacing the call_id value with the next sequence value. After the cursor operation finishes, change the primary key to cover only the call_id column.
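Option 2 could be sketched like this. Note that the sequence name (Your_Seq), the constraint name, and the ordering columns are my assumptions, and this is untested:

<pre>-- Hypothetical sketch of option 2 (Your_Seq and the ORDER BY columns are assumptions)
create sequence Your_Seq;

begin
  -- the ORDER BY here controls which row receives which new number
  for r in (select rowid rid from Your_Table order by cust_id, date) loop
    update Your_Table set call_id = Your_Seq.nextval where rowid = r.rid;
  end loop;
  commit;
end;
/

-- afterwards, rebuild the primary key on call_id alone
alter table Your_Table drop primary key;
alter table Your_Table add constraint Your_Table_pk primary key (call_id);</pre>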
Neither of these options will be very quick, given the number of rows you have to deal with.
The order of the numbers you end up with depends entirely on the order in which the cursor processes the rows.
In this case, it will probably be better to create a new table, insert the data into it (using collections and bulk operations), and rename the new table, rather than performing the updates in place.
Something like this:
<pre>create table Your_New_Table ( … );
exec Your_Insert_Procedure;
drop table Your_Table;
rename Your_New_Table to Your_Table;</pre>
where Your_Insert_Procedure is a procedure like the one below (I tested something similar on 10g, but on 8i it won't compile because of the bulk fetch into a table of records):
<pre>create or replace procedure Your_Insert_Procedure (p_table_size in number default 2000) as
  cursor Your_Cursor is select rownum call_id, date, cust_id
    from (select * from Your_Table order by cust_id, date);
  type Your_Table_Type is table of Your_Cursor%rowtype index by binary_integer;
  Your_Table Your_Table_Type;  -- collection; shadows nothing in the FROM clause
begin
  open Your_Cursor;
  loop
    fetch Your_Cursor bulk collect into Your_Table limit p_table_size;
    forall j in 1..Your_Table.count
      insert into Your_New_Table values Your_Table(j);
    exit when Your_Cursor%notfound;
  end loop;
  close Your_Cursor;
end;</pre>
Note that you will need enough free disk space to hold a temporary duplicate of the table.
You can experiment with different values of the p_table_size parameter to tune performance.
Bulk binding (BULK COLLECT and FORALL) typically performs many times faster than row-by-row processing.
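For comparison, a row-by-row version of the same copy might look like the sketch below (hypothetical, untested); the difference is that it pays one context switch between the PL/SQL and SQL engines per row, rather than one per batch:

<pre>-- Row-by-row version of the same copy: each insert is a separate
-- round trip to the SQL engine, which is what makes it slow
begin
  for r in (select rownum call_id, date, cust_id
              from (select * from Your_Table order by cust_id, date)) loop
    insert into Your_New_Table values (r.call_id, r.date, r.cust_id);
  end loop;
  commit;
end;</pre>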