How many total rows are in the table? If the number you want to delete is a large fraction of the total, you might copy the rows you want to keep into a new table, truncate the old table, and then copy those records back.
Otherwise, you will need to make sure your rollback segments are large enough to hold the undo for a transaction of that size, or you could place the database in NOARCHIVELOG mode to reduce the redo overhead.
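A quick way to gauge how much undo/rollback space you have to work with is to query the undo tablespace size; this sketch assumes the undo tablespace follows the common UNDO* naming convention:

```sql
-- Size of the undo tablespace(s); assumes default UNDO* naming.
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM   dba_data_files
WHERE  tablespace_name LIKE 'UNDO%'
GROUP  BY tablespace_name;
```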
[kccrosser] If you are trying to empty the entire table, just use “truncate table <tablename>”. If you are removing nearly all the rows, but need to keep a “few” (as in probably < 2,000,000), then the above answer is fastest/easiest: copy the rows you want to keep into a new table, truncate the old table, add those rows back, and drop the temporary table.
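The copy/truncate/copy-back approach can be sketched as follows; the table name, the keep condition, and the staging table name are placeholders, not from the original post:

```sql
-- Keep condition and names (big_table, keep_rows, status) are assumed.
CREATE TABLE keep_rows AS
  SELECT * FROM big_table WHERE status = 'ACTIVE';

TRUNCATE TABLE big_table;  -- fast: deallocates storage, minimal undo/redo

INSERT /*+ APPEND */ INTO big_table  -- direct-path insert to minimize undo
  SELECT * FROM keep_rows;
COMMIT;

DROP TABLE keep_rows;
```

Note that TRUNCATE will fail if other tables have enabled foreign keys referencing this one, which is exactly the situation the next answer addresses.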
If, however, you need to keep a lot of rows, and/or you have referential integrity requirements (i.e., other tables have foreign key links to this table), then the best method is to write a simple script to delete the rows in large “chunks”.
Deleting row by row is incredibly inefficient if you commit after each delete. On the other hand, deleting 12 million rows in a single transaction will probably exhaust your system’s rollback segments and/or block all other transactions due to resource contention.
I would probably use the “bulk collect” to get the keys of the records to kill, then kill them in “chunks” of around 10000 records each (do a commit after every 10000 records). If your system isn’t that large, you might need to commit after every 5000.
DECLARE
  ci_limit CONSTANT PLS_INTEGER := 10000;
  CURSOR c_keys IS SELECT <keycol> FROM <my_table> WHERE <condition>;
  TYPE typetemp_tbl IS TABLE OF <my_table>.<keycol>%TYPE INDEX BY PLS_INTEGER;
  temp_tbl typetemp_tbl;
BEGIN
  OPEN c_keys;
  LOOP
    FETCH c_keys BULK COLLECT INTO temp_tbl LIMIT ci_limit;
    FORALL indx IN 1 .. temp_tbl.COUNT
      DELETE FROM <my_table> WHERE <keycol> = temp_tbl(indx);
    COMMIT;  -- commit each chunk so rollback usage stays bounded
    EXIT WHEN temp_tbl.COUNT < ci_limit;
  END LOOP;
  CLOSE c_keys;
END;
If deletion happens on a regular basis, e.g. daily, then partition the table on that interval (e.g. one partition per day). Deletion then becomes trivial: just drop the old partition.
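A sketch of that partitioning scheme, assuming Oracle 11g+ for interval partitioning; the table name and DATE column are invented for illustration:

```sql
-- Range-partition by day; daily_log and log_date are assumed names.
CREATE TABLE daily_log (
  log_date  DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (log_date)
  INTERVAL (NUMTODSINTERVAL(1, 'DAY'))  -- auto-create one partition per day
  (PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'));

-- Removing a day of data is a metadata operation, not a 12M-row DELETE:
ALTER TABLE daily_log DROP PARTITION FOR (DATE '2024-01-05') UPDATE INDEXES;
```

The UPDATE INDEXES clause keeps global indexes usable during the drop, at the cost of extra maintenance work.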