The problem is probably caused by the commits you are issuing every 200,000 rows: committing while the cursor is still open (a "fetch across commit") is the classic cause of this error in Oracle. As the number of transactions committed after the cursor was opened increases, so do the chances of hitting the error if you keep fetching from that cursor.
From the DBA side, increasing the number and/or size of the rollback segments (or, on releases with automatic undo management, the undo retention and undo tablespace size) might help.
Another solution is to fetch the complete cursor into a collection, close the cursor, and only then process the rows; however, that may not be an option when the amount of data being processed is this large.
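A minimal PL/SQL sketch of the fetch-then-close approach, assuming a hypothetical table big_table with id and processed columns (adapt the names to your schema). The key point is that the cursor is closed before the first COMMIT:

```sql
DECLARE
  -- Hypothetical table and columns, for illustration only
  CURSOR c_rows IS
    SELECT id FROM big_table WHERE processed = 'N';
  TYPE t_rows IS TABLE OF c_rows%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_rows;
  FETCH c_rows BULK COLLECT INTO l_rows;  -- read everything into memory first
  CLOSE c_rows;                           -- cursor closed before any commit

  FOR i IN 1 .. l_rows.COUNT LOOP
    UPDATE big_table SET processed = 'Y' WHERE id = l_rows(i).id;
    IF MOD(i, 200000) = 0 THEN
      COMMIT;  -- safe: no open cursor depends on read consistency anymore
    END IF;
  END LOOP;
  COMMIT;
END;
/
```

The trade-off is memory: the whole result set must fit in the session's PGA, which is exactly why this breaks down for very large volumes.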
In that case, the best option may be to modify the cursor so it returns fewer rows, letting you fetch them all and close the cursor before you process them and commit your changes. Then you open another cursor for the next batch, fetch, close it, process those rows, and so on.
The main goal is to avoid fetching from a cursor that was opened before some of your transactions were committed.
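The batched approach above could be sketched like this in PL/SQL, again against a hypothetical big_table keyed by a numeric id; each iteration opens, fully fetches, and implicitly closes its own small cursor before any work or commit happens:

```sql
DECLARE
  -- Hypothetical keyset-style batching: one id range per iteration
  l_min_id  big_table.id%TYPE := 0;
  l_max_id  big_table.id%TYPE;
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  l_ids t_ids;
BEGIN
  SELECT MAX(id) INTO l_max_id FROM big_table;

  WHILE l_min_id <= l_max_id LOOP
    -- Fetch one bounded slice; this SELECT opens and closes its own cursor
    SELECT id BULK COLLECT INTO l_ids
      FROM big_table
     WHERE id BETWEEN l_min_id AND l_min_id + 199999;

    FOR i IN 1 .. l_ids.COUNT LOOP
      UPDATE big_table SET processed = 'Y' WHERE id = l_ids(i);
    END LOOP;

    COMMIT;  -- no cursor is open when this runs
    l_min_id := l_min_id + 200000;
  END LOOP;
END;
/
```

Ranging on an indexed key keeps each slice cheap; each commit only ever happens between cursors, never across one.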