November 9th, 2011, 03:20 AM
Updating millions of rows in a batch
Sooo, I have a table which has plenty of rows. In production, I would estimate anywhere from 30 million to 300 million. I need to update one column (flag) in all the rows created before a certain date.
Now just saying:
UPDATE MyTable SET flag = 3 WHERE created < to_date('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
does not seem like a good idea - the commit-buffer would become too big.
I will write a PL/SQL script for this. The question is, whether I should:
a) Update each row separately, and commit after every 10000 rows ( WHERE RowId = [rowId] )
b) Update 10000 rows at a time using a range of rowIds ( WHERE rowId > [some_row_id] AND RowId < [some_row_id_2] )
In the latter case the some_row_id values would naturally be fetched first. The rowIds come from a sequence. So which one would be more effective?
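To make option (b) concrete, here is a rough PL/SQL sketch of what I mean - table and column names are just placeholders, and it assumes the id column is populated from the sequence with no huge gaps:

```sql
DECLARE
  v_min   MyTable.id%TYPE;
  v_max   MyTable.id%TYPE;
  c_batch CONSTANT PLS_INTEGER := 10000;
  c_cutoff CONSTANT DATE := TO_DATE('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
BEGIN
  -- Find the id range of the rows that need the flag
  SELECT MIN(id), MAX(id) INTO v_min, v_max
    FROM MyTable
   WHERE created < c_cutoff;

  -- Walk the range in fixed-size id windows, committing after each batch
  FOR i IN 0 .. TRUNC((v_max - v_min) / c_batch) LOOP
    UPDATE MyTable
       SET flag = 3
     WHERE id BETWEEN v_min + i * c_batch
                  AND v_min + (i + 1) * c_batch - 1
       AND created < c_cutoff;
    COMMIT;
  END LOOP;
END;
/
```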
I am not too familiar with PL/SQL or Oracle for that matter.
Gabba Gabba Hey
November 9th, 2011, 03:35 AM
What exactly do you think a "commit buffer" is? There is no such thing in Oracle.
Are you referring to the "UNDO Tablespace"?
Oracle performs best when you commit your transaction when the business logic requires it. Committing in chunks slows down your process and consumes more resources.
If your UNDO is too small to handle that UPDATE, then you have to adjust your UNDO size.
You should not adjust your transaction handling.
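To get a feel for whether the undo tablespace could absorb one big transaction, you can compare its size against recent undo consumption. A sketch, assuming the default UNDOTBS1 name and access to the DBA/V$ views:

```sql
-- Total size of the undo tablespace
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'UNDOTBS1'
 GROUP BY tablespace_name;

-- Peak undo block consumption per 10-minute interval recently
SELECT MAX(undoblks) AS max_undo_blocks
  FROM v$undostat;
```

If the projected undo for the UPDATE clearly exceeds what the tablespace can hold, grow the tablespace rather than chopping the transaction up.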
See what Tom Kyte says on this topic:
Comments on this post
November 9th, 2011, 03:55 AM
Thanks for the link. In my situation I could commit after every update; integrity is not an issue - only performance. There has been a situation before where a script took a very long time to run, and it failed because the data to be updated had become too old for the commit to go through.
Just to clarify, integrity is not an issue because all records before a certain date need to be updated, so there will not be a need for a rollback - unless there is a grave error in the script, which is why it has to be tested properly.
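Since integrity is not a concern here, another commonly seen pattern is to repeat a bounded UPDATE until no rows are left, using the flag itself as the "already done" marker. This is only a sketch and only works if no target rows already have flag = 3 (an assumption), and note that frequent commits like this are exactly what the previous reply warns against:

```sql
DECLARE
  c_cutoff CONSTANT DATE := TO_DATE('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
BEGIN
  LOOP
    -- Update at most 10000 not-yet-flagged rows per pass
    UPDATE MyTable
       SET flag = 3
     WHERE flag <> 3
       AND created < c_cutoff
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;   -- stop when a pass finds nothing left
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```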
Gabba Gabba Hey