I have to update two Z tables from an Excel file that contains 1.6 million rows. The tables in the database have about 30 million rows each.
I only have to update one or two columns out of 98.
The loading of the file is done and working well, BUT in order to use UPDATE dbtab FROM itab I first have to read the 1.6 million corresponding rows from the database. When I use FOR ALL ENTRIES IN, Open SQL generates roughly one SELECT per row, which is very, very bad. I also tried OPEN CURSOR, and reading only 5,000 rows took about 5 minutes, so that is very slow as well. I have checked all the keys and also sorted the tables.
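For clarity, this is roughly the package-wise variant I mean: read the DB rows for a bounded chunk of file keys with FOR ALL ENTRIES, copy over the one or two changed columns, and update each chunk with a single mass statement. All names here (ztab, lt_file, zkey, zcol1) are placeholders, not my real objects, and it is only a sketch:

```abap
CONSTANTS lc_pack TYPE i VALUE 10000.

DATA lt_pack TYPE STANDARD TABLE OF ztab.
DATA lt_db   TYPE STANDARD TABLE OF ztab.

DATA(lv_total) = lines( lt_file ).

LOOP AT lt_file INTO DATA(ls_file).
  APPEND ls_file TO lt_pack.

  " Process a full package, or the final partial one
  IF lines( lt_pack ) = lc_pack OR sy-tabix = lv_total.

    " One SELECT per package instead of one per row
    SELECT * FROM ztab
      FOR ALL ENTRIES IN @lt_pack
      WHERE zkey = @lt_pack-zkey
      INTO TABLE @lt_db.

    " Merge only the changed column(s) into the DB image
    LOOP AT lt_db ASSIGNING FIELD-SYMBOL(<ls_db>).
      READ TABLE lt_pack INTO DATA(ls_new)
           WITH KEY zkey = <ls_db>-zkey.
      IF sy-subrc = 0.
        <ls_db>-zcol1 = ls_new-zcol1.
      ENDIF.
    ENDLOOP.

    " Single mass UPDATE per package, then release the locks
    UPDATE ztab FROM TABLE lt_db.
    COMMIT WORK.
    CLEAR lt_pack.
  ENDIF.
ENDLOOP.
```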
What do you guys think? Should I write an UPDATE inside a LOOP, or use SELECT ... UP TO n ROWS ... FOR ALL ENTRIES and then UPDATE dbtab FROM itab?
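The LOOP-based alternative I have in mind would look roughly like this, with commits in packages so the locks and the rollback segment stay small (again, ztab and the field names are placeholders):

```abap
DATA lv_count TYPE i.

LOOP AT lt_file INTO DATA(ls_file).
  " Touch only the one or two changed columns, keyed by the primary key
  UPDATE ztab
     SET zcol1 = @ls_file-zcol1
   WHERE zkey  = @ls_file-zkey.

  lv_count = lv_count + 1.
  " Commit every 5,000 updates instead of once at the end
  IF lv_count MOD 5000 = 0.
    COMMIT WORK.
  ENDIF.
ENDLOOP.
COMMIT WORK.
```

This avoids reading the 30 million rows entirely, but it still fires one UPDATE per file row, so I am not sure it would be faster than the package-wise mass update.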
As this is a report that will only run once, I don't think parallelizing it is worth the effort.