Hi
Apart from the parallel processing framework, is there any other workaround to process this many entries?
Scenario is as follows:
1. Retrieve all data from a master table for a certain criterion (close to 2 million entries selected).
2. Retrieve further data for each of these 2 million entries from another table (around 8 million entries now).
3. Summarise (basically COLLECT) the data for all 8 million entries.
I have tried using SELECT ... ENDSELECT with PACKAGE SIZE to reduce the load and avoid memory issues, but this is still a sequential approach rather than parallel processing.
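For reference, this is roughly what I mean (a minimal sketch; `zmaster`, `zdetail`, and the field names are placeholders for my actual tables):

```abap
DATA: lt_master  TYPE STANDARD TABLE OF zmaster,
      lt_detail  TYPE STANDARD TABLE OF zdetail,
      lt_summary TYPE STANDARD TABLE OF zdetail.

FIELD-SYMBOLS <ls_detail> TYPE zdetail.

* Read the master table in packages to keep memory usage bounded
SELECT * FROM zmaster
  INTO TABLE lt_master
  PACKAGE SIZE 50000
  WHERE status = 'A'.

  IF lt_master IS NOT INITIAL.
    " Fetch the matching detail rows for this package only
    SELECT * FROM zdetail
      INTO TABLE lt_detail
      FOR ALL ENTRIES IN lt_master
      WHERE key_field = lt_master-key_field.

    " Summarise: COLLECT adds numeric fields of rows with equal keys
    LOOP AT lt_detail ASSIGNING <ls_detail>.
      COLLECT <ls_detail> INTO lt_summary.
    ENDLOOP.
  ENDIF.

ENDSELECT.
```

Each package is processed inside the SELECT ... ENDSELECT loop, so memory stays flat, but the packages still run one after another.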
Any suggestions?
Thanks.
I suggest using OPEN CURSOR FOR SELECT with a join between your two tables.
If you are able to use some ranges to separate the entries into several blocks, you can then process each block in parallel.
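A rough sketch of the idea, assuming hypothetical tables `zmaster`/`zdetail` and an RFC-enabled function module `Z_SUMMARISE_BLOCK` that you would have to create yourself:

```abap
DATA: lv_cursor TYPE cursor,
      lt_block  TYPE STANDARD TABLE OF zjoined,  " structure of the join result
      lv_taskno TYPE i,
      lv_task   TYPE string.

* WITH HOLD keeps the cursor open across the implicit DB commit
* that each RFC call triggers in the calling work process
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT m~key_field d~amount
    FROM zmaster AS m
    INNER JOIN zdetail AS d ON d~key_field = m~key_field
    WHERE m~status = 'A'.

DO.
  FETCH NEXT CURSOR lv_cursor
    INTO CORRESPONDING FIELDS OF TABLE lt_block
    PACKAGE SIZE 100000.
  IF sy-subrc <> 0.
    EXIT.  " no more data
  ENDIF.

  lv_taskno = lv_taskno + 1.
  lv_task = |BLOCK{ lv_taskno }|.

  " Hand the block to a parallel task (asynchronous RFC);
  " collect_result would gather the summarised data via RECEIVE RESULTS
  CALL FUNCTION 'Z_SUMMARISE_BLOCK'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    PERFORMING collect_result ON END OF TASK
    TABLES
      it_block = lt_block.
ENDDO.

CLOSE CURSOR lv_cursor.

* Wait until all tasks have returned before using the combined summary
WAIT UNTIL gv_received = lv_taskno.
```

In practice you would also handle RESOURCE_FAILURE / SYSTEM_FAILURE exceptions on the aRFC call and retry, so you don't lose a block when no free dialog work process is available.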