Hi,
My requirement is to process files containing a huge amount of data and update a database table (standard or Z-table). Each file has more than 5 million records.
I think parallel processing is one of the best approaches, but my question is how to handle 5 million records at the internal-table level. I want to split the data, at the internal-table or file level, into chunks of 50-75k records each and process those chunks in parallel.
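To make the split-and-dispatch idea concrete, here is a rough sketch of what I have in mind, using asynchronous RFC (`CALL FUNCTION ... STARTING NEW TASK`). `Z_UPDATE_CHUNK` is a hypothetical RFC-enabled function module that would perform the actual database update for one chunk, and `ztab` stands for the target table type; the task bookkeeping and the final `WAIT` are omitted:

```abap
CONSTANTS: c_chunk_size TYPE i VALUE 50000.

DATA: lt_data  TYPE STANDARD TABLE OF ztab,  " full data read from the file
      lt_chunk TYPE STANDARD TABLE OF ztab,  " one slice of c_chunk_size rows
      lv_lines TYPE i,
      lv_from  TYPE i VALUE 1,
      lv_to    TYPE i,
      lv_task  TYPE string.

lv_lines = lines( lt_data ).

WHILE lv_from <= lv_lines.
  lv_to = lv_from + c_chunk_size - 1.
  IF lv_to > lv_lines.
    lv_to = lv_lines.
  ENDIF.

  " Copy the current slice into its own table for the RFC call
  CLEAR lt_chunk.
  APPEND LINES OF lt_data FROM lv_from TO lv_to TO lt_chunk.

  lv_task = lv_from.  " unique task name per chunk

  " Z_UPDATE_CHUNK is a hypothetical RFC-enabled function module
  " that does the UPDATE/MODIFY on the target table for one chunk.
  CALL FUNCTION 'Z_UPDATE_CHUNK'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    PERFORMING on_end_of_task ON END OF TASK
    TABLES
      it_chunk = lt_chunk.

  lv_from = lv_to + 1.
ENDWHILE.

" In the real program I would also count open tasks, handle
" RESOURCE_FAILURE/COMMUNICATION_FAILURE, and WAIT UNTIL all
" callbacks have returned before ending the report.
```

Is this roughly the right pattern, or is there a better way (e.g. splitting the file itself and scheduling background jobs per portion)?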
Please suggest the best approach.