In our BW 7.0 system, a full load runs each day from DSO X to DSO Y, in which master data from six characteristics of DSO X is read into about 15 fields of DSO Y. DSO X contains about 2 million records, all of which are transferred each day. The master data tables each contain between 2 and 4 million records. Before this load starts, DSO Y is emptied. DSO Y is write-optimized.
At first, we designed this with the standard "master data reads" in the transformation, but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned it so that all master data attributes are filled in the end routine, after filling internal tables with the master data values corresponding to the data package:
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
  FROM /bi0/pucpremise
  INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
  FOR ALL ENTRIES IN RESULT_PACKAGE
  WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
And in the loop over the data package, we write something like:
LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
  READ TABLE lt_0ucpremise INTO ls_0ucpremise
       WITH KEY ucpremise = <fs_rp>-ucpremise
       BINARY SEARCH.
  IF sy-subrc EQ 0.
    <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
    <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
  ENDIF.
* all other MD reads
ENDLOOP.
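For completeness, here is the same per-table pattern written out as one end-routine fragment, with three details that matter in practice and are easy to lose in a condensed post: FOR ALL ENTRIES should be guarded against an empty RESULT_PACKAGE (with an empty driver table the WHERE condition is effectively dropped and the whole table is read), the lookup table must be sorted before BINARY SEARCH is reliable, and on /BI0/P tables one normally restricts OBJVERS to the active version 'A'. This is my sketch, not the original code; declarations of lt_0ucpremise and ls_0ucpremise are assumed:

```abap
* FOR ALL ENTRIES with an empty driver table would read the whole table
IF RESULT_PACKAGE IS NOT INITIAL.
  SELECT ucpremise ucpremisty ucdele_ind
    FROM /bi0/pucpremise
    INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
    FOR ALL ENTRIES IN RESULT_PACKAGE
    WHERE ucpremise EQ RESULT_PACKAGE-ucpremise
      AND objvers   EQ 'A'.          " active master data version only
ENDIF.

SORT lt_0ucpremise BY ucpremise.     " required for BINARY SEARCH

LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
  READ TABLE lt_0ucpremise INTO ls_0ucpremise
       WITH KEY ucpremise = <fs_rp>-ucpremise
       BINARY SEARCH.
  IF sy-subrc EQ 0.
    <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
    <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
  ENDIF.
ENDLOOP.
```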
The block above is repeated for every master data table we need to read from. This method is considerably faster (1.5 hours), but we want to make it faster still. We noticed that filling the internal tables with master data still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but load all master data into the internal tables without filtering on the data package, and we do this only once:
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
  FROM /bi0/pucpremise
  INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
So when the first data package starts, it fills all master data values, about 95% of which we would need anyway. So that the following data packages can reuse the same tables and don't need to fill them again, we placed the definitions of the internal tables in the global part of the end routine. In the global part we also write:
DATA: lv_data_loaded TYPE C LENGTH 1.
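The buffer table declarations that sit next to this flag in the global part might look as follows. This is a sketch with assumed names; the line type is derived from the three 0UCPREMISE fields selected above, and there would be one buffer table per master data table read:

```abap
* Global part of the end routine: these tables keep their contents
* across data packages executed in the same work process.
TYPES: BEGIN OF ty_s_0ucpremise,
         ucpremise  TYPE /bi0/oiucpremise,
         ucpremisty TYPE /bi0/oiucpremisty,
         ucdele_ind TYPE /bi0/oiucdele_ind,
       END OF ty_s_0ucpremise.

DATA: lt_0ucpremise TYPE STANDARD TABLE OF ty_s_0ucpremise,
      ls_0ucpremise TYPE ty_s_0ucpremise.
```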
And in the method we write:
IF lv_data_loaded IS INITIAL.
  lv_data_loaded = 'X'.              " loading in progress
* load all internal tables
  lv_data_loaded = 'Y'.              " loading finished
ENDIF.

WHILE lv_data_loaded NE 'Y'.
  CALL FUNCTION 'ENQUEUE_SLEEP'
    EXPORTING
      seconds = 1.
ENDWHILE.

LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
* assign all data
ENDLOOP.
This makes sure that another data package that has already started sleeps until the first data package has finished filling the internal tables.
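One variant worth considering for this full-buffer setup (my sketch, not the code from the post): since each buffer now holds the complete master data table, declaring it as a hashed table with a unique key gives constant-time reads without SORT or BINARY SEARCH. The type and field names mirror the 0UCPREMISE example; the unique key assumes only the active OBJVERS 'A' rows are selected:

```abap
TYPES: BEGIN OF ty_s_0ucpremise,
         ucpremise  TYPE /bi0/oiucpremise,
         ucpremisty TYPE /bi0/oiucpremisty,
         ucdele_ind TYPE /bi0/oiucdele_ind,
       END OF ty_s_0ucpremise.

DATA: lt_0ucpremise TYPE HASHED TABLE OF ty_s_0ucpremise
                    WITH UNIQUE KEY ucpremise,
      ls_0ucpremise TYPE ty_s_0ucpremise.

* Lookup inside the LOOP over RESULT_PACKAGE:
READ TABLE lt_0ucpremise INTO ls_0ucpremise
     WITH TABLE KEY ucpremise = <fs_rp>-ucpremise.
```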
Well, this all seems to work: it now takes 10 minutes to load everything into DSO Y. But I'm wondering whether I'm missing anything. The system seems to handle loading all these records into internal tables just fine. Any improvements or critical remarks are very welcome.