Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Inserting Millions of records-Please Help!

Former Member
0 Kudos

Hi All,

I have a scenario where I have to query MARA, filter out some articles, then query the WLK1 table (article/site combination) and insert the records into a custom (Z) table. The result may be millions of records.

Can anyone tell me an efficient way to insert a large number of records? This is urgent. Please help.

Warm Regards,

Sandeep Shenoy

1 ACCEPTED SOLUTION

Former Member
0 Kudos

This came up a week or so ago. Have a look.

Rob

6 REPLIES 6

former_member188685
Active Contributor
0 Kudos

Hi,

While selecting the records from MARA, use a packet size, then apply whatever filtering you need, use FOR ALL ENTRIES to select the data from WLK1, and then insert it into the Z table.

Also try to run this as a background job; that is a good idea since you have a lot of data.
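A rough sketch of this approach (the custom table name ZART_SITE and the MTART filter are placeholder assumptions; adjust them, and check the WLK1 key fields on your system):

```abap
* Sketch only: fetch MARA in packets, pull the matching WLK1 rows per
* packet, and insert each packet into the Z table.
DATA: lt_mara TYPE TABLE OF mara,
      lt_wlk1 TYPE TABLE OF wlk1,
      lt_ztab TYPE TABLE OF zart_site.

SELECT * FROM mara
         INTO TABLE lt_mara
         PACKAGE SIZE 2000
         WHERE mtart = 'HAWA'.                "example article filter

  IF NOT lt_mara[] IS INITIAL.
    SELECT * FROM wlk1
             INTO TABLE lt_wlk1
             FOR ALL ENTRIES IN lt_mara
             WHERE artnr = lt_mara-matnr.

*   Map lt_wlk1 into lt_ztab here, then insert the packet.
    INSERT zart_site FROM TABLE lt_ztab.
  ENDIF.
ENDSELECT.
```

Note that an explicit COMMIT WORK inside a SELECT ... ENDSELECT loop can invalidate the open database cursor; the OPEN CURSOR WITH HOLD variant shown later in this thread avoids that.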

Regards

vijay

0 Kudos

Thanks Vijay for the quick answer,

Does having millions of records in an internal table and then using INSERT statements on the Z table affect performance badly? If yes, what is the efficient way?

Thanks again!

Sandeep

0 Kudos

This is sample code I am using in one of my programs. You can try a similar approach and insert into the custom table with every loop pass.

I am considering 2000 records at a time; you can decide the number and code accordingly.

if not tb_bkpf[] is initial.
* Fetching the data from BSEG for each 2000 entries in BKPF to
* reduce the overhead of database extraction.
  clear l_lines.
  describe table tb_bkpf lines l_lines.
  if l_lines >= 1.
    clear: l_start, l_end.
    do.
      l_start = l_end + 1.
      l_end   = l_end + 2000.
      if l_end > l_lines.
        l_end = l_lines.
      endif.
      append lines of tb_bkpf from l_start to l_end to tb_bkpf_temp.
*     Populating tb_bseg_tmp in the order of the database table
      select bukrs belnr gjahr buzei shkzg dmbtr hkont matnr werks
        from bseg
        appending table tb_bseg_tmp
        for all entries in tb_bkpf_temp
        where bukrs = tb_bkpf_temp-bukrs and
              belnr = tb_bkpf_temp-belnr and
              gjahr = tb_bkpf_temp-gjahr and
              hkont in s_hkont.
      refresh tb_bkpf_temp.
      if l_end >= l_lines.
        exit.
      endif.
    enddo.
  endif.
endif.

Former Member
0 Kudos

This came up a week or so ago. Have a look.

Rob

0 Kudos

You have to keep two things in mind:

For efficiency, always do the inserts in key sequence order.

If you are inserting a lot of records (and it seems like it here), you will be in danger of exceeding the rollback area. The way around this is to insert a number of records at a time and then do an explicit commit. That's why I suggested doing it this way:


DATA: BEGIN OF ztab_int OCCURS 0.
        INCLUDE STRUCTURE ztab.
DATA: END   OF ztab_int.
 
SELECT * FROM ztab INTO TABLE ztab_int
  PACKAGE SIZE 5000.
 
  INSERT z_tab1  FROM TABLE ztab_int.
  COMMIT WORK.
 
ENDSELECT.

The package size you should use depends on the structure size of the table you are inserting into and the size of the rollback area. Your DBAs should be able to help you with this.

You cannot simply restart this program if it fails. You either have to empty the z table first or skip the records you have already inserted.
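If emptying the Z table first is not an option, one way to sketch the "skip already inserted" restart is to let the database discard duplicate rows (table names taken from the example above):

```abap
* Sketch: on a restart, rows already inserted carry the same key, so
* let INSERT skip them instead of aborting the whole run.
INSERT z_tab1 FROM TABLE ztab_int ACCEPTING DUPLICATE KEYS.
IF sy-subrc <> 0.
* sy-subrc = 4: duplicate rows were discarded, the rest were inserted.
ENDIF.
COMMIT WORK.
```

This trades the restart bookkeeping for a slightly slower insert, since each packet is checked against the existing keys.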

Rob

ferry_lianto
Active Contributor
0 Kudos

Hi Sandeep,

Please use the OPEN CURSOR, FETCH, and CLOSE CURSOR statements, and make sure you specify the package size.

Here is an example code.

PACKAGE_SIZE = P_SIZE.

OPEN CURSOR WITH HOLD CURS FOR
  SELECT * FROM MARA
           WHERE MATNR <> SPACE.

DO.
  FETCH NEXT CURSOR CURS
        INTO TABLE I_MARA
        PACKAGE SIZE PACKAGE_SIZE.
  IF SY-SUBRC <> 0.
    EXIT.
  ENDIF.
* Process I_MARA here (e.g. insert into the Z table and COMMIT WORK);
* WITH HOLD keeps the cursor valid across the commit.
ENDDO.

CLOSE CURSOR CURS.

Hope this will help.

Regards,

Ferry Lianto