
exporting a big table is slow

Former Member
0 Kudos

We are on SAP ERP R/3 4.7 110 (Oracle 9, Solaris 9) and we are doing an R3load-based migration, so we are exporting the source system. At first the table export speed is nearly 800MB/h, but it drops to about 100MB/h when it reaches a big table named BSIS, which is nearly 30GB in size. I calculate the speed by monitoring the size of the exported files. I had already run the DB update statistics job before we exported the DB; it doesn't seem to help.
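(The way I measure it is simply to check the size of the dump directory at intervals, e.g. with something like the line below, and divide the growth by the elapsed time; the directory path is only an example from my system.)

    du -sk /export_dump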

How can I accelerate the export speed for this big table? Thanks in advance.

Accepted Solutions (1)

markus_doehr2
Active Contributor
0 Kudos

> How can I accelerate the export speed for this big table? Thanks in advance.

You can split the table into various pieces so the table is exported in parallel with different processes:

Note 952514 - Using the table splitting feature
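The input for the table splitter is a plain text file with one <table>%<number of pieces> line per table (the same format that is used later in this thread); the table names and piece counts below are only an illustration, adjust them to your own table sizes:

    BSIS%8
    COEP%4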

Markus

Former Member
0 Kudos

hi, markus

The table splitting feature requires the 6.40 kernel, the migration monitor and the 6.20/6.40 Installation Master according to Note 952514. That means the following:

1. The kernel of the source system is 6.20. We would have to update it to 6.40, but that means a potential risk for a production system.

2. I have not used the migration monitor before. Should the tool be used only for importing, or for both importing and exporting?

3. I had planned to use the 4.7 110 installation master from the original CDs to export and import the DB. That installation master is for 4.7 110, while the 6.20/6.40 installation master is for 4.7 200. I don't know whether it is OK to use the 6.20/6.40 installation master to export a 4.7 110 system on Solaris 9 and import 4.7 110 on Windows 2003?

Another question: why does the big table slow down the export speed? Thanks.

BTW, I have been considering other methods like DB tuning for R3load, package splitting, unsorted export and so on. A lot of changes are required.

markus_doehr2
Active Contributor
0 Kudos

> 1. The kernel of the source system is 6.20. We would have to update it to 6.40, but that means a potential risk for a production system.

The potential risk is already there: kernel 6.20 has been out of support since 2005! That means you have been running an unsupported system with an unsupported kernel version since 2005. You may also have security issues that are no longer fixed.

There were a few very important fixes in kernel 6.40 for Unicode conversions that were not backported to the 6.20 kernel. So you will most likely run into issues if you use such old tools to do a system copy.

> 2. I have not used the migration monitor before. Should the tool be used only for importing, or for both importing and exporting?

You can use MigMon for both export and import; you can even run them in parallel.

> 3. I had planned to use the 4.7 110 installation master from the original CDs to export and import the DB. That installation master is for 4.7 110, while the 6.20/6.40 installation master is for 4.7 200. I don't know whether it is OK to use the 6.20/6.40 installation master to export a 4.7 110 system on Solaris 9 and import 4.7 110 on Windows 2003?

I would always use the latest CDs (not R3SETUP). I would also use the latest tools (R3load, R3ldctl, R3szchk, database interface library) to do the export.
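You can check which R3load you are actually calling, for example with:

    R3load -version

(run as the <sid>adm user; it prints the release and patch level of that executable).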

> Another question: why does the big table slow down the export speed? Thanks.

It's not slowing down; it's simply that a single big table, exported alone and not in parallel, moves less data per hour than several smaller tables being exported at the same time.

> BTW, I have been considering other methods like DB tuning for R3load, package splitting, unsorted export and so on. A lot of changes are required.

No matter what methods or combinations you use, do not use kernel 6.20 (or any 6.20 tools) to do the migration.

Markus

Former Member
0 Kudos

My current plan is to update the kernel from 6.20 to 6.40 first, then use features like unsorted unloading, table splitting, package splitting and DB parameter tuning for R3load. I'm doing a test on another SAP system.

Additional questions:

1. How can I estimate the size of the exported files?

The QAS DB is 50GB, the used size is 30GB, and the exported size is 6GB.

The PRD DB is 600GB and the used size is 570GB, so the exported size should be close to 570 * (6 / 30) = 114GB. Am I right?

2. How can I estimate the export time?

My idea is to extrapolate from the export time observed so far. For example, if the DB exported 10GB (exported file size) in one hour, then there would be (114 - 10) / 10 = 10.4 more hours to go. Am I right?
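In other words (using my QAS numbers for the ratio and the first hour of the test export for the throughput, so these are only rough assumptions):

    export ratio (QAS):    6GB / 30GB = 0.2
    PRD export size:       570GB * 0.2 = 114GB
    remaining export time: (114GB - 10GB) / 10GB per hour = 10.4 hours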

Thanks again for your great help!

markus_doehr2
Active Contributor
0 Kudos

> 1. How can I estimate the size of the exported files?

It's about 10 - 20 % of the database size

> 2. How can I estimate the export time?

> My idea is to extrapolate from the export time observed so far. For example, if the DB exported 10GB (exported file size) in one hour, then there would be (114 - 10) / 10 = 10.4 more hours to go. Am I right?

The export speed is dependent on:

- the number of parallel R3load processes

- the number of CPUs on the database server

- the speed of the underlying I/O subsystem

- the speed of the target disks where you put the export (e.g. NFS is slower than a local disk)

It's possible to export 800 GB - 1 TB per hour with proper configuration, if your hardware allows it; it all depends on the available hardware and the configuration.
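The degree of parallelism is controlled in the MigMon properties file for the export; a minimal sketch of the relevant lines (parameter names quoted from memory, so verify them against the MigMon documentation that ships with the tool):

    # export_monitor_cmd.properties (excerpt)
    exportDirs=/export_dump/ABAP
    installDir=/migmon/export
    jobNum=8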

I'd do a second test run using table splitting. Our export was done 6 times before we did the actual production run to find out the best configuration for our environment.

Markus

Former Member
0 Kudos

1. I ran the DB export test twice on one SAP system. The DB size is 53GB and the used size is 30GB.

The exported file size was 5.7GB when I exported with the 4.7 110 installation master (R3load 6.20); it was 4.3GB when I exported with the 6.20/6.40 installation master (R3load 6.40).

Maybe the R3load 6.40 compression rate is higher.

2. I plan to use table splitting and package splitting to accelerate the export.

Now I'm confused about table splitting, because I'm not sure the splitting took effect.

Take the export test as an example. I configured a table_splitting file with the line "COEP%4" to divide table COEP into 4 packages in the 'table splitting preparation' phase. In my view there should be four exported files related to COEP, but in fact there were only COEP.001, COEP.STR and COEP.TOC. Did the table splitting take effect, or did I miss some procedures?

By the way, I also used package splitting.

Thanks for your kind help.

markus_doehr2
Active Contributor
0 Kudos

> Take the export test as an example. I configured a table_splitting file with the line "COEP%4" to divide table COEP into 4 packages in the 'table splitting preparation' phase. In my view there should be four exported files related to COEP, but in fact there were only COEP.001, COEP.STR and COEP.TOC. Did the table splitting take effect, or did I miss some procedures?

Did you generate the .WHR files?

Markus

Former Member
0 Kudos

> Did you generate the .WHR files?

Yes, I generated the .WHR files in the directory /export_dump/ABAP/DATA after the table splitting finished. There are 4 .WHR files like COEP-1.WHR and one file named whr.txt.

Former Member
0 Kudos

Should I copy the WHR files from /export_dump/ABAP/DATA to /export_dump/DATA in order to make the table splitting take effect? I didn't do that step.

If I use the table splitting and package splitting features during the export, do I need to do some manual work during the import, like creating primary keys, or does the migration monitor handle all of that?

Former Member
0 Kudos

I copied the WHR files from the directory /<EXPORT>/ABAP/DATA to /<EXPORT>/DATA, and in the end the table splitting took effect!

The default directory where the WHR files are generated during the 'table splitting preparation' phase is /<EXPORT>/ABAP/DATA, while the default directory where the STA files are generated during the 'database abap content export' phase is /<EXPORT>/DATA. I defined the export dir as /<EXPORT> in the migration monitor settings. I believe that if there are no WHR files in /<EXPORT>/DATA, the table splitting is ignored, so I had to copy them to /<EXPORT>/DATA manually.
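Concretely, before starting the export phase I now just do something like the following, with /<EXPORT> standing for my export dump directory:

    cp /<EXPORT>/ABAP/DATA/*.WHR /<EXPORT>/DATA/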
