
client import very slow due to high import/export swaps

Former Member
0 Kudos

Dear SAP community,

We ran into an issue during a client import. The exported client data is about 14 GB, and the import has now been running for 50 hours. For roughly the first 14 hours the import went well, but then the number of import/export swaps started to increase dramatically (it is now around 1,300,000), which has slowed the import down enormously. My question is whether there is any way to reduce the import/export swaps while the import is still running (i.e. without cancelling the import and starting over)?

Thank you very much in advance for any answers.

Accepted Solutions (1)


Former Member
0 Kudos

Hi Tomas,

You should really think about a parallel import next time. Please check OSS Note 1127194 for details.

Cheers,
Denny

Former Member
0 Kudos

Hello Denny,

Thank you for your suggestion. It's definitely worth a try. I'll report back on how it went once the customer agrees.

Former Member
0 Kudos

Dear All,

An SAP support call was opened for this issue, and we finally got some clues on how to fix it. The most important advice (in my opinion) was to delete the original client data and import into a clean client container. The second was to reorganize the problematic tables in the source system before exporting. Finally, we set the "parallel" parameter in the TMS transport profile, and the import became significantly faster than before. A large PSAPUNDO tablespace was also needed in this case.
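For anyone who wants to try the same, a sketch of what the parallel setting could look like in the transport profile; the SID and process count here are placeholders, so please check OSS Note 1127194 for the exact syntax valid for your release:

```text
# TP_DOMAIN_<SID>.PFL -- transport profile (SID is a placeholder)
# number of parallel R3trans processes during import (example value)
<SID>/parallel = 4
```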

Answers (1)


Former Member
0 Kudos

Hi,

There is nothing you can do without cancelling the import and starting over; it's better to run update statistics in parallel.
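For the update statistics part, on Oracle this is typically done with brconnect; a sketch of a commonly used invocation (options may differ by BR*Tools release, so please verify against your documentation):

```text
# collect optimizer statistics for all tables, run as <sid>adm
brconnect -u / -c -f stats -t all
```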

Regards,

Anil

Former Member
0 Kudos

Hi Anil,

Thank you for the reply. I'm still fighting with this import. I increased the SAP memory parameters:

rsdb/ntab/entrycount: 60000 => 80000

abap/buffersize: 500000 kB => 800000 kB

rsdb/obj/buffersize: 20000 kB => 40000 kB

rsdb/obj/max_objects: 20000 => 40000

rsdb/ntab/ftabsize: 30000 kB => 60000 kB

I'm not sure about the values of the last two, rsdb/obj/buffersize and rsdb/obj/max_objects. I suspect that is why, after increasing them to 40000, I get this message in the logs: ETW000 twrtab reached memory limit; memory usage 2000000000.

But what is really unusual is that I need an incredibly large PSAPUNDO tablespace (40 GB) just to get the import of the biggest table through without it failing with this message: unable to extend segment by 8 in undo tablespace.

Is there any method to reduce the requirement for such a large UNDO tablespace a bit?
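For what it's worth, a sketch of how the undo situation can be inspected on Oracle; these are standard Oracle dictionary views, and the retention value below is only an illustrative assumption, not a recommendation:

```sql
-- current size of the PSAPUNDO tablespace in MB
SELECT ROUND(SUM(bytes)/1024/1024) AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'PSAPUNDO';

-- peak undo blocks consumed per 10-minute statistics interval
SELECT MAX(undoblks) AS max_undo_blocks
  FROM v$undostat;

-- a shorter UNDO_RETENTION lets committed undo be reused sooner
-- (seconds; example value, tune with care)
ALTER SYSTEM SET UNDO_RETENTION = 900;
```

Note that undo generated by a single long-running, uncommitted transaction cannot be released regardless of the retention setting, so the import of the biggest table may still dominate the undo requirement.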

Thank you.