on 03-22-2012 8:31 AM
Dear SAP community,
We have an issue that occurred during a client import. The exported client data is about 14 GB, and the import has now been running for 50 hours. For the first roughly 14 hours the import went well, but then the import/export buffer swaps started to increase dramatically (now around 1,300,000), which slowed the import down severely. My question: is there any possibility/solution to decrease the import/export swaps during an active import (i.e. without cancelling the import and starting over)?
Thank you very much in advance for any answers.
Hi Tomas,
You should really consider a parallel import next time. Please check SAP OSS Note 1127194 for details.
Cheers,
Denny
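For reference, parallel transport imports are usually switched on via a parameter in the TMS transport profile. A minimal sketch follows; the parameter name and value here are illustrative, and the exact settings valid for your release should be taken from SAP Note 1127194:

```
# TP_DOMAIN_<SID>.PFL -- illustrative fragment, not a complete profile
# <SID> is a placeholder for your system ID
<SID>/parallel = 4    # number of parallel R3trans import processes (example value)
```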
Dear All,
An SAP support call was opened for this issue, and we finally got some clues on how to fix it. The most important piece of advice (in my opinion) was to delete the original client data and import into a clean client container. The second was to reorganize the problematic tables in the source system before exporting. Finally, we set the "parallel" parameter in the TMS profile, and the import became noticeably faster than before. A large PSAPUNDO tablespace was also needed in this case.
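For anyone following the same steps, the table reorg and the PSAPUNDO extension can be done with standard Oracle DDL. This is only a hedged sketch: the schema, table, index, datafile path, and sizes below are placeholders, not values from this system:

```sql
-- Illustrative: extend PSAPUNDO with an additional datafile
-- (path and sizes are placeholders; adjust to your sapdata layout)
ALTER TABLESPACE PSAPUNDO
  ADD DATAFILE '/oracle/<SID>/sapdata4/undo_2/undo.data2'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 20G;

-- Illustrative reorg of a problematic table before the export
ALTER TABLE SAPSR3.<TABLE> MOVE;
-- a MOVE leaves the table's indexes UNUSABLE, so rebuild them
ALTER INDEX SAPSR3.<INDEX> REBUILD;
```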
Hi,
There is nothing you can do (without cancelling the import and starting over); it is better to run update statistics in parallel alongside the import.
Regards,
Anil
Hi Anil,
Thank you for your reply. I'm still fighting with this import. I increased the following SAP memory parameters:
rsdb/ntab/entrycount   60000 => 80000
abap/buffersize        500000 kB => 800000 kB
rsdb/obj/buffersize    20000 kB => 40000 kB
rsdb/obj/max_objects   20000 => 40000
rsdb/ntab/ftabsize     30000 kB => 60000 kB
I'm not sure about the values of these two: rsdb/obj/buffersize and rsdb/obj/max_objects. I suspect this is why, even after increasing them to 40000, I still get this message in the logs: ETW000 twrtab reached memory limit; memory usage 2000000000.
But what is really unusual is that I need an incredibly large PSAPUNDO tablespace (40 GB) just to get the import of the biggest table past this message: unable to extend segment by 8 in undo t.
Is there some method to lower the requirement for such a big UNDO tablespace?
Thank you.
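As a side note for sizing, the undo demand during the import can be observed in Oracle's standard v$undostat view, which keeps one row per 10-minute interval. A quick illustrative query (peak block count times the DB block size gives an approximate byte figure):

```sql
-- Peak undo blocks consumed in any statistics interval, plus the
-- longest-running query seen (long queries force undo retention)
SELECT MAX(undoblks)    AS max_undo_blocks,
       MAX(maxquerylen) AS longest_query_sec
FROM   v$undostat;
```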