
IDoc performance question

Former Member
0 Kudos

Hi experts,

I have a question regarding IDoc performance. We will receive an IDoc containing more than 300 000 records (about 20 fields per record) every day from an external system, and I was wondering about the performance of that import. I realise that many factors affect performance, but in general, is an IDoc with more than 300 000 records considered a large batch that requires a lot of power to import?

The file coming in from the external system and converted to an IDoc by middleware will most likely be split up due to limitations in the middleware software, so we might get batches of 50 000 records instead. I am not sure, though, whether that will make a difference.

Thank you!!

kind regards,

Dionisios

Accepted Solutions (1)

Former Member
0 Kudos

Hi Dionisios,

Certainly, processing such a volume of data keeps the available work processes busy for a long time.

It is better to split the IDoc into smaller chunks and schedule their processing.

Although the aggregate time needed to process the data stays the same, this approach does not tie up the available work processes, and it effectively gives you parallel processing if multiple work processes are available.

This can be adopted provided there is no requirement that the data be processed sequentially.
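As a rough illustration of the chunked approach (in Python rather than ABAP, since this is just the shape of the idea): split the records into fixed-size batches and hand each batch to whichever worker is free. The batch size, record contents, and the `process_batch` stand-in are all invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(records, size):
    """Yield successive fixed-size batches from a list of records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def process_batch(batch):
    # Stand-in for the real posting logic of one small IDoc.
    return len(batch)

records = list(range(300_000))            # stand-in for the incoming records
batches = list(chunk(records, 50_000))    # six batches of 50 000 each

# Each batch goes to whichever worker is free -- analogous to dispatching
# smaller IDocs across the available work processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(process_batch, batches))

print(len(batches), total)  # 6 batches covering all 300 000 records
```

Note that this only buys wall-clock time if the batches really are independent, which is why the no-sequential-processing condition above matters.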

Hope this helps.

Regards,

Ranga

Former Member
0 Kudos

Thank you for your replies, they were both very helpful. It is very likely that we will split up the IDoc into smaller batches, though I am concerned about the overall performance since we will also have other IDocs coming in from other interfaces. But at the moment there isn't much we can do about that.

What alternatives would you recommend that have traceability and error handling as good as IDocs? Would storing the data in a database table and then having R/3 read from it be a better choice? I suppose in that case we would have to build most of the error handling and logging ourselves.

kind regards,

Dionisios

Answers (2)

christian_wohlfahrt
Active Contributor
0 Kudos

Hi Dionisios,

IDoc handling itself is fast - your total runtime will depend on the further processing in the system.

Custom tables only make sense if you can run rough checks beforehand and discard huge amounts of the incoming data. Building your own error handling is then the price you pay...

Kind regards,

Christian

christian_wohlfahrt
Active Contributor
0 Kudos

Hello Dionisios,

In general, the IDoc inbound process is split into two parts: creating the IDoc and posting the data. The second part depends, of course, on your application.

If you were to receive one large IDoc with 300 000 positions, the application would have to handle the complete data set in one transaction -> huge memory consumption; it probably won't work.

Having 300 000 IDocs with only one position each -> the system is occupied with the overhead of handling them, so that is a bad idea too.

Splitting into portions of 50 000 sounds OK. Try to imagine changing such a document in SAP later: scrolling a hundred times before reaching a specific line is nasty (and slow). Only some documents in SAP can handle several hundred positions without too much delay for each action.
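To make the memory point concrete, here is a small sketch (Python, not SAP code) of reading a flat file lazily in fixed-size batches, so that at most one batch is in memory at a time instead of the full payload; the file contents and batch size are invented for the example.

```python
import itertools
import tempfile

def batched_lines(path, batch_size):
    """Lazily yield fixed-size batches of lines, so only one batch
    is held in memory at a time (unlike one giant all-in-one transaction)."""
    with open(path) as f:
        while True:
            batch = list(itertools.islice(f, batch_size))
            if not batch:
                return
            yield batch

# Demo with a small stand-in file: 11 records, batches of 4 -> 4 + 4 + 3.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    for i in range(11):
        tmp.write(f"record {i}\n")

sizes = [len(b) for b in batched_lines(tmp.name, 4)]
print(sizes)  # [4, 4, 3]
```

The same streaming idea is what the middleware split effectively gives you: each 50 000-record portion is one manageable transaction instead of one enormous one.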

Maybe this helps a little in your investigation,

kind regards,

Christian