We have a scenario where the legacy SAP system has a lot of data extraction programs that download files and FTP them somewhere. These files range anywhere between 10MB and 350MB in size.
We are upgrading to ECC6 and XI. We need to send the data from the ABAP programs to XI and then out via the file adapter.
Our testing has shown that once we start processing messages above approx. 25MB, XI grinds to a halt (Java out-of-memory errors, CPIC errors, etc.). These messages actually grow to around 100MB once wrapped in XML, and this happens on the transfer of the message from ECC6 to XI - we haven't even reached the file adapter yet. We are using an ABAP proxy to transfer the data, which seems to handle the large sizes much better than RFC.
I've gone through all the previous forum posts about increasing the heap size etc., but my question is more architectural - i.e. should we use XI for these types of data extracts at all?
There is a weblog (/people/william.li/blog/2006/09/08/how-to-send-any-data-even-binary-through-xi-without-using-the-integration-repository) that mentions you can send any file through XI without mapping and without repository objects, simply by using a receiver determination and the file adapter. We have also tested this and noticed that even though processing is much quicker, the adapter engine still slows down, and if we have several of these processing at once we're unsure of the consequences! And that was only with a 60MB message.
Michal Krawczyk mentioned in a comment that it would be better to use a Java proxy with some Java code to copy or FTP the files instead - BUT... is it even worth using XI in this case? What does it add?
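For what it's worth, the Java-proxy idea Michal mentions essentially comes down to streaming the file in fixed-size chunks instead of materializing a 100MB XML payload in memory, which is what kills the Integration Engine. A minimal sketch of that streaming copy, assuming plain java.io (class and method names are mine, not from any SAP or proxy API):

```java
import java.io.*;
import java.nio.file.*;

public class StreamCopy {

    // Copy between streams in fixed-size chunks so memory use stays
    // constant (one 64 KB buffer) regardless of file size - the key
    // difference from wrapping the whole extract in one XML message.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical source/target paths for illustration only;
        // in practice the target could equally be an FTP output stream.
        Path src = Paths.get(args[0]);
        Path dst = Paths.get(args[1]);
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            long bytes = copy(in, out);
            System.out.println("Copied " + bytes + " bytes");
        }
    }
}
```

The same loop works against an FTP connection's output stream, so a 350MB file never has to exist in the JVM heap at once - which is exactly the property the XI message pipeline lacks here.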