
PI/PO JDBC Usage Best Practices for High Volume

Hi All!

We are currently using PI/PO to integrate with legacy systems, and most of it involves receiving XML from SAP and then performing JDBC inserts/updates on legacy databases.

I'm not a PO specialist, but what I gathered from the situation is basically this: no matter how many "channels" are split in PO, in the end there is only one JDBC connector committing updates on the legacy side, which effectively creates a single queue of all messages and does not allow them to be processed in parallel.

In situations where PO receives, say, 11 thousand messages at once and needs to perform that volume of commits via JDBC, all other interfaces are held up, which has sometimes caused high-impact incidents and/or severe slowness in the system.

I know that in plain Java, JDBC can have different threads posting commits in parallel, but no one has been able to confirm whether this can be done in PI/PO as well.
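To illustrate what I mean by parallel commits in plain Java: a sketch like the one below uses a fixed worker pool, where each submitted task would hold its own connection and commit independently. The JDBC calls themselves are only indicated in comments (the class and method names here are my own illustration, not anything from PI/PO), and the commit is simulated with a counter so the sketch is self-contained:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: committing messages in parallel with a worker pool.
// In real code each task would use its own java.sql.Connection; here the
// commit is simulated with a counter so the example runs standalone.
public class ParallelCommitSketch {

    static int processInParallel(int messageCount, int workerCount) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workerCount);
        AtomicInteger committed = new AtomicInteger();
        for (int i = 0; i < messageCount; i++) {
            pool.submit(() -> {
                // Real code would do roughly:
                //   stmt = conn.prepareStatement(...); stmt.executeUpdate(); conn.commit();
                committed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return committed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 11k messages spread over 8 workers instead of one serial connection.
        System.out.println(processInParallel(11000, 8));
    }
}
```

With one shared connection this degree of parallelism is not possible, which is why a single JDBC receiver channel behaves like a serial queue.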

On top of that, I also don't know whether, given the huge volume of messages to the legacy system, PI/PO should adopt a more "messaging"-focused architecture (deliver the messages and leave the commit work to the destination application).
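The "messaging" alternative I have in mind can be sketched as a producer/consumer split: the middleware only delivers messages to a queue, and the destination application drains that queue and commits at its own pace. The in-memory queue below stands in for a real messaging layer (JMS, for example), and all names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: middleware delivers only; the destination consumes
// and commits on its own schedule. A BlockingQueue stands in for a real
// messaging layer such as JMS.
public class MessagingSketch {

    static int deliverAndConsume(int messageCount) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer side (PI/PO's role): hand over the message, no DB commit.
        for (int i = 0; i < messageCount; i++) {
            queue.put("msg-" + i);
        }

        // Destination side: drain and commit in batches it controls.
        int consumed = 0;
        while (queue.poll() != null) {
            consumed++; // real code: accumulate rows, then conn.commit() per batch
        }
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(deliverAndConsume(11000));
    }
}
```

The point of the split is that a burst of 11k messages fills the queue quickly without tying up the integration layer, while the database load is paced by the consumer.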

Given that scenario, what would be the most advisable approach?
