
Sender JDBC blocks successive polls with a big message size.

Hi friends,


We have noticed that when the sender JDBC channel selects a huge set of records, the channel is blocked and does not perform a new poll. Even if we create a second channel to run the two processes in parallel, the second one waits as well. Only when the adapter has sent the message to the Integration Engine, i.e. when the audit-log step "Trying to put the message into the send queue." has finished, does the adapter perform a new poll. This step is the bottleneck, so I don't think the problem is in the SELECT/UPDATE statements.
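For context, the sender channel's poll cycle (SELECT the unprocessed rows, hand them to the messaging layer, then UPDATE the processed flag) can be sketched roughly like this. This is an illustrative Python/sqlite3 sketch of the pattern, not PI code, and the table/column names are made up; the point is that the poll cannot finish until delivery returns, which matches the blocking described above.

```python
import sqlite3

def poll_once(conn, batch_handler):
    """One poll: SELECT unprocessed rows, deliver them, then UPDATE the flag."""
    rows = conn.execute(
        "SELECT id, payload FROM staging WHERE processed = 0").fetchall()
    if rows:
        batch_handler(rows)  # blocks until the messaging layer accepts the data
        conn.executemany("UPDATE staging SET processed = 1 WHERE id = ?",
                         [(r[0],) for r in rows])
        conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging"
             " (id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER)")
conn.executemany("INSERT INTO staging VALUES (?, ?, 0)", [(1, "a"), (2, "b")])

delivered = []
n = poll_once(conn, delivered.extend)
print(n)  # 2 -- and no new poll can start until batch_handler has returned
```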


We have noticed that the receiver JDBC adapter has parameters to deal with parallelism, but we haven't found any for the sender JDBC adapter.

The Transaction Isolation Level is set to Serializable; we have also tried Read Committed.


What we have tried so far:


1. Defining an Operating System Command -> Timeout: the SAP Help documentation states that when the timeout is reached the active program will run in batch mode, but we have not noticed any improvement.

2. Disconnect from Database After Processing Each Message: even with this checkbox marked, the adapter keeps blocking on the large query.


The system is a PI 7.0 SP12.


I have found Note 1084161 (Performance decrease after applying XI 3.0 SP20 / 7.0 SPS12), but I'm not sure whether this note could improve the performance. Has anyone tried it?


I really appreciate any help you can provide.


Regards.



4 Answers

  • Best Answer
    Mar 04, 2015 at 07:32 PM

    Hi all,

    After a little investigation: SAP PI 7.0 writes the messages from the JDBC adapter to the Integration Engine sequentially; it doesn't have parallelism, in contrast to the File adapter, for example.

    By adding one more Java node (although only one is active at a time with this channel, this way I avoid the channel stopping) and also increasing the physical memory, the problem has been solved.
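The sequential behavior described above, where a single writer drains the send queue in order so that one big message stalls everything behind it, can be simulated in a few lines. This is purely an illustration of head-of-line blocking, not PI internals; the message names and costs are invented.

```python
from collections import deque

# One worker drains the queue in order; cost is proportional to message size.
send_queue = deque([("big", 90), ("small", 1), ("small", 1)])

clock = 0
finished_at = {}
while send_queue:
    name, cost = send_queue.popleft()
    clock += cost                      # worker is busy for the whole message
    finished_at.setdefault(name, clock)
print(finished_at)  # {'big': 90, 'small': 91} -- small messages wait behind big
```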

    Thank you all for your valuable suggestions.

    Regards.


  •
    Former Member
    Jan 13, 2015 at 05:02 AM

    Hi Inaki,

    There might be a chance that the JDBC adapter is locked due to the huge number of records.

    You might need to unlock the adapter and then try again.

    Thanks and Regards,

    Naveen    


    • Thanks Naveen,

      I've already used that parameter to avoid the JDBC adapter lock and the issue of polls that retrieve no data, but that parameter doesn't influence the send-queue lock. I'm wondering whether this behavior is the normal functionality. I haven't found any documentation on increasing the queues in the Adapter Engine, only the threads, and the threads are never all used at the same time.

      Regards.

  •
    Former Member
    Jan 13, 2015 at 05:28 AM

    Hi Inaki

    Have you checked the option 'Terminate program after timeout' in the sender JDBC adapter?

    I don't think there are any further options in the sender JDBC adapter to solve your problem.

    In the worst case, you may have to optimize your select query (perhaps selecting a smaller number of rows and increasing the number of polls by reducing the poll interval).
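Limiting the rows per poll, as suggested, could look like the following sketch. It is illustrative only (Python/sqlite3 with invented table and column names); Oracle would use ROWNUM or FETCH FIRST instead of LIMIT, and in PI this corresponds to restricting the SELECT statement of the sender channel.

```python
import sqlite3

BATCH = 100  # rows per poll; smaller batches mean smaller messages

def poll_batch(conn):
    """Fetch at most BATCH unprocessed rows, flag them, return them."""
    rows = conn.execute(
        "SELECT id, payload FROM staging"
        " WHERE processed = 0 ORDER BY id LIMIT ?", (BATCH,)).fetchall()
    conn.executemany("UPDATE staging SET processed = 1 WHERE id = ?",
                     [(r[0],) for r in rows])
    conn.commit()
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging"
             " (id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER)")
conn.executemany("INSERT INTO staging VALUES (?, ?, 0)",
                 [(i, "x") for i in range(250)])

sizes = []
while True:
    got = poll_batch(conn)
    if not got:
        break
    sizes.append(len(got))
print(sizes)  # [100, 100, 50] -- 250 rows drained in three small polls
```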

    Thanks,

    Indrajit


    • Hi Indrajit,

      Thanks for your help, but the SELECT/UPDATE statements are really fast, accessing by index. The issue is when the Adapter Engine tries to copy the message to the queue in the Integration Engine, i.e. the step "Trying to put the message into the send queue". However, in the production environment this step is quicker (roughly 3 times faster); I don't know exactly why, because the systems' dimensions are similar. The problem persists in production, but it is quick enough that the customer hasn't noticed it so far.

      I haven't tried the option you mentioned because no timeout is actually thrown, and killing the polling thread to terminate the program doesn't seem a good choice; the time for huge messages (between 50 MB and 90 MB) in the Quality system is about 2 to 4 minutes.
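For scale, the rough throughput implied by those figures (50-90 MB taking 2-4 minutes end to end) can be checked with a quick calculation, pairing the smallest message with the longest time and vice versa:

```python
# Rough effective throughput implied by the figures above.
worst = 50 / (4 * 60)   # smallest message, longest time, in MB/s
best = 90 / (2 * 60)    # largest message, shortest time, in MB/s
print(f"{worst:.2f} - {best:.2f} MB/s")  # 0.21 - 0.75 MB/s
```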

      Regards.

  •
    Former Member
    Jan 13, 2015 at 07:17 AM

    Hi Inaki,

    In my opinion, connection pooling is either not set up or not enabled on the database side you are connecting to.

    Also, if connection pooling is enabled, then in the JDBC sender Connection URL, depending on the type of database you are connecting to, you need to either specify the name of the pooled DataSource or, for Oracle, specify the POOLED keyword as the connect type.
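For readers unfamiliar with the concept: a connection pool keeps a fixed set of open connections and hands them out on demand, so each poll reuses a connection instead of paying the connect cost again. A minimal generic sketch follows (plain Python with sqlite3; this is not the PI or Oracle pooling mechanism, just the idea):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal fixed-size pool: acquire blocks until a connection is free."""
    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(
                sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        return self._pool.get()   # reuse an already-open connection

    def release(self, conn):
        self._pool.put(conn)      # return it to the pool instead of closing

pool = ConnectionPool(size=2)
conn = pool.acquire()
one = conn.execute("SELECT 1").fetchone()[0]
pool.release(conn)
print(one)  # 1
```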

    Regards,

    Alka.
