Former Member

Possibility of avoiding COMMIT

Hello experts,

The scenario is that a COMMIT WORK AND WAIT statement is used inside a loop, and the loop executes thousands of times. Because of this COMMIT WORK AND WAIT statement, the report takes nearly one hour to execute. Inside the loop, after the commit statement, many function modules are called.

If I move the COMMIT WORK AND WAIT statement outside the loop, the database contains only the first entry when the function modules inside the loop are reached.

Please help.
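
Roughly, the structure looks like this (a simplified sketch; table and function module names are placeholders, declarations omitted):

LOOP AT lt_entries INTO ls_entry.
  MODIFY ztab FROM ls_entry.
  COMMIT WORK AND WAIT.             "executed thousands of times
  CALL FUNCTION 'Z_PROCESS_ENTRY'   "needs the committed entry
    EXPORTING
      is_entry = ls_entry.
ENDLOOP.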


6 Answers

  • Posted on Mar 07, 2012 at 12:25 PM

    Hi Namitha,

    Performance-wise, plain COMMIT WORK is better than COMMIT WORK AND WAIT, so try to use only COMMIT WORK.

    Also, let me know whether you are calling any function modules IN UPDATE TASK before the COMMIT statement.
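
    To illustrate the difference (a minimal sketch): COMMIT WORK triggers the update task and returns immediately, while COMMIT WORK AND WAIT blocks until the update task has finished.

    * Asynchronous: returns at once, update task runs in parallel.
    COMMIT WORK.

    * Synchronous: waits until the update task has finished, so the
    * data is guaranteed to be on the database afterwards.
    COMMIT WORK AND WAIT.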

    Kind regards

    Chetan


  • Former Member
    Posted on Mar 07, 2012 at 12:26 PM

    Hi namitha,

    I am not quite sure I understood your issue completely.

    But, if it is feasible, try updating the database and calling the function modules in two different loops.

    You can then issue the commit between these two loops.

    P.S.: If possible, update the database in one go (for example, from an internal table) instead of row by row, so that a single loop suffices; see the sketch below.
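
    A minimal sketch of this two-loop idea (ztab and Z_PROCESS_ENTRY are illustrative placeholders, declarations omitted):

    * First loop: perform all database updates, no commit yet.
    LOOP AT lt_entries INTO ls_entry.
      MODIFY ztab FROM ls_entry.
    ENDLOOP.

    * One commit for the whole batch.
    COMMIT WORK AND WAIT.

    * Second loop: all entries are now on the database, so the
    * function modules can safely read them.
    LOOP AT lt_entries INTO ls_entry.
      CALL FUNCTION 'Z_PROCESS_ENTRY'
        EXPORTING
          is_entry = ls_entry.
    ENDLOOP.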

    Regards

    Ram


  • Former Member
    Posted on Mar 07, 2012 at 02:47 PM

    Hello Namitha,

    Why don't you use INSERT ztable FROM TABLE itab followed by a single COMMIT WORK? This will definitely improve performance.
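
    For example (ztable and lt_data are placeholders), the array insert writes all rows in one database operation:

    * Array insert: all rows in a single database round trip.
    * ACCEPTING DUPLICATE KEYS skips rows whose key already exists
    * (sy-subrc = 4) instead of raising a runtime error.
    INSERT ztable FROM TABLE lt_data ACCEPTING DUPLICATE KEYS.
    COMMIT WORK.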

    Thanks,

    Prakash Reddy.


  • Posted on Mar 07, 2012 at 08:56 PM

    Hi Namitha,

    The problem is not that the COMMIT WORK itself takes so long; the problem is likely that you are calling a BAPI, and BAPIs carry a lot of overhead, including a number of other function module calls that are triggered at COMMIT WORK (google CALL FUNCTION - IN UPDATE TASK for more on this). Depending on which BAPI you are calling, there may be a more efficient way to update the database, but bypassing the BAPI is generally not recommended. BAPIs take so long because they do all sorts of checking, validating, and cross-table updating to ensure that the updates you are making are consistent.

    Also, I suggest you use BAPI_TRANSACTION_COMMIT instead of COMMIT WORK when committing changes made by a BAPI.
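
    A minimal sketch of the usual pattern (the BAPI call itself is application-specific and only hinted at here):

    DATA ls_return TYPE bapiret2.

    * ... CALL FUNCTION 'SOME_BAPI' ... (application-specific)

    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait   = 'X'        "behaves like COMMIT WORK AND WAIT
      IMPORTING
        return = ls_return.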

    Good Luck!

    Alex


    • Hi Namitha,

      > Also, I suggest you use BAPI_TRANSACTION_COMMIT instead of COMMIT WORK when committing changes made by a BAPI.

      BAPI_TRANSACTION_COMMIT is almost identical to COMMIT WORK. The only difference is a subsequent call to the function module BUFFER_REFRESH_ALL, which will not really help in this case.

  • Posted on Mar 08, 2012 at 08:42 AM

    > The scenario is that a COMMIT WORK AND WAIT statement is used inside a loop that runs thousands of times [...] if I move the COMMIT WORK AND WAIT statement outside the loop, the database contains only the first entry when the function modules inside the loop are reached.

    Hello Namitha,

    well, if your subsequent function calls assume that the document is committed, you can hardly do anything about it. But I would like to propose something that I have already suggested in another thread.

    If you process many documents in a loop in the following way:

    LOOP.
      CREATE DOCUMENT.
      COMMIT.
      CHANGE DOCUMENT or CREATE RELATED DOCUMENT.
      COMMIT.
    ENDLOOP.

    then you can restructure your algorithm as follows:

    LOOP.
      CREATE DOCUMENT.
    ENDLOOP.
    COMMIT.

    LOOP.
      CHANGE DOCUMENT or CREATE RELATED DOCUMENT.
    ENDLOOP.
    COMMIT.

    This change will allow you to minimize the number of COMMITs and significantly improve performance.

    Of course you'll have to take care of error handling and reprocessing.

    Regards,

    Yuri


    • Hello Namitha,

      your code example is a perfect candidate for the two-loop approach I suggested above.

      In the first loop, you call CCAP_ASSIGN_OBJECT_TO_ALT_DATE for all objects (or in packages) without a COMMIT, and you commit once at the end.

      Then you make a second loop and call CSDE_BOM_DATE_CHECK for the same list of objects, roughly as sketched below.
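
      Schematically (the parameter interfaces of both function modules are omitted here for brevity; the real calls need their import/export parameters):

      LOOP AT lt_objects INTO ls_object.
        " real call needs the module's import parameters
        CALL FUNCTION 'CCAP_ASSIGN_OBJECT_TO_ALT_DATE'.
      ENDLOOP.
      COMMIT WORK AND WAIT.   "one commit for all assignments

      LOOP AT lt_objects INTO ls_object.
        CALL FUNCTION 'CSDE_BOM_DATE_CHECK'.
      ENDLOOP.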

      I hope now it's clear enough.

      Yuri

  • Posted on Apr 24, 2012 at 10:05 AM

    You could try to remove the AND WAIT, but ensure that no lock conflict arises during the next call of the BAPI: if the BAPI/transaction needs to lock shared objects between two calls, you can run into locked records until the update tasks release the locks.

    Otherwise, you could try to parallelize the update over independent blocks of data, provided you are sure that no locks can arise between the processes.
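
    A minimal sketch of that idea with asynchronous RFC, assuming an RFC-enabled wrapper function module of your own and a server group configured in RZ12 (Z_UPDATE_PACKAGE and 'parallel_generators' are placeholders):

    DATA lv_task(8) TYPE c.

    LOOP AT lt_packages INTO ls_package.
      lv_task = sy-tabix.
      CONDENSE lv_task.
      " Each task runs and commits in its own work process / LUW.
      CALL FUNCTION 'Z_UPDATE_PACKAGE'
        STARTING NEW TASK lv_task
        DESTINATION IN GROUP 'parallel_generators'
        EXPORTING
          is_package = ls_package
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2
          resource_failure      = 3.
    ENDLOOP.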

    Regards,

    Raymond

