Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Performance Issue when Using BAPI

Former Member
0 Kudos

Hi All,

We are creating sales orders via BAPI and expect around 2,000 per file. At the start of processing, it creates about 20 orders a minute, but as the file progresses the performance really degrades, down to about 5-6 orders a minute. Any idea how to improve the performance? Is it because of the memory allocation used by the BAPI? BAPI_TRANSACTION_COMMIT is called after every SO is created, and I have seen that this BAPI refreshes the buffers used. Will this suffice? How do I free up the memory used by the BAPI each time I create a new order? I suspect this has something to do with memory allocation in the BAPI. I have already used STARTING NEW TASK with the BAPI, in the hope that it would improve the performance.

Any help is appreciated.

Thanks,

Louisse

Edited by: Louisse Heartsdale on Sep 13, 2011 3:17 PM

21 REPLIES

yuri_ziryukin
Employee
0 Kudos

Hello Louisse,

One possible solution would be to reprogram your sales order creation for parallel processing in separate dialog work processes.

Please refer to this documentation:

http://help.sap.com/saphelp_nw04/helpdata/en/fa/096e92543b11d1898e0000e8322d00/frameset.htm

Kind regards,

Yuri

0 Kudos

Hi,

Currently, this is already being done in my code:

CALL FUNCTION 'Z_V_BAPI_SO_CREATEFROMDAT2_RFC'
  STARTING NEW TASK 'TASK'
  DESTINATION 'NONE'
  PERFORMING bapi_salesorder_create ON END OF TASK
  EXPORTING
    order_header_in      = wa_header
    order_header_inx     = wa_headerx
    logic_switch         = wa_bapisdls
  TABLES
    return               = t_return
    order_items_in       = t_items
    order_items_inx      = t_itemx
    order_partners       = t_partners
    order_schedules_in   = t_schedules
    order_schedules_inx  = t_schedulesx
    order_conditions_in  = t_conditions
    order_conditions_inx = t_conditionsx
    order_text           = t_texts
    extensionin          = t_extensionin.

This already triggers parallel tasks, but the performance is still the same.

Any inputs?

Thanks,

Louisse

0 Kudos

Hi Louisse,

Probably a silly question but... are you refreshing your bapi tables after each call?

0 Kudos

Hi,

Of course I refresh my internal tables. I even used FREE instead of REFRESH.

Thanks,

Louisse

0 Kudos

Hi,

OK, sorry for the stupid question ;) Another one, however: have you tried the WAIT parameter of BAPI_TRANSACTION_COMMIT?

Otherwise could you paste your main loop section here?

Kr,

Manu.

Former Member
0 Kudos

Did you do a trace using ST05?

Sure, you don't want to modify the BAPI, but if you find out what takes it so long (probably some of the SELECTs inside), you could get an idea whether providing additional information could speed up those SELECTs.

Maybe additional data could help you hit an index for the big SELECTs and so speed up the process.

But until you know WHAT exactly is being slow, it's quite hard to come up with ideas on how to speed things up.

0 Kudos

> Did you do a trace using ST05?

Most probably ST05 will not help in this case, as I expect the database time to be constant.

Former Member
0 Kudos

Hello,

We had to deal with a requirement to create 12,000 orders per night, with up to 45,000 on occasion. We used the standard BAPI, BAPI_SALESORDER_CREATEFROMDAT2. The process creates about 15 orders a second in batch. Among the techniques we used were:

Chop the data up into 3 sets and have 3 batch jobs running in parallel on the DB server. We were able to do this as the batch load on the machine is low at the time these jobs run. We call BAPI_TRANSACTION_COMMIT every 100 orders.

We did extensive runtime analysis of BAPI_SALESORDER_CREATEFROMDAT2 and found that program exits were doing unnecessary extra reads of tables such as KNA1 when the front-end program calling the BAPI was already doing this.

We found the BAPI spent time calling routines that weren't needed (e.g. LIS and Inbound Deliveries). We were unable to see how to switch these off, nor did we want to go 'inside' the BAPI to use the internal FM that SAP would use to create the sales order.

The sales orders were very simple (no more than 5 lines), but most were rush orders, meaning that we were creating a delivery as well. There was no availability checking.
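The "commit every 100 orders" batching described above could be sketched like this; the table and variable names are illustrative and the BAPI parameters are reduced to a minimum:

```abap
DATA lv_count TYPE i.

LOOP AT gt_orders INTO wa_header.
  " Create each order; the V1 updates are registered but not yet committed.
  CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT2'
    EXPORTING
      order_header_in = wa_header
    TABLES
      return          = t_return
      order_items_in  = t_items
      order_partners  = t_partners.

  lv_count = lv_count + 1.
  IF lv_count MOD 100 = 0.
    " Commit the registered updates for the last 100 orders in one go.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.
  ENDIF.
ENDLOOP.

" Commit whatever is left over from the last incomplete batch.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.
```

Committing per batch instead of per order trades commit overhead against longer-held locks, which is why it works best when no one else touches the orders in between.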

volker_borowski2
Active Contributor
0 Kudos

Hi,

Starting fast and getting slower suggests (sorry if some of these sound stupid, I am just brainstorming here):

- the file being read from the beginning each time;

- an intermediate result being selected into an internal table in a loop and not cleared for the next iteration (the table gets bigger each loop);

- SAP enqueues not being released fast enough, causing contention.

Second thing: you are starting new tasks with a customer function module and calling a form ON END OF TASK.

How do you keep track that all started children have finished?

Shouldn't you call a customer function on end-of-task to count down, so you know when all tasks are finished and your memory is not released too early?

Like:

build packages.
LOOP AT packages.
  increase counter.
  start new task with package
    (ON END OF TASK -> call customer FM that does the processing and decreases the counter).
ENDLOOP.
WAIT UNTIL counter = 0.
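To make this counter pattern concrete, here is a minimal ABAP sketch reusing the Z wrapper from the thread; the counter, the package table, and the reduced parameter list are illustrative:

```abap
DATA: gv_open_tasks TYPE i,
      lv_taskname   TYPE char8.

LOOP AT gt_packages INTO wa_header.
  gv_open_tasks = gv_open_tasks + 1.
  " Task names must be unique while a task is still running,
  " so derive one from the loop index (7.02+ string template;
  " use CONCATENATE on older releases).
  lv_taskname = |TSK{ sy-tabix }|.
  CALL FUNCTION 'Z_V_BAPI_SO_CREATEFROMDAT2_RFC'
    STARTING NEW TASK lv_taskname
    DESTINATION 'NONE'
    PERFORMING on_task_done ON END OF TASK
    EXPORTING
      order_header_in = wa_header.
ENDLOOP.

" Block until every child task has called back.
WAIT UNTIL gv_open_tasks = 0.

FORM on_task_done USING p_task TYPE clike.
  " Callback registered via ON END OF TASK:
  " collect the results of the finished task and count it down.
  RECEIVE RESULTS FROM FUNCTION 'Z_V_BAPI_SO_CREATEFROMDAT2_RFC'
    TABLES
      return = t_return.
  gv_open_tasks = gv_open_tasks - 1.
ENDFORM.
```

The WAIT UNTIL at the end is what keeps the caller's memory alive until all children have reported back.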

Volker

0 Kudos

Hi Volker,

Can you provide more context on this? I am not familiar with how to implement your suggestion.

Thanks,

Louisse

0 Kudos

> Hi Volker,
>
> Can you provide more context on this? I am not familiar with how to implement your suggestion.
>
> Thanks,
> Louisse

Hello Louisse,

this is all described in the SAP documentation I linked in my first reply.

It is just the way to control the number of parallel tasks you kick off.

Actually, I don't see any reason for a slowdown if you work with parallel tasks (controlling the number of simultaneously running tasks) and properly pass all parameters to the BAPI.

If all of this is already present, then you'll have to do a series of ST12 (ABAP+SQL) traces to see where the time is spent and which functions continuously consume more and more time.

Refer to Hermann Gahm's blog regarding ST12: /people/hermann.gahm/blog/2010/03/22/st12-150-tracing-user-requests-tasks-http

Regards,

Yuri

0 Kudos

Hi Yuri,

Let me provide more details of the program. The first BAPI call creates the sales order. After that, we have to update the created SO with the partner function Payer. This was not included in the creation of the SO, because if the header and line-item Payer are different, it results in an error in the BAPI. After the payer is updated, we need to update the Billing Date for the SO again. So for one order creation there are 3 calls to the BAPI.

I have nailed down the issue. When I go to change the SO, at times it is still locked after the BAPI_TRANSACTION_COMMIT. So I've used ENQUEUE and DEQUEUE for the SO. However, using ENQUEUE_EVVBAKE causes performance issues. So I tried removing it and just leaving DEQUEUE_EVVBAKE, since my goal is really to check whether the SO is still locked before I start making changes again. However, it didn't work, so I had to put the ENQUEUE FM back in. Is there any other way to check whether the SO is still locked before I start making the changes?

Thanks,

Louisse

0 Kudos

Hello Louisse,

the problem with this particular BAPI is that the document is "saved" and you cannot change it in memory before the data is committed. This is not the case, for example, in newer SAP products like CRM.

Therefore I would like to offer you 2 solutions.

Solution 1 - change to batch input, and in batch input simulate the change of the payer and billing date as if you were in transaction VA01. This should be possible without actually saving and committing the document in between.

Please note that batch input will have overhead for the PBO/PAI processing, and in the end you may end up with approximately the same speed.

Solution 2 - do not work sales order by sales order doing 3 operations; instead do the 1st operation for ALL sales orders, then the second operation for all of them, then the third.

That way you make sure there are no locking issues. However, you have to be more careful concerning error handling.

Regards,

Yuri

0 Kudos

Hi Yuri,

2 points based on your suggestions.

1. I am not aware of any BAPI or FM to mass-change sales orders, so I am not sure how to carry out this change.

2. We can't have the changes done at a later time. We need the created SO to be changed as soon as possible, so I am not sure I am going to go with this approach.

As far as other forums go, BAPI_TRANSACTION_COMMIT is supposed to clear all the buffers used by the BAPI. But I don't know why, even when this BAPI is used, the buffer memory of the BAPI still seems not to be freed.

Thanks for your input.

Regards,

Louisse

0 Kudos

Hello other colleagues who replied after me.

You probably missed Louisse's explanation; she has already found where the problem actually is.

> I have nailed down the issue. When I go to change the SO, at times it is still locked after the BAPI_TRANSACTION_COMMIT. So I've used ENQUEUE and DEQUEUE for the SO. However, using ENQUEUE_EVVBAKE causes performance issues. So I tried removing it and just leaving DEQUEUE_EVVBAKE, since my goal is really to check whether the SO is still locked before I start making changes again. However, it didn't work, so I had to put the ENQUEUE FM back in. Is there any other way to check whether the SO is still locked before I start making the changes?

Hello Louisse,

coming back to your question: the way you check whether the document is still locked is correct.

Sometimes the COMMIT WORK AND WAIT in the BAPI_TRANSACTION_COMMIT is not enough to make sure that the document is unlocked after processing.
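A minimal sketch of such a lock probe: try to acquire the sales order enqueue yourself and retry while the update task still holds it. The retry count and wait interval are illustrative, and the standard generated parameters of ENQUEUE_EVVBAKE are assumed:

```abap
" lv_vbeln holds the sales order number returned by the create BAPI.
DO 10 TIMES.
  CALL FUNCTION 'ENQUEUE_EVVBAKE'
    EXPORTING
      vbeln          = lv_vbeln
    EXCEPTIONS
      foreign_lock   = 1
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc = 0.
    " We got the lock, so the update task has released the document.
    " Free it again and proceed with the change BAPI.
    CALL FUNCTION 'DEQUEUE_EVVBAKE'
      EXPORTING
        vbeln = lv_vbeln.
    EXIT.
  ENDIF.
  " Still locked by the V1 update: back off briefly and retry.
  WAIT UP TO 1 SECONDS.
ENDDO.
```

The WAIT UP TO between attempts is what keeps this cheap; a tight retry loop on the enqueue server is where the performance issues mentioned above tend to come from.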

Please read the following:

The BAPI 'BAPI_TRANSACTION_COMMIT' is internally using statements COMMIT WORK (parameter WAIT = ' ') and COMMIT WORK AND WAIT (parameter WAIT = 'X'). In the following you will find the relevant part of the SAP Online Documentation for the COMMIT statement:

"...

This executes all high-priority (VB1) update function modules in the order of their registration and in a common database LUW. If you do not specify the addition AND WAIT, the program does not wait until the update work process has executed it (asynchronous updating), but instead is resumed immediately after COMMIT WORK. However, if the addition AND WAIT is specified, program processing after COMMIT WORK will not continue until the update work process has executed the high-priority update function modules (synchronous updating).

..."

From the documentation we would expect that using COMMIT WORK AND WAIT (or the BAPI with WAIT = 'X') should be sufficient to ensure that the new object has been created successfully before the next statement is executed. In some cases this is correct; in other cases it is not. Following our analysis, COMMIT WORK AND WAIT does not work if:

1) there is a COMMIT WORK executed within the BAPI. This COMMIT WORK statement is also triggering the Update processing.

Examples are BAPI_MATERIAL_SAVEDATA (see customer message 461596 2009), BAPI_ENTRYSHEET_CREATE (see message 0930303 2008) and BAPI_PO_CREATE (use BAPI BAPI_PO_CREATE1 instead).

Following our analysis the COMMIT WORK AND WAIT statement does not work as expected, if there is a COMMIT WORK statement executed within the BAPI itself. The reason is that no data are committed and no Update processing is triggered by the BAPI_TRANSACTION_COMMIT if a COMMIT WORK was executed before. The BAPI_TRANSACTION_COMMIT is therefore not waiting for the Update 1 processing which was triggered with the previous Commit in the application BAPI. The behavior can be perfectly tested with BAPI 'BAPI_SALESORDER_CREATEFROMDAT1' because this BAPI has a parameter 'WITHOUT_COMMIT' which controls whether a COMMIT WORK statement is executed.

When we used the BAPI with parameter 'WITHOUT_COMMIT' = '' the BAPI executed a COMMIT WORK internally and the BAPI_TRANSACTION_COMMIT (WAIT = 'X') did not work as expected. In this case also other options e.g. 'SET UPDATE TASK LOCAL', CALL FUNCTION 'TRANSACTION_BEGIN' together with COMMIT WORK AND WAIT did not show the required result.

When we used the BAPI with parameter 'WITHOUT_COMMIT' = 'X' the BAPI did not execute a COMMIT WORK internally and the BAPI_TRANSACTION_COMMIT (WAIT = 'X') worked fine.

2) There is more than one V1 update generated by the BAPI.

An example is BAPI 'BAPI_PRODORDCONF_CREATE_TT', which creates a confirmation for a production order. If the 'Backflush' indicator is used in the confirmation, there will be a material movement generated, too. The confirmation generates a V1 update, but the material movement also generates a V1 update. The COMMIT WORK AND WAIT will in this case only wait until the first V1 update is finished.

Edited by: Yuri Ziryukin on Sep 28, 2011 11:00 AM

0 Kudos

answer continued...

Now coming to my proposal from my last post (first all creates, then all changes), here are more details about it.

Grouping by BAPI method

The processing of one particular object from beginning to end by calling different BAPIs (BAPI_OBJECT_CREATE, BAPI_OBJECT_CHANGE) often requires that the update of one BAPI is completed before the next BAPI can be called. Below you will find a sample of how this would look in a straightforward implementation:

...

LOOP AT objects.
  CALL FUNCTION 'BAPI_OBJECT_CREATE'.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
  CALL FUNCTION 'BAPI_OBJECT_CHANGE'.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = ' '.
ENDLOOP.

Instead of doing all actions object by object, it can be better to group by BAPI method. Below you will find a sample of how this could look:

...

LOOP AT objects.
  CALL FUNCTION 'BAPI_OBJECT_CREATE'.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = ' '.
ENDLOOP.
LOOP AT objects.
  IF object does not exist yet.
    APPEND object TO object_work_list.
  ELSE.
    CALL FUNCTION 'BAPI_OBJECT_CHANGE'.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = ' '.
  ENDIF.
ENDLOOP.

Reprocess the objects in object_work_list and wait

This sample first uses the CREATE method to create all objects before the objects are changed. Since the second loop starts with the object that was created first, it is highly probable that this object already exists and can be changed. If an object does not exist yet, it can be put into a work list and processed at a later point in time.

Regards,

Yuri

Former Member
0 Kudos

Maybe there are not enough update work processes for executing the update requests?

Maybe you can try putting the statement SET UPDATE TASK LOCAL at the beginning of your FM 'Z_V_BAPI_SO_CREATEFROMDAT2_RFC'?

Edited by: Carsten Grafflage on Sep 19, 2011 12:28 PM

0 Kudos

Hi Carsten,

What does this command do?

Thanks,

Louisse

0 Kudos

Well, I couldn't explain it better than the documentation: http://help.sap.com/abapdocu_70/en/ABAPSET_UPDATE_TASK_LOCAL.htm

Regards,

Carsten

0 Kudos

Hi Carsten,

Sorry for asking this, but would you know what the impact will be if I have more than a thousand records, I am calling the BAPI 1,000 times as well, and I'm setting each call to local update task? Will that not disrupt the flow of the program? Sorry, I haven't used the statement before.

Thanks,

Louisse

Former Member
0 Kudos

Let's say you have only two update work processes, but you start four parallel worker processes; then two of them always have to wait for their turn (if you have to use COMMIT WORK AND WAIT). But if you use SET UPDATE TASK LOCAL, each of the parallel processes can perform its own update, without needing the update work processes. This can possibly lead to a higher system load, but may be faster.
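As an illustrative sketch (not the actual wrapper from the thread, and with the BAPI parameters reduced to a minimum), SET UPDATE TASK LOCAL would sit at the top of the RFC-enabled Z function, so each parallel task runs its V1 updates in its own work process:

```abap
FUNCTION z_v_bapi_so_createfromdat2_rfc.
  " Run update function modules in this work process instead of handing
  " them to dedicated UPD processes; the setting is valid until the
  " next COMMIT WORK.
  SET UPDATE TASK LOCAL.

  CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT2'
    EXPORTING
      order_header_in  = order_header_in
      order_header_inx = order_header_inx
    TABLES
      return           = return
      order_items_in   = order_items_in
      order_items_inx  = order_items_inx
      order_partners   = order_partners.

  " With local update, this COMMIT executes the V1 updates synchronously
  " in this process, so the order is committed (and its enqueue released)
  " by the time the function module returns to the caller.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ENDFUNCTION.
```

Note that with local update, a failing update terminates with a runtime error in this process rather than landing in SM13, so error handling around the call needs more care.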