Former Member

BAPI_PLANNEDORDER_CREATE in Parallel Processing not working.

Hi Experts,

I need some help with BAPI_PLANNEDORDER_CREATE in parallel processing mode.

I can see the planned order number after the call, but the commit is not working.

Any help would be appreciated.

CALL FUNCTION 'BAPI_PLANNEDORDER_CREATE'
  STARTING NEW TASK lv_task
  DESTINATION 'NONE'
  PERFORMING set_function1_done ON END OF TASK
  ...

***********************************************

************************************************************************
* TYPES and TYPE-POOLS *
************************************************************************
TYPES: BEGIN OF ty_data,
         pasch LIKE t460c-pasch,  " Planned order type
         plscn LIKE plaf-plscn,   " Planning version
         matnr LIKE plaf-matnr,   " Planning material
         plwrk LIKE plaf-plwrk,   " Planning plant
         pwwrk LIKE plaf-pwwrk,   " Production plant in planned order
         gsmng LIKE plaf-gsmng,   " Total planned order quantity
         avmng LIKE plaf-avmng,   " Fixed quantity of scrap from production
         psttr LIKE plaf-psttr,   " Order start date in planned order
         pedtr LIKE plaf-pedtr,   " Order finish date in planned order
         pertr LIKE plaf-pertr,   " Planned opening date in planned order
         umskz LIKE plaf-umskz,   " Conversion indicator for planned order
         auffx LIKE plaf-auffx,   " Firming indicator for planned order data
         verid LIKE plaf-verid,   " Production version
         term1 LIKE tcx00-term1,  " Lead time scheduling
       END OF ty_data.

TYPES: BEGIN OF ty_log,
         description(300),
         err,
       END OF ty_log.

TYPES: BEGIN OF ty_pv_data,
         matnr LIKE mkal-matnr,
         werks LIKE mkal-werks,
         verid LIKE mkal-verid,
         bdatu LIKE mkal-bdatu,
         adatu LIKE mkal-adatu,
         mksp  LIKE mkal-mksp,
       END OF ty_pv_data.

TYPES: tyt_pv_data TYPE TABLE OF ty_pv_data.


**********************************************************************
* CONSTANTS *
************************************************************************
CONSTANTS: c_x      VALUE 'X',
           c_i      VALUE 'I',
           c_bt(2)  VALUE 'BT',
           c_sep    VALUE '-',
           c_s      VALUE 'S',
           c_e      VALUE 'E',
           c_61(2)  VALUE '61',
           c_010(3) VALUE '010',
           c_1      LIKE mkal-mksp VALUE '1',
           done     VALUE 'X'.


************************************************************************
* Internal Tables *
************************************************************************
DATA: headerdata     LIKE bapiplaf_i1 OCCURS 0 WITH HEADER LINE,
      it_headerdata  LIKE bapiplaf_i1 OCCURS 0 WITH HEADER LINE,
      pit_headerdata LIKE bapiplaf_i1 OCCURS 0 WITH HEADER LINE,
      it_data        TYPE STANDARD TABLE OF zdata WITH HEADER LINE, " source data for the planned orders
      pit_data       TYPE zdata,                                    " work area for one source line
      it_log         TYPE STANDARD TABLE OF ty_log WITH HEADER LINE,
      pit_log        TYPE STANDARD TABLE OF ty_log WITH HEADER LINE. " processing log


************************************************************************
* DATA Declarations *
************************************************************************
RANGES: r_qa_valid FOR sy-datum,
        r_pv_valid FOR sy-datum.

DATA: wa_headerdata    LIKE bapiplaf_i1,
      lt_return        LIKE TABLE OF bapireturn1 WITH HEADER LINE,
      lt_commit_return LIKE TABLE OF bapiret2 WITH HEADER LINE,
      l_plannedorder   LIKE bapi_pldord-pldord_num,
      l_qty_aux(17),
      lv_task          TYPE char10,
      functioncall1    TYPE char10.

DATA: lt_comdata LIKE TABLE OF bapi_pldordcomp_i1 WITH HEADER LINE.



* Prepare header data for the BAPI
LOOP AT it_data INTO pit_data.
  CLEAR pit_headerdata.

* Convert material number to internal format
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
    EXPORTING
      input  = pit_data-matnr
    IMPORTING
      output = pit_headerdata-material.

  pit_headerdata-pldord_profile   = pit_data-pasch.
  pit_headerdata-plng_scenario_lt = pit_data-plscn.
  pit_headerdata-plan_plant       = pit_data-plwrk.
  pit_headerdata-prod_plant       = pit_data-pwwrk.
  pit_headerdata-total_plord_qty  = pit_data-gsmng.
  pit_headerdata-fixed_scrap_qty  = pit_data-avmng.
  pit_headerdata-order_start_date = pit_data-psttr.
  pit_headerdata-order_fin_date   = pit_data-pedtr.
  pit_headerdata-plan_open_date   = pit_data-pertr.
  pit_headerdata-conversion_ind   = pit_data-umskz.
  pit_headerdata-firming_ind      = pit_data-auffx.
  pit_headerdata-version          = pit_data-verid.
  pit_headerdata-det_schedule     = pit_data-term1.
  pit_headerdata-stge_loc         = pit_data-lgort.
  pit_headerdata-mrp_area         = pit_data-berid.
  pit_headerdata-use_coll_upd     = 'X'.

  APPEND pit_headerdata.
ENDLOOP.


LOOP AT pit_headerdata INTO wa_headerdata.

* wa_headerdata-use_coll_upd = 'X'.
* AT LAST.
*   wa_headerdata-last_order = 'X'.
* ENDAT.

  REFRESH lt_comdata.
  CLEAR lt_comdata.

  l_qty_aux = wa_headerdata-total_plord_qty.

* MOVE wa_headerdata-material TO lt_comdata-material.
  APPEND lt_comdata.

  lv_task = sy-tabix.

* Create the planned order asynchronously in a new task
  CALL FUNCTION 'BAPI_PLANNEDORDER_CREATE'
    STARTING NEW TASK lv_task
    DESTINATION 'NONE'
    PERFORMING set_function1_done ON END OF TASK
    EXPORTING
      headerdata     = wa_headerdata
*   IMPORTING parameters cannot be used with STARTING NEW TASK;
*   the results have to be fetched with RECEIVE RESULTS in the callback
*     return       = lt_return
*     plannedorder = l_plannedorder
    TABLES
      componentsdata = lt_comdata.

* Wait for the asynchronous reply of this task
  WAIT UNTIL functioncall1 = done.

  IF lt_return-type NE c_e.
*   lt_return-type = c_s or lt_return-type = 'I'

*   If the create was successful, commit
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      IMPORTING
        return = lt_commit_return.

    CALL FUNCTION 'DEQUEUE_ALL'.

*   Prepare the log
    CLEAR pit_log.
    CONCATENATE l_plannedorder 'created successfully'
           INTO pit_log-description SEPARATED BY space.
    APPEND pit_log.
  ELSE.
    CLEAR pit_log.
    CONCATENATE text-007 wa_headerdata-material c_sep text-008
                wa_headerdata-plan_plant c_sep text-009 l_qty_aux
                text-010 lt_return-message
           INTO pit_log-description SEPARATED BY space.
    MOVE c_x TO pit_log-err.
    APPEND pit_log.
  ENDIF.

  CLEAR wa_headerdata.
ENDLOOP.


3 Answers

  • Apr 27 at 06:55 AM

    I think your commit affects the report's LUW, not the parallel session in which the BAPI was started.

    Check the documentation: https://help.sap.com/doc/abapdocu_751_index_htm/7.51/en-US/abapcall_function_starting.htm

    You can see that the addition DESTINATION creates a new session/LUW. Here is the relevant part:

    As with every RFC, an asynchronous RFC opens a user session. If a calling program raises multiple consecutive asynchronous RFCs with different destinations or task IDs or if a connection no longer exists, the called function modules are processed in parallel in different user sessions automatically. This property can be exploited when running applications in parallel. Since the associated management tool can cause resource bottlenecks on both the client and the server, this kind of parallel processing is only recommended using the addition DESTINATION IN GROUP. 
    Asynchronous RFC triggers a database commit in the calling program. An sRFC in updates is an exception to this. 
    
    Calls using STARTING NEW TASK are always executed using the RFC interface and a destination specified as dest is always interpreted accordingly. This is why, unlike in synchronous RFC, initial string or text fields containing only blanks cannot be specified for dest. 
    
    The task ID passed as task does not need to be unique for each call. Unique task IDs can, however, help to identify calls within a callback routine.
    
    If by mistake the statement RECEIVE is not used in a callback routine specified using the PERFORMING addition or the CALLING addition, the connection is persisted as when RECEIVE is specified using the addition KEEPING TASK. 

    Try to wrap both BAPIs (the creation one and the commit one) into a Z... RFC-enabled function module and call that in the parallel task.
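
    A minimal sketch of such a wrapper, assuming a hypothetical RFC-enabled function module Z_PLDORD_CREATE_COMMIT whose interface mirrors the BAPI (all names and parameters here are illustrative, not a finished implementation):

    FUNCTION z_pldord_create_commit.
    *"  IMPORTING
    *"     VALUE(is_headerdata) TYPE bapiplaf_i1
    *"  EXPORTING
    *"     VALUE(ev_plannedorder) TYPE bapi_pldord-pldord_num
    *"     VALUE(es_return) TYPE bapireturn1
    *"  TABLES
    *"     it_componentsdata STRUCTURE bapi_pldordcomp_i1 OPTIONAL

      " Create the planned order in this RFC session
      CALL FUNCTION 'BAPI_PLANNEDORDER_CREATE'
        EXPORTING
          headerdata     = is_headerdata
        IMPORTING
          return         = es_return
          plannedorder   = ev_plannedorder
        TABLES
          componentsdata = it_componentsdata.

      " Commit (or roll back) in the SAME LUW as the create call
      IF es_return-type <> 'E'.
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
          EXPORTING
            wait = 'X'.
      ELSE.
        CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
      ENDIF.

    ENDFUNCTION.

    Because the create and the commit now run in the same RFC session, the commit actually applies to the order created in that session.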


    • As I said in my original answer, BAPI_PLANNEDORDER_CREATE and BAPI_TRANSACTION_COMMIT have to be in the same LUW; you cannot have BAPI_PLANNEDORDER_CREATE called in a parallel task and BAPI_TRANSACTION_COMMIT in the main flow.

      Either you parallelize the Z... RFC calls on the Java side, or you create a second Z... RFC with both BAPIs inside, so that the main Z... calls your second Z... in a parallel task (a sketch follows below).

      I'm curious how many planned orders you are going to create to end up with a "huge amount" of data.
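
      A sketch of how the main flow could dispatch such a wrapper in parallel tasks; the wrapper name Z_PLDORD_CREATE_COMMIT (sketched above), the callback set_function1_done, and the global counter gv_done_count (assumed to be incremented in that callback) are placeholders:

      DATA: lv_task TYPE char10,
            lv_sent TYPE i.

      LOOP AT pit_headerdata INTO wa_headerdata.
        lv_sent = lv_sent + 1.
        lv_task = lv_sent.

        " One wrapper call per planned order; each task runs in its own LUW
        CALL FUNCTION 'Z_PLDORD_CREATE_COMMIT'
          STARTING NEW TASK lv_task
          DESTINATION IN GROUP DEFAULT
          PERFORMING set_function1_done ON END OF TASK
          EXPORTING
            is_headerdata = wa_headerdata
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.

        IF sy-subrc <> 0.
          " No free resources or RFC error; a real program would retry or log this entry
        ENDIF.
      ENDLOOP.

      " Let all outstanding tasks report back before reading the log
      WAIT UNTIL gv_done_count >= lv_sent.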

  • Apr 27 at 01:26 AM

    Hi Amit,

    Try to work in this way:

    1. Create a new function module that is remote-enabled (RFC option, see the attached sap-se37-z-rfc.png) and call it like below.

    CALL FUNCTION 'Z_NEW_RFC'
      STARTING NEW TASK ...
      DESTINATION SPACE
      ...
      TABLES
        it_headerdata = ...
      ...

    2. Call BAPI_PLANNEDORDER_CREATE inside the new RFC ...

    CALL FUNCTION 'BAPI_PLANNEDORDER_CREATE'
      EXPORTING
        headerdata     = ...
      IMPORTING
        return         = ...
        plannedorder   = ...
      TABLES
        componentsdata = ...
      ...

    * If the BAPI returned successfully
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.

    ...
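
    If the caller also needs the planned order number back (as in the original post), the results of each task have to be fetched with RECEIVE RESULTS inside the callback routine given with PERFORMING ... ON END OF TASK. A minimal sketch, assuming Z_NEW_RFC also exports the BAPI's RETURN and PLANNEDORDER under illustrative parameter names:

    FORM set_function1_done USING p_task TYPE clike.

      DATA: ls_return       TYPE bapireturn1,
            lv_plannedorder TYPE bapi_pldord-pldord_num.

      " Fetch the results of the finished task back into the calling program
      RECEIVE RESULTS FROM FUNCTION 'Z_NEW_RFC'
        IMPORTING
          ev_return       = ls_return
          ev_plannedorder = lv_plannedorder
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2.

      " Signal the WAIT UNTIL condition in the main program
      functioncall1 = done.

    ENDFORM.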

    Regards,


  • Former Member
    Apr 25 at 06:43 PM

    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.

    For the BAPI commit, use the WAIT option.
