
ABAP Z Table Lock in Update task monopolizes lock

former_member221827
Active Participant

On sales order save I'm calling an update task to update a Z table. I'm looping over some standard SAP records, enqueueing the Z table with the WAIT parameter on each iteration, deleting and adding records, and finally dequeueing.

What I'm finding is that when multiple update tasks are running, the first update task will lock and unlock the table over and over until it has processed all of its records before the second update task can get the lock. This effectively makes the second update task wait until the first is done, whereas I would expect them to share the lock as the first task releases it.

Could someone point me in the right direction on what to check or change to allow this to happen? I had initially tried refining the lock parameters, but continued to run into deadlocks from some MODIFY/DELETE WHERE clauses unless I locked the entire table.

Thanks!

-Chris

10 REPLIES

Jelena
Active Contributor
0 Kudos

I can only guess this might be related to the DB commit. I'm not sure I understand the multiple-lock situation, though. The standard SO update transaction (VA02) locks the whole sales order by document #. If I were updating Z tables for that document # I'd also lock the same way. Why do you have / need multiple locks?
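For illustration, a minimal sketch of locking by document number, assuming a hypothetical lock object EZORDER on the Z table keyed by the sales document number (so ENQUEUE_EZORDER / DEQUEUE_EZORDER would be generated):

" Hypothetical sketch: lock only the rows belonging to one sales document,
" the same way VA02 locks the order by document number.
CALL FUNCTION 'ENQUEUE_EZORDER'
  EXPORTING
    vbeln          = lv_vbeln        " document # being processed (hypothetical variable)
    _wait          = 'X'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc = 0.
  " delete/insert only the rows for lv_vbeln here
  CALL FUNCTION 'DEQUEUE_EZORDER'
    EXPORTING
      vbeln = lv_vbeln.
ENDIF.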

I'd suggest adding more details and a code example.

0 Kudos

I had initially attempted to lock by document number, but in testing it looked like my delete statements with where conditions (although they were only deleting on the locked document #) were causing deadlocks in that case.

former_member220028
Active Contributor

What do you mean by sharing the locks? If it is already locked and you want to lock it again, you will normally get sy-subrc 4.

If you are calling functions in the update task, it may matter whether you have different application servers. If you have multiple tasks running in parallel, it can take time for the dequeue to propagate to the other application server, so the other application server may still think the lock is held when the program reaches that point in the code.

As I read it, the normal behavior is that the 2nd task waits until the lock is released (when using the WAIT parameter at enqueue).

regards

Stefan

former_member186746
Active Contributor

Hi,

Check the SAP help on LUW for info on how database updates work

https://help.sap.com/saphelp_nw73ehp1/helpdata/en/41/7af4bfa79e11d1950f0000e82de14a/frameset.htm
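For context, a minimal sketch of the update-task pattern that help page describes, with a hypothetical update function ZUPDATE_ORDER (the function is only registered here and runs in the update work process once COMMIT WORK is issued):

" Hypothetical sketch: register the update module; it does not run yet.
CALL FUNCTION 'ZUPDATE_ORDER' IN UPDATE TASK
  EXPORTING
    iv_vbeln = lv_vbeln.             " hypothetical parameter/variable

" COMMIT WORK ends the SAP LUW and triggers the V1 update,
" where the registered module performs its database changes.
COMMIT WORK.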

Kind regards, Rob Dielemans

0 Kudos

I will certainly read up on this, thank you.

srikanthnalluri
Active Participant
0 Kudos

Are you locking the table at row level, or locking the entire table?

pokrakam
Active Contributor

As I understand it, you're expecting a kind of lock queue, with all enqueue requests granted in call order.

This is not how it works; see the documentation for the WAIT parameter: https://help.sap.com/viewer/ec1c9c8191b74de98feb94001a95dd76/7.5.12/en-US/cf21eebf446011d189700000e8...

If a lock attempt fails because there is a competing lock, the system repeats the lock attempt after a certain time. The exception FOREIGN_LOCK is triggered only if a certain time limit has elapsed since the first lock attempt. The waiting time and the time limit are defined by profile parameters.

So your unlock-lock cycle is too fast for another program to retry in the brief moment it's unlocked.

You can validate this behavior with a simple program:

report zlock_test.

" acquire the demo lock, hold it for 10 seconds, then release it
call function 'ENQUEUE_EDEMOFLHT'
  exporting _wait = abap_true
  exceptions foreign_lock = 1 others = 2.
wait up to 10 seconds.
call function 'DEQUEUE_EDEMOFLHT'.

" immediately try to re-acquire the same lock
call function 'ENQUEUE_EDEMOFLHT'
  exceptions foreign_lock = 1 others = 2.
if sy-subrc = 0.
  write: 'Got second lock'.
endif.

Run in two sessions starting within <10 seconds of each other and you will see that the first run gets both locks.

My suggestion would be to collect your updates and process them in a single go. Or lock by a job-specific key (e.g. company code or whatever).
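A rough sketch of the "single go" idea, assuming the lock object and table from the pseudocode further down the thread, with a hypothetical internal table lt_orders holding the new rows and the field name ORDER_NUMBER assumed from the enqueue parameters:

" Sketch: lock once, process all collected orders, unlock once.
CALL FUNCTION 'ENQUEUE_EZNO_SLSINC_ORD'
  EXPORTING
    _wait          = 'X'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  RETURN.
ENDIF.

" lt_orders: hypothetical internal table of new ZNO_SLSINC_ORDER rows
LOOP AT lt_orders INTO DATA(ls_order).
  DELETE FROM zno_slsinc_order WHERE order_number = ls_order-order_number.
  INSERT zno_slsinc_order FROM ls_order.
ENDLOOP.

CALL FUNCTION 'DEQUEUE_EZNO_SLSINC_ORD'.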

0 Kudos

This sounds like it could likely be the case. I will look into the suggestions you have provided. Much appreciated.

As to the job-specific key: any thoughts as to why I would have been getting a deadlock when trying to delete with a WHERE clause that contains the key I locked on? I.e. session 1 has order #1 locked, session 2 has order #2 locked and then does a DELETE FROM ztable WHERE order = 2 AND key = blah1 AND key = blah2... etc.?

The delete would never complete and would eventually short dump with a deadlock.

former_member221827
Active Participant
0 Kudos

I'll add some pseudo code. Mind you, this is oversimplified.

When running this as a test, I'm calling the update task for 40 orders, so 40 times in a test report, then starting the report again from another session for different orders, with progress updates showing me how many orders have been processed. The first report will process all 40 before the second one begins to process, whereas I would expect the second report to grab the lock in between and process semi-concurrently. Hopefully this helps to clarify some of the questions above.

When running in the debugger, the second session will grab the lock but will hang on the DELETE statement as though the table were still locked by the first session, even after the first session has run the dequeue.

"this function is called in a loop over a dataset in an update task started in sales order save

FUNCTION zupdate.
  " Try up to 5 times to get the lock on the Z table
  DO 5 TIMES.
    CALL FUNCTION 'ENQUEUE_EZNO_SLSINC_ORD'
      EXPORTING
        mode_zno_slsinc_order = 'E'             " Lock mode for table ZNO_SLSINC_ORDER
        mandt                 = sy-mandt        " 01th enqueue argument
        order_number          = '0000000001'    " 08th enqueue argument
        _wait                 = 'X'
*       _collect              = ' '             " Initially only collect lock
      EXCEPTIONS
        foreign_lock          = 1
        system_failure        = 2
        OTHERS                = 3.
    IF sy-subrc = 0.
      EXIT.
    ENDIF.
  ENDDO.
  IF sy-subrc <> 0.
    EXIT.   " could not get the lock, leave the function
  ENDIF.

  DELETE FROM zno_slsinc_order WHERE "conditions ....
  " Do processing to get new order data
  INSERT zno_slsinc_order FROM wa_zno_slsinc_order.

  CALL FUNCTION 'DEQUEUE_EZNO_SLSINC_ORD'
    EXPORTING
      mode_zno_slsinc_order = 'E'             " Lock mode for table ZNO_SLSINC_ORDER
      mandt                 = sy-mandt        " 01th enqueue argument
      order_number          = '0000000001'.   " 08th enqueue argument

ENDFUNCTION.
  

former_member221827
Active Participant
0 Kudos

Entire table. I'd like to lock by order #, but I was running into deadlocks on delete statements with WHERE conditions (I was including the locked order # as part of the WHERE clause, so I'm not certain why it deadlocked on that).
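For reference, a rough sketch of the difference using the enqueue FM from the pseudocode above (parameter names as in that post; lv_vbeln is a hypothetical variable):

" Omitting ORDER_NUMBER creates a generic lock covering every row of the table ...
CALL FUNCTION 'ENQUEUE_EZNO_SLSINC_ORD'
  EXPORTING
    _wait        = 'X'
  EXCEPTIONS
    foreign_lock = 1
    OTHERS       = 2.

" ... while passing the document number locks only that order's rows.
CALL FUNCTION 'ENQUEUE_EZNO_SLSINC_ORD'
  EXPORTING
    order_number = lv_vbeln
    _wait        = 'X'
  EXCEPTIONS
    foreign_lock = 1
    OTHERS       = 2.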