Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Update to table fails in background because many jobs try to execute the same update command


Hello All,

I have a function module containing an UPDATE statement, and I tried to execute the same FM from two sessions by keeping a breakpoint at the UPDATE statement.

In the first session the record was updated in the table; in the second session sy-subrc = 4, since the same record already exists in the table.

For the second session the record should also be inserted into the table with the count increasing (we have a count field in the table as a key field).

I have taken two sessions as an example; in real time there are N sessions.

Could anyone suggest an effective solution? I don't want to miss any record.

Example:

The table looks like this (key fields):

Date

Time

Count

INSERT xyz FROM wa_xyz.

At the same time (12:00:00), different jobs try to insert records into XYZ. In this case WA_XYZ may sometimes contain the same key values, so one record will be inserted and the other rejected. But I don't want to lose any record, so there is one field in the table called serial.

There is a field called count; it increases when the same record is inserted again. When the time changes (to 12:00:01), the count starts over from zero.

Many Thanks,

Praneeth Ch.

14 REPLIES

GK817
Active Contributor

Hi,

One alternative would be to read the last record from the table and increase the counter accordingly.

I would also suggest exploring locking the table before updating. You can write the code to keep trying to update until the lock is released. It will surely work this way, and this is also the solution I recommend.

Regards

GK


Hello Gaurav,

If you take N sessions and keep a breakpoint at the INSERT, all sessions will try to insert the same record with the same count; only one will be inserted and the rest will be lost.

I have tried to use a write lock, but the record fails to update while the table is locked. Is there any other way of locking the table? I don't want to lose any record.

It is actually a function module, and 10-15 records per second will try to insert; we don't want to lose any record.

Regards,

Praneeth Ch.

GK817
Active Contributor

When the table is locked, you will get an error message while trying to ENQUEUE it. You can put a loop around the update and keep trying to ENQUEUE until the lock is released, then update. You can set the loop to try, say, 5 times to avoid an endless loop. Have you tried this approach? Are there still issues?
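As a rough sketch of this retry approach, using the generic table lock object E_TABLE (the table name ZXYZ, the retry count, and the wait time are assumptions for illustration):

```abap
" Sketch: retry acquiring a table lock before reading the last count
" and inserting. Table ZXYZ and the 5-attempt limit are assumptions.
DATA lv_locked TYPE abap_bool.

DO 5 TIMES.
  CALL FUNCTION 'ENQUEUE_E_TABLE'
    EXPORTING
      mode_rstable   = 'E'
      tabname        = 'ZXYZ'
    EXCEPTIONS
      foreign_lock   = 1
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc = 0.
    lv_locked = abap_true.
    EXIT.
  ENDIF.
  WAIT UP TO 1 SECONDS. " another process holds the lock; retry
ENDDO.

IF lv_locked = abap_true.
  " Read the last count for the current date/time, increase it,
  " INSERT the record, then release the lock for waiting processes.
  CALL FUNCTION 'DEQUEUE_E_TABLE'
    EXPORTING
      mode_rstable = 'E'
      tabname      = 'ZXYZ'.
ELSE.
  " Could not acquire the lock after 5 attempts: handle the error.
ENDIF.
```

Serializing the read-increment-insert sequence behind the lock is what prevents two processes from computing the same count.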

Sandra_Rossi
Active Contributor

The issue is that your primary key has only date, time and count columns. Add a column like "process number" whose value will be different for every parallel process, and you won't be subject to duplicate lines anymore.

srikanthnalluri
Active Participant

Maybe you can add another field to the primary key, something like a GUID, and use the class CL_SYSTEM_UUID to generate the unique key using date and time.


Hello S Nalluri,

I found this very useful, but I have one doubt.

We have two jobs executing in parallel on the same date at the same time. If this class generates the unique key using date and time, it might not work.

Please correct me.

BR,

Praneeth Ch.

praneeth.chintapalli IF_SYSTEM_UUID~CREATE_UUID_X16 creates a unique UUID without any input.

Praneeth Chintapalli Don't worry that won't be a problem. It will be very much unique.
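A minimal sketch of this suggestion (the table name ZXYZ and its GUID key field are assumptions; the UUID API itself is the standard CL_SYSTEM_UUID class):

```abap
" Sketch: give each record a 16-byte UUID key before the INSERT.
" Table ZXYZ and its GUID field are assumed for illustration.
DATA wa_xyz TYPE zxyz.

TRY.
    wa_xyz-guid = cl_system_uuid=>create_uuid_x16_static( ).
  CATCH cx_uuid_error.
    " Handle the (rare) case that no UUID could be generated.
ENDTRY.

" The UUID does not depend on date or time, so parallel jobs running
" in the same second still get distinct keys and the INSERT cannot clash.
INSERT zxyz FROM wa_xyz.
```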

former_member1716
Active Contributor
Praneeth Chintapalli The ideal solution is ensuring your key fields are always unique, irrespective of the table being updated from multiple ends. As suggested, add one more field, which can be an incremental serial number. You will then have four fields as the primary key combination, which will never clash with other updates since the key combinations are always unique.

Regards

Rashid_Javed
Contributor

Do you have experience with number range objects? This kind of issue can easily be avoided by using a number range object.

You can use transaction SNRO to create a number range object. All you need is a number-length domain like CHAR010, which the number range object requires. Next you define the intervals, and you are ready to go.

You can use the standard function module NUMBER_GET_NEXT to get the next number in the range; it always returns a unique number.

RJv
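A sketch of pulling a number this way (the object name ZXYZ_NR and interval '01' are assumptions; they would be created in SNRO first):

```abap
" Sketch: fetch the next unique number from a number range object.
" Object ZXYZ_NR and interval '01' are assumed for illustration.
DATA lv_number TYPE n LENGTH 10.

CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr             = '01'
    object                  = 'ZXYZ_NR'
  IMPORTING
    number                  = lv_number
  EXCEPTIONS
    interval_not_found      = 1
    number_range_not_intern = 2
    object_not_found        = 3
    quantity_is_0           = 4
    quantity_is_not_1       = 5
    interval_overflow       = 6
    buffer_overflow         = 7
    OTHERS                  = 8.
IF sy-subrc <> 0.
  " Handle the error, e.g. raise an exception or write a log entry.
ENDIF.
```

Because the number range server hands out each number only once, parallel jobs cannot receive the same value, which removes the duplicate-key problem entirely.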


Hello Rashid,

Based on time, the count is increased; if the date changes, the count starts over from zero.

In fact I have not worked with number ranges. Will it generate numbers from zero, one, two and so on? And when the time changes (12:00:00 to 12:00:01), the number range should start from zero again.

BR,

Praneeth Ch.

Hi Praneeth,

Number range objects are used by SAP to generate numbers for documents such as finance documents, purchase orders, sales orders, and many other document types.

Depending on how a number range interval is defined, you can specify the starting number, and it will provide numbers from that starting point onward (if the starting point is 1, it will provide numbers 2, 3, 4, ...).

Regarding your requirement about resetting the count: there are standard function modules available to reset a number range interval, for example NUMBER_RANGE_INTERVAL_INIT. You can include it in your main program, or develop a separate program executed as a background job at 12:00:01 that resets the number range interval.

RJv

FredericGirod
Active Contributor

There is a way to prevent a job from running several times at the same moment; it is used, for example, in IDoc reprocessing.

See programs RSBTONEJOB and RSBTONEJOB2.

gabmarian
Active Contributor

As others pointed out, the decent solution is to ensure that every record is assigned a unique key by design (by using a UUID, enhancing the existing key structure, etc.).

I don't like the idea of a time-dependent counter field: between the counter value determination and the insert operation, there is a chance that the same value has already been used up by a different process.

If you really can't change the table structure for some reason and are not dealing with a vast number of records, you can try "brute force", attempting the insert operation on each record until it succeeds:

" Prepare records to be inserted
...
 
LOOP AT records ASSIGNING FIELD-SYMBOL(<record>).
  
  DO.
    
    INSERT custom_table FROM record.
    
    IF sy-subrc = 0. 
      " Success
      EXIT. 
    ELSE.
      " Already exists, increase count
      <record>-count = <record>-count + 1.
    ENDIF.
    
    IF <record>-count > maximum_count.
      " Handle reaching maximum count (error handling & exit for example)
      ...
    ENDIF.
    
  ENDDO.
  
ENDLOOP.