
BODS job failing due to database table contention


Hi Team,

BODS jobs initially failed with the error below:

----------------------------------------------------------------------

error message for operation <SQLExecute>: <[Microsoft][SQL Server Native Client 10.0][SQL Server]Transaction (Process ID 100) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.>.

----------------------------------------------------------------------

The job then ran successfully some time later.

Please let me know how to avoid these deadlocks for BODS jobs.

Thanks

K.Chandrakanth

Accepted Solutions (1)

Former Member

Try creating some form of job locks: a table or set of tables that get written to when one job starts, so that if another job tries to kick off at the same time, it fails. However, this may not be contention with yourself. We have seen this most often with DB2 databases, when we are trying to write to a table while another team's batch job is trying to use it as well. In that case, you may need some higher authority or Job Control Language to avoid conflicts with the other team.

Job locks work well to avoid contention with yourself, though. 


Hi Joshua,

Thanks for the reply. Can you please elaborate on how to implement job locks?

Regards

K.Chandra

Former Member

No problem!

The simplified version of what we do is to maintain a table on the database that has a unique ID for each run of the job, a start time, and an end time. Then we have a second table, which contains the active 'job locks'. These are identified by the unique ID of the running job and the type of lock acquired. We use one lock 'type' per database that we source from or target (for us, that is three).
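As a rough illustration only (the table and column names here are made up, and this assumes a SQL Server database like the one in the original error), the two tables could look something like this:

----------------------------------------------------------------------
CREATE TABLE JOB_RUN (
    RUN_ID     INT IDENTITY(1,1) PRIMARY KEY, -- unique ID for each run
    JOB_NAME   VARCHAR(128) NOT NULL,
    START_TIME DATETIME     NOT NULL,
    END_TIME   DATETIME     NULL              -- stays NULL while the job runs
);

CREATE TABLE JOB_LOCK (
    RUN_ID    INT         NOT NULL REFERENCES JOB_RUN (RUN_ID),
    LOCK_TYPE VARCHAR(32) NOT NULL,           -- one type per source/target DB
    PRIMARY KEY (RUN_ID, LOCK_TYPE)
);
----------------------------------------------------------------------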

The job itself needs an initialization script that writes to the run table that it is starting, and writes to the lock table that it is acquiring locks. But before it does that, it must check the tables to make sure no job is currently running (no null end time anywhere on the run table) and that no job locks exist from any other job (no entries on the lock table). A rough sketch of that check follows below.
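A minimal T-SQL sketch of that initialization logic, using the hypothetical tables above. Note that a real version should run the check and the inserts inside one transaction (or serialize them with something like sp_getapplock) so that two jobs cannot both pass the check at the same moment:

----------------------------------------------------------------------
-- Abort if any run is still open or any lock is held by another job.
IF EXISTS (SELECT 1 FROM JOB_RUN WHERE END_TIME IS NULL)
   OR EXISTS (SELECT 1 FROM JOB_LOCK)
BEGIN
    RAISERROR('Another job is running or holds a lock; aborting.', 16, 1);
    RETURN;
END;

-- Otherwise register this run and acquire its locks.
DECLARE @run_id INT;
INSERT INTO JOB_RUN (JOB_NAME, START_TIME)
VALUES ('MY_BODS_JOB', GETDATE());
SET @run_id = SCOPE_IDENTITY();

INSERT INTO JOB_LOCK (RUN_ID, LOCK_TYPE)
VALUES (@run_id, 'SOURCE_DB'), (@run_id, 'TARGET_DB');
----------------------------------------------------------------------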

As for the specific implementation, that is up to you. Our approach requires a second 'restart' version of every job, which swaps in a different initialization script: it finds the job lock left behind by the failed run and 'acquires' it for itself. If you do not do this and a job ever fails, you will need to run a database script that clears the lock table and removes the open entry on the run table before the job can be re-run; something like the cleanup sketch below.
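If you go without the restart variant, that manual cleanup could be as simple as this (again using the hypothetical tables above):

----------------------------------------------------------------------
-- Release the locks and remove the open run entry left by a failed job.
DECLARE @stale_run INT;
SELECT @stale_run = RUN_ID
FROM   JOB_RUN
WHERE  JOB_NAME = 'MY_BODS_JOB' AND END_TIME IS NULL;

DELETE FROM JOB_LOCK WHERE RUN_ID = @stale_run;
DELETE FROM JOB_RUN  WHERE RUN_ID = @stale_run;
----------------------------------------------------------------------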

It sounds complicated and takes some time to set up, but the reduction in concurrency issues and data corruption should more than make up for it.

Answers (1)

former_member208402
Active Contributor

Hi Chandrakanth,

Usually this error occurs when two or more jobs run in parallel and share a common target table. The deadlock error comes when two separate jobs try to insert data into the same table at the same time.

Please check if this is the cause. It may come from two parallel jobs, or from parallel workflows/dataflows (WFs/DFs) within one job. A query like the one below can help confirm it.
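One way to check this on SQL Server (just a sketch; substitute your actual target table name for the placeholder) is to watch the lock DMV while both jobs run:

----------------------------------------------------------------------
-- Sessions holding or waiting on locks against the suspected target table.
SELECT tl.request_session_id,
       tl.request_mode,
       tl.request_status   -- 'WAIT' rows are the blocked side
FROM   sys.dm_tran_locks AS tl
WHERE  tl.resource_type = 'OBJECT'
  AND  tl.resource_associated_entity_id = OBJECT_ID('dbo.MY_TARGET_TABLE');
----------------------------------------------------------------------

If two sessions from different jobs show up against the same table, separating their schedules or serializing them with job locks (as described in the accepted solution) should remove the deadlock.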

Thanks,

Ravi kiran.