
Select and update with lock exclusive

Hello Everybody

In our application we have a table autonum that provides the unique keys for all other tables.

In autonum we have a column SUCHBEGRIFF CHAR(40), the unique key, and a column WERT FIXED(10,0).

SUCHBEGRIFF has values like Rechnungsnr, Auftragsnr, Bestellnr, ...

Example:

Before inserting a record into table rechnungen we do:

SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE

l_rechnrneu = wert + 1

UPDATE autonum SET wert = ?l_rechnrneu WHERE suchbegriff = 'Rechnungsnr'

COMMIT

(then l_rechnrneu is used in the INSERT INTO rechnungen statement)

This technique works for all 250 tables in our application.

We have about 400 concurrent users working with MaxDB 7.6 via ODBC 7.6.

No problems in two years!

Now we start some background jobs from an XP workstation.

We have scheduled 5 jobs, all starting every 5 minutes at the same time (same user, same ODBC connection).

Each job inserts records into a table joblogs and therefore needs a unique JOBLOGNR from autonum.

Now we run into problems 2 or 3 times a day.

Sometimes the background jobs are running (waiting?) without inserting a record into joblogs (a deadlock?).

And worse:

Sometimes the insert into joblogs fails with "duplicate key"?

We don't know where to begin. A MaxDB problem? A workstation problem?

Any help is welcome.

Best regards

Albert



1 Answer

  • Nov 07, 2008 at 11:45 AM

    > Before inserting a record into table rechnungen we do:

    > SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE

    > l_rechnrneu = wert + 1

    > UPDATE autonum SET wert = ?l_rechnrneu WHERE suchbegriff = 'Rechnungsnr'

    > commit

    > (then l_rechnrneu is used for an insert into rechnungen command)

    Why do you do that?

    This is a built-in guarantee that your workload will not be able to scale.

    > This technique works for all 250 tables in our application.

    > We have about 400 concurrent users working with MaxDB 7.6 via ODBC 7.6.

    > No problems since 2 years!

    OK, there are 400 users logged on at the same time. But they are not doing the same inserts at the same time - otherwise you would have faced problems much earlier.

    > Now we start some background jobs from an XP workstation.

    > We have scheduled 5 jobs, all starting every 5 minutes at the same time (same user, same ODBC connection).

    > Each job inserts records into a table joblogs and therefore needs a unique JOBLOGNR from autonum.

    Well, that's what I meant - a little increase in the workload and this design shows its limitations.

    > Now we run into problems 2 or 3 times a day.

    > Sometimes the background jobs are running (waiting?) without inserting a record into joblogs (a deadlock?).

    Do you see any -60 errors? If not, there are no deadlocks.

    You see, a 'deadlock' is just one possibility - but not the only one - where it may look like the database is no longer doing anything.

    But fortunately, MaxDB recognizes deadlocks and cancels one of the affected sessions to resolve it.

    > And worse:

    > Sometimes the insert into joblogs fails with "duplicate key"?

    > We don't know where to begin. A MaxDB problem? A workstation problem?

    Coding bug in the application.

    Perhaps one of your inserts had to be rolled back, but your application did not handle this correctly and fetched the same ID twice.

    Without the option to see the DB and the code, it will be quite hard to tell what is going wrong here.

    Anyhow, about your 'homebrew number provider' - why don't you use sequences instead?

    Where is the benefit for you here? With sequences you get unique numbers without locking - every time, and very fast.
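    For example - just a sketch with a made-up sequence name, to be checked against the MaxDB SQL reference:

    ```sql
    -- 'seq_joblognr' is a hypothetical name for illustration
    CREATE SEQUENCE seq_joblognr START WITH 1 INCREMENT BY 1

    -- each job fetches its next unique number without taking any row lock
    SELECT seq_joblognr.NEXTVAL FROM DUAL
    ```

    The fetched value can then be used directly for the INSERT INTO joblogs statement; no SELECT ... WITH LOCK EXCLUSIVE, no UPDATE, no lock waits.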

    regards,

    Lars


    • When you use isolation level 0, your SELECT ... WITH LOCK EXCLUSIVE does not protect the data from being read by another session.

      It's easy to try it out.

      Open two SQL Studio sessions, set "Autocommit: OFF" and the isolation level to "NOT COMMITTED".

      Now perform your SELECT .. WITH LOCK EXCLUSIVE.

      If you see the data in your session A, try to select it in session B - you'll get the row back.

      Only when session B also uses the same SELECT ... WITH LOCK EXCLUSIVE will you get a lock-wait situation.
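      Written out, the experiment looks like this (autocommit off and isolation level NOT COMMITTED in both sessions; table and search term as in this thread):

      ```sql
      -- Session A: take the exclusive lock and leave the transaction open
      SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE

      -- Session B: a plain read still returns the row, despite A's lock
      SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr'

      -- Session B: only the same locking read waits until A commits or rolls back
      SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE
      ```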

      > The idea of using a database procedure instead of our function for this was "performance"!?

      > Users have to wait for previous users accessing the same suchbegriff.

      The users don't wait for the "translation" of the query (which is usually called 'parsing').

      What they wait for is the release of the lock. That's what I was talking about when I wrote that this "homebrew sequence" is a non-scalable solution.

      > Until now we fire 3 commands via odbc (select with lock,update,commit) which the database has to "translate" first.

      > Wouldn't a database procedure be faster ?

      The statements are rather small and don't change a lot - no, the parsed versions should be found in the shared SQL cache. And even if that is not the case and a full parse is necessary, then all the relevant catalog data is likely to be found in the catalog cache.

      I don't see any sign here that parsing is an issue performance-wise for you.

      regards,

      Lars