ASE has run out of LOCKS

Hi,

The following problem: I have some tables with more than 200 million entries. During a client copy I have excluded these tables in order to export/import them via R3trans; with Oracle this is the best way to finish the client copy quickly and to move these tables faster as well.

So I exported the tables with the option to delete the existing entries first, before the new entries are imported. But this brings Sybase every time to the error message "run out of LOCKS".

I shut down SAP to give ASE all the memory the hardware has, but to hold over 200 million locks I would need far more memory than the hardware offers. The same happens if I try to delete the table entries with "delete from": it runs out of locks.

So can somebody tell me what the difference to Oracle is, and why I need that many locks to delete that number of entries? Truncate would probably be the best way, or dropping and recreating the table via SE14, but unfortunately I have to delete the entries for one specific client only!

Thanks for any help!

Best regards,

Tobias

2 Answers

  • Oct 18, 2016 at 01:34 PM

    One option would be to use the LOCK TABLE command to ensure you use just a single table lock.

    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc36272.1600/doc/html/san1393051041181.html

    begin tran
    lock table mytable in exclusive mode    -- take one exclusive table-level lock up front
    delete mytable where ...                -- the delete then needs no further page/row locks
    commit tran
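
    Note that if another session already holds a conflicting lock, lock table will block (or time out, depending on your lock wait settings). The reference page linked above also documents a wait clause, e.g. lock table mytable in exclusive mode wait 300, or nowait to fail immediately instead of hanging; check the exact syntax for your ASE version.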

  • Oct 18, 2016 at 02:37 PM

    ASE will start by accumulating individual page-level (or row-level) locks.

    Once the DML statement reaches a threshold number of locks (200 by default) ASE will attempt to escalate to a single table-level lock.

    It sounds like your delete is unable to escalate to a single table-level lock so it continues to accumulate individual page-/row-level locks ... until it runs out of locks.

    Lock escalation failure can happen if a) your lock escalation threshold is > 200 million, or b) other processes are holding their own locks on part of the table (e.g., shared locks, exclusive locks). I'm guessing you're running into scenario b: some other process is holding locks on the table, so your delete is unable to escalate to a table lock.
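
    If you want to check which scenario you're in first, here's a minimal diagnostic sketch (sp_lock and sp_configure are standard ASE procedures; the lock-promotion parameter names below are as documented for ASE 15.x/16.0, so verify them against your release):

    sp_lock                                  -- who currently holds locks, and on which objects
    go
    sp_configure "number of locks"           -- the server-wide lock budget
    go
    sp_configure "page lock promotion HWM"   -- escalation high-water marks (default 200)
    go
    sp_configure "row lock promotion HWM"
    go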

    You have a few options to ensure your delete doesn't run out of locks:

    1 - make sure you have an exclusive table lock on the table before running your delete (see Bret's reply for details; keep in mind that if 'lock table' can't obtain an exclusive lock it may time out after a while)

    2 - make sure there are no other processes holding locks on the table while running your delete

    3 - delete rows in batches so that the total number of locks never reaches your 'number of locks' configuration value (see the sketch after this list)

    4 - if you're looking to delete *ALL* rows from the table then consider using the truncate table command (instead of using a DELETE statement)
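
    For option 3, a minimal sketch of a batched delete follows; the table name mytable and the client predicate mandt = '100' are placeholders, so substitute your own table and client:

    set rowcount 100000                      -- cap each delete at 100,000 rows
    go
    declare @rows int
    select @rows = 1
    while @rows > 0
    begin
        delete mytable where mandt = '100'   -- placeholder: your table and client
        select @rows = @@rowcount            -- rows removed by this batch
    end
    go
    set rowcount 0                           -- restore the default (no limit)
    go

    In unchained mode each delete commits as its own transaction, so locks are released after every batch and the total in use at any moment stays well under the 'number of locks' limit.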
