
ASE has run out of LOCKS


I have the following problem: some tables contain more than 200 million entries each. During a client copy I excluded these tables in order to export/import them via R3trans; with Oracle this is the fastest way to finish the client copy and also to move the tables quickly.

So I exported the tables with the option to delete the existing entries before the new entries are imported. On Sybase, however, this always ends with the error message "run out of LOCKS".

I shut down SAP to give ASE all the memory the hardware has, but to hold over 200 million locks I would need far more memory than the hardware provides. The same happens if I try to delete the table entries with "delete from": it runs out of locks.

So can somebody tell me what the difference to Oracle is, and why I need that many locks to delete that number of entries? Truncate would probably be the best way, or dropping and recreating the table via SE14, but unfortunately I have to delete the entries for one specific client only!

Thanks for any help!

Best regards,



2 Answers

  • Posted on Oct 18, 2016 at 01:34 PM

    One option would be to use the LOCK TABLE command to ensure the delete uses just a single table lock:

    begin tran
    lock table mytable in exclusive mode
    delete mytable where ...
    commit tran
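
    If you want to verify that the session really holds the exclusive table lock before firing the delete, sp_lock can be checked from a second session (a quick sanity check; "mytable" is of course a placeholder for your actual table name):

    sp_lock
    go
    -- look for a lock of type Ex_table on the object id of mytable;
    -- object_name(<id>) translates the id shown by sp_lock back to the table name

    If the lock table statement cannot obtain the exclusive lock because other sessions hold locks, it will block (and may time out, see below), so it is worth knowing who else is on the table.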


  • Posted on Oct 18, 2016 at 02:37 PM

    ASE will start by accumulating individual page-level (or row-level) locks.

    Once the DML statement reaches a threshold number of locks (200 by default) ASE will attempt to escalate to a single table-level lock.

    It sounds like your delete is unable to escalate to a single table-level lock so it continues to accumulate individual page-/row-level locks ... until it runs out of locks.

    Lock escalation can fail if a) your lock escalation threshold is greater than 200 million, or b) other processes are holding their own locks on part of the table (e.g. shared locks, exclusive locks, etc.). I'm guessing you're running into scenario b: some other process is holding locks on the table, so your delete is unable to escalate to a table lock.
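
    The relevant settings can be inspected with sp_configure ("number of locks" and the lock promotion HWM parameters are the standard ASE configuration names; the value in the last statement is just an illustrative placeholder and needs enough memory behind it):

    -- total size of the server-wide lock pool
    sp_configure "number of locks"
    go
    -- escalation thresholds for page- and row-locked tables
    sp_configure "page lock promotion HWM"
    go
    sp_configure "row lock promotion HWM"
    go
    -- raising the lock pool, e.g. to one million locks:
    sp_configure "number of locks", 1000000
    go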

    You have a few options to ensure your delete doesn't run out of locks:

    1 - make sure you have an exclusive table lock on the table prior to running your delete (see Bret's reply for details); keep in mind that if the 'lock table' command can't obtain an exclusive lock it may time out after a while

    2 - make sure there are no other processes holding locks on the table while running your delete

    3 - delete rows in batches such that the total number of locks never reaches your number of locks configuration value

    4 - if you're looking to delete *ALL* rows from the table then consider using the truncate table command (instead of using a DELETE statement)
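
    Option 3 can be sketched like this in ASE, using set rowcount to cap each delete at a batch size safely below your lock limits. Table name, client column, and batch size are placeholders; committing after each batch releases the accumulated locks:

    -- limit every DML statement in this session to 100,000 rows
    set rowcount 100000
    go
    declare @rows int
    select @rows = 1
    while @rows > 0
    begin
        begin tran
        -- delete one batch for the specific client
        delete mytable where mandt = '123'
        -- @@rowcount is 0 once no rows are left, which ends the loop
        select @rows = @@rowcount
        commit tran
    end
    go
    -- restore normal behaviour
    set rowcount 0
    go

    Each iteration only ever holds the locks for one batch, so the delete never approaches the "number of locks" limit regardless of table size.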

