Former Member

Error "-9400 AK Cachedirectory full"

Hello,

I'm writing back following an old thread from 2009 on this forum about MaxDB and the "AK Cachedirectory full" error. You can find the previous thread here:

The problem was never really resolved: we could more or less live with it and managed to reduce its frequency for a while, but it is now occurring again almost every day. We have fixed various points since 2009 and our system has changed quite a lot.

We use MaxDB 7.8.02 (BUILD 038-121-249-252) with the JDBC Driver sapdbc-7.6.09_000-000-010-635.jar. Note that we don't use MaxDB in an SAP environment, as we have our own business application.

Following some very helpful feedback from Lars Breddemann, we fixed various points in our system: for example, result sets were not always properly closed; this is now done immediately after the query has been executed and the result rows have been read. We also follow the advice from Elke Zietlow to always close a connection and its associated prepared statements when the error occurs. This helps in most cases, but sometimes even closing the connection and its prepared statements does not help, and the problem "escalates" until we have to restart the database to fix it.
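
For reference, the reading pattern now looks roughly like this (a simplified sketch; the NAME column is a placeholder, and the PreparedStatement itself stays open because it may be held in our statement cache, described below):

    import java.sql.*;
    import java.util.*;

    // Simplified sketch: the ResultSet is closed immediately after the
    // rows have been read. The PreparedStatement stays open because our
    // application caches it for reuse.
    public final class QueryExample {
        static List<String> readNames(PreparedStatement ps) throws SQLException {
            List<String> names = new ArrayList<String>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString(1)); // NAME column (placeholder)
                }
            } // rs is closed here, right after reading
            return names;
        }
    }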

Back to the discussion in 2009: I used the two statements given by Lars to monitor the catalog cache usage. When I run them repeatedly, I can see that all result sets are properly closed, as I only see the ones currently in use, and they disappear afterwards.

One important point is that our Java application keeps many prepared statements open in a cache, to have them ready to be reused. We can have up to 10'000 prepared statements open, with up to 100 JDBC connections. Actually, the "AK Cachedirectory full" problem sometimes happens very soon after we restart our system and the database, so at that point the number of open prepared statements can be very low, which seems to indicate that the number of open prepared statements is not necessarily linked to the problem.
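
To illustrate, the cache works roughly like this (a minimal sketch only, assuming an LRU map keyed by SQL text; the class and capacity handling are illustrative, not our actual code):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal sketch of a per-connection statement cache. Statements are
    // kept open for reuse; the least recently used one is closed when the
    // cache exceeds its capacity.
    public final class StatementCache {
        private final Connection con;
        private final LinkedHashMap<String, PreparedStatement> cache;

        public StatementCache(Connection con, final int capacity) {
            this.con = con;
            this.cache = new LinkedHashMap<String, PreparedStatement>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, PreparedStatement> eldest) {
                    if (size() > capacity) {
                        try { eldest.getValue().close(); } catch (SQLException ignored) {}
                        return true; // evict the least recently used statement
                    }
                    return false;
                }
            };
        }

        public PreparedStatement get(String sql) throws SQLException {
            PreparedStatement ps = cache.get(sql);
            if (ps == null) {
                ps = con.prepareStatement(sql);
                cache.put(sql, ps);
            }
            return ps;
        }
    }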

Also in the 2009 discussion, Lars mentioned that we use prepared statements of type TYPE_SCROLL_INSENSITIVE and asked whether we could not use TYPE_FORWARD_ONLY instead. Would this really make a difference? We need TYPE_SCROLL_INSENSITIVE in many cases because we use iterators to scroll up and down the result sets, so switching to TYPE_FORWARD_ONLY would require changing quite a lot of code. I also saw in the MaxDB driver code that using TYPE_SCROLL_INSENSITIVE appends the string "FOR REUSE" to the SQL statement; what does that mean exactly?
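
For clarity, the difference is only in how the statement is prepared; these are the standard JDBC overloads (the SQL text below is a placeholder):

    import java.sql.*;

    public final class ResultSetTypeExample {
        static void prepareBoth(Connection con) throws SQLException {
            // Scrollable, change-insensitive result set: what we use today,
            // so our iterators can move both forward and backward.
            PreparedStatement scrollable = con.prepareStatement(
                    "SELECT * FROM PERSON",              // placeholder SQL
                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                    ResultSet.CONCUR_READ_ONLY);

            // Forward-only result set: the alternative Lars suggested;
            // rows can then only be read once, from first to last.
            PreparedStatement forwardOnly = con.prepareStatement(
                    "SELECT * FROM PERSON",
                    ResultSet.TYPE_FORWARD_ONLY,
                    ResultSet.CONCUR_READ_ONLY);

            scrollable.close();
            forwardOnly.close();
        }
    }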

Any help to fix this problem would be greatly appreciated.

Christophe


2 Answers

  • Former Member
    Nov 28, 2014 at 12:33 PM

    Hi again,

    More useful information: we had a few more DB crashes today, and they all seemed to be related to one specific table where we could insert data, but trying to update any row failed with an "AK Cachedirectory full" error. In an attempt to find a solution, we temporarily disabled one "update" DB trigger on that table, and this immediately solved the problem. We re-activated the trigger to see whether this was just a coincidence, and the problem immediately reappeared (although this time we got an "[-9206]: System error: AK Duplicate catalog information" error, which quickly led to a "Restart required" error). Deactivating the trigger again solved the problem. Note that this trigger usually works without any problem, so it is also unclear why it suddenly led to this issue.

    Actually, we also noticed in the past that adding or removing a constraint on that particular table could suddenly lead to "AK" errors. Adding or dropping columns, or adding/removing foreign keys, also sometimes seemed to make that table "unstable".

    Maybe this is a silly question, but is there a way to check whether the internal "structure" of a table is somehow corrupted? For example, we also get a strange error: when a constraint is violated, the exception usually reports the wrong constraint, off by one from the real one (I don't remember whether it reports the previous or the next constraint instead of the right one).

    Any idea or hint about a possible cause?

    Thanks,

    Christophe


    • Former Member

      Hello Thorsten,

      Yes, this was an UPDATE on the table PERSON. Our application does SQL UPDATEs in a "brute-force" manner, in the sense that we re-set all the fields: UPDATE PERSON SET FIELD1=VALUE1, FIELD2=VALUE2, and so on, even if just one field has changed.
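
      In JDBC terms it looks roughly like this (a sketch; ID, FIELD1 and FIELD2 stand in for the real columns):

          import java.sql.*;

          // Sketch of the "brute-force" update: every column is rewritten,
          // whether or not it has changed. Column names are placeholders.
          public final class BruteForceUpdate {
              static void updatePerson(Connection con, long id, String v1, String v2)
                      throws SQLException {
                  String sql = "UPDATE PERSON SET FIELD1 = ?, FIELD2 = ? WHERE ID = ?";
                  try (PreparedStatement ps = con.prepareStatement(sql)) {
                      ps.setString(1, v1);
                      ps.setString(2, v2);
                      ps.setLong(3, id);
                      ps.executeUpdate();
                  }
              }
          }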

      I also remind you that we did not manage to reproduce the -9111 errors while the constraints on the tables PERSON and ACCOUNTTRANSACTION were absent; once they had been added, the -9111 errors could be generated very easily. I also told you in a previous post that we no longer get AK errors on our 7.8 DB: this has actually been the case since we removed all the DB constraints from these two tables, PERSON and ACCOUNTTRANSACTION. It looks like the problem with constraints generated "AK" errors in 7.8 and -9111 move errors in 7.9. Could it be that the previous bug fix (for the AK errors) introduced a side effect that generates these -9111 errors?

      Best regards,

      Christophe

  • Thorsten Zielke, Mar 18, 2015 at 11:21 AM

    Hello Christophe,

    This case is more complicated than expected. We were not able to recreate the bug here (even using your exported catalog schema), and the kernel trace did not give us enough information.

    So the only option I see is to build another kernel with enhanced trace output for you. It should work the same way as before, but write more information into the trace file.

    Of course, it would also help us to have a copy of all your data, but I assume that is something you would rather not do...

    Regards,
    Thorsten


    • Hi Simon,

      the planned release date is in about 4 to 6 weeks - I do not even have a PTS bug tracking ID yet. I just wanted to let you know that we have located the bug as soon as I got the news...

      Regards,
      Thorsten