ASA9 terminating abnormally with message about a "null" table!?


Hi,


Since this morning, one of our ASA9 databases has been terminating abnormally.


Below is a portion of the console log from right before the database terminates abnormally.


E. 06/01 11:17:56. *** ERROR *** Assertion failed: 102203 (9.0.2.3924)

E. 06/01 11:17:56. Row count (-2147365485) in table ((null)) is incorrect

I. 06/01 11:17:56. *** ERROR *** Assertion failed: 102203 (9.0.2.3924)

I. 06/01 11:17:56. Row count (-2147365485) in table ((null)) is incorrect

I. 06/01 11:17:56.

I. 06/01 11:17:56. Attempting to save dump file at 'C:\Users\ADMINI~1\AppData\Local\Temp\1\sa_dump.dmp'

I. 06/01 11:17:56. Connection terminated abnormally

I. 06/01 11:17:56. Dump file saved



Out of desperation we brought the server back up using "dblog -t" to start a new transaction log (I initially thought the transaction log had become corrupt).
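For reference, the command was essentially the following (file names here are placeholders, not our real ones); dblog is run against the database file while the server is stopped:

    dblog -t newlog.log mydb.db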


The database worked for about an hour, and then we got the same error (the console log extract above is actually from the 2nd time the database went down).


Any ideas what this error means, and what I could do?


Thanks,

Edgard


Hi Edgard,

In my experience, an assertion failure often indicates corruption of the .db file; the .db file tends to be more exposed to corruption than the .log file. To validate the .db file, use the dbvalid tool, ideally while no one else is using the database (or against a file-system-level copy).
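A minimal invocation looks something like this (credentials and path are placeholders):

    dbvalid -c "UID=DBA;PWD=xxx;DBF=C:\copies\mydb.db"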

If there is corruption, the general recommendation is to restore the database from a clean full backup plus the backed-up and/or active transaction log(s), just as if you had lost the .db file.
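In outline, the restore looks something like this (all names and paths are placeholders):

    rem copy the clean full backup into place
    copy E:\backup\mydb.db C:\db\mydb.db
    rem apply the backed-up log(s) in sequence, then the current log
    dbeng9 C:\db\mydb.db -a E:\backup\mydb.log
    dbeng9 C:\db\mydb.db -a C:\db\mydb.log
    rem then start the server normally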

If you don't have a clean backup, there used to be a white paper (I hope there still is, but it came from a ....sybase.com URL, so I don't know whether it is still available, or where) advising how to salvage as much as possible from the broken .db. It basically amounts to unloading and reloading the database (without ordering by PK). If you're lucky and no table pages are affected, you may get by without losing data. Otherwise, exclude the affected table(s) from dbunload and try to export their data separately, in sort order, using the OUTPUT statement; use a combination of ascending and descending queries to get the clean rows on either side of the broken ones, as sketched below.
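In dbisql, the per-table salvage could look roughly like this (table, key, and file names are made up for illustration):

    -- read forward until the first broken page aborts the scan
    SELECT * FROM damaged_table ORDER BY pk ASC;
    OUTPUT TO 'c:\salvage\damaged_asc.dat' FORMAT ASCII;

    -- then read backward to collect the clean rows beyond the damage
    SELECT * FROM damaged_table ORDER BY pk DESC;
    OUTPUT TO 'c:\salvage\damaged_desc.dat' FORMAT ASCII;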

This is why you should combine dbvalid with every full backup of the database, one way or the other. That way you immediately know whether the full backup you are about to create, or have just created, is clean. Otherwise a corrupt page may go undetected for months, making it hard or even impossible to restore the database.
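For example, a nightly job could pair the two like this (paths and credentials are placeholders; the reason for validating a copy rather than the backup itself comes up further down):

    rem full image backup of the .db and .log files
    dbbackup -c "UID=DBA;PWD=xxx" -y E:\backup
    rem validate a throwaway copy, never the backup itself
    copy E:\backup\mydb.db E:\check\mydb.db
    dbvalid -c "UID=DBA;PWD=xxx;DBF=E:\check\mydb.db"
    del E:\check\mydb.db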

There are more detailed directions out on the net. HTH for the moment regardless.

Volker


Hi Volker,

Thanks for helping me out.

I'm restoring from backup right now. I have a full backup from May 31st at 5 am and a log backup from today at 3 am, and I'm currently applying this morning's backup log (dbeng -a). The backup log is 1.95 GB; it has been over an hour and it is still applying changes. Does that sound right? It is working, but it's strange that it is taking so long...


Edgard

PS.

I found the article: 1959030 - How To Salvage Data When There are Corrupt Pages in the Database.

However, it seems to be available only to SAP ERP customers: it is on another site, and my password doesn't work there. We are SAP POS customers, so I'm going to contact our provider to see if they can get me our SAP customer number to register on that site.


Hi Edgard,

If you have reason to believe that the backup from May 31st was clean, that sounds good.

Did you run dbvalid against a copy of the backed-up database first? By the way, never do this with your original database backup (I know that sounds weird); always use a copy of the backup that you can discard afterwards. A database copy that dbvalid has been run against can no longer have transaction logs applied to it with the -a option.

Is the 1.95 GB .log from a single day? Then it may take some time, because this looks like a very busy database. If you run the database on a network server, make sure you use the same executable for the -a run.

I don't remember precisely which defaults were active in v9. If it already uses automatic cache sizing, you should be fine on that front. If you use a non-default number of server threads in normal operation, it may be helpful or even necessary to adjust that for the -a run as well; those settings, I'm quite sure, were not dynamic in v9.
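If you did change the thread count, the -a run would take the same switch, something like this (values and paths are placeholders):

    rem same engine binary and thread count as production
    dbeng9 -gn 40 C:\db\mydb.db -a E:\backup\mydb.log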

I don't currently have access to the SAP support site, so I can't verify whether the note you mention is the right one. With SAP, many documents that used to be available publicly (or after a free registration) in the Sybase days are now behind a barrier that requires a support contract. The same is true for the SQL Anywhere software patches. But if you have any SAP support ID, you should be able to access them (no v9 patches, of course).

Good luck & good success with your restoration.

HTH

Volker


Hi again,

Sadly, I just found out that my backup database is also corrupt. It looks like it's dbunload time...

I've learned that it is definitely important to run dbvalid against the backup. Thanks for the tip about not using the actual backup database.

This particular database is almost 300 GB in size. That complicates everything, because everything takes a long time to do...

Looks like I'm sleeping (not sleeping) at the office today.

Hopefully dbunload works...

Thanks,

Edgard


Hi Edgard,

Sad news. If it's not too late for this hint: you may at least slightly reduce the reload time by pre-allocating the database. I'm quite sure the -dbs option of dbinit was not available in v9, but the ALTER DBSPACE SYSTEM ADD ... SQL statement definitely was. Enlarging the .db file up front helps reduce file-level fragmentation and generally makes the LOAD operation faster (by more than the extra time needed to run the statement).
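For example (the number is a placeholder; if I remember the v9 syntax right, the size is given in database pages, so with a 4 KB page size this adds roughly 100 GB):

    -- pre-grow the SYSTEM dbspace before reloading
    ALTER DBSPACE SYSTEM ADD 26214400;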

HTH

Volker


Thanks!

I just put your tip to use, as I'm rebuilding the database from scratch...

Edgard


So your dbunload completed without problems or lost data?

That would be the most encouraging news from this thread so far...

- Volker


Hi Volker,

We finished rebuilding today at 5 am. We had to do the process manually with UNLOAD/LOAD; for some other tables we used proxy tables, and for some smaller ones we used a program called "Cross-database Studio".
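For anyone finding this later, the proxy-table route looked roughly like this (server name, DSN, and table names are made up):

    -- on the new database, register the old database as a remote server
    CREATE SERVER old_db CLASS 'asaodbc' USING 'old_db_dsn';
    -- map a remote table into the new database and pull the rows across
    CREATE EXISTING TABLE old_orders AT 'old_db..DBA.orders';
    INSERT INTO orders SELECT * FROM old_orders;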

The system is operational! We still need to work on some subsystems, but the major core is working.

A good side effect of this problem is that we trimmed old data that wasn't necessary, and the database is now UNDER 100 GB. I was surprised at the change in size. Everything is easier with 100 GB compared to 300 GB.

Thanks for all your help Volker!

Edgard