
Checking the size of EVERY MaxDB table

Former Member

Good morning,

we have an SAP NetWeaver 04s installation based on MaxDB 7.6.10.

Over the last month we have been experiencing very strong DB growth (about 10 GB a day).

XI message deletion is working fine, both for the Adapter Framework and the Integration Engine, but there must be something we're missing.

Checking table sizes and growth (for both the ABAP and the Java schemas) from inside transaction DB50 gives a total size of about 70 GB (including indexes), but the reported DB space usage is far beyond this: 205 GB.

The fastest-growing tables show a combined growth of about 500 MB/day but, as said, the DB grows by 10 GB.

Now we are trying to identify where in the DB structure (system tables?) those extra 130 GB are contained, so that we may get a clue about where to act to bring the DB back down to a more reasonable size.

The problem is that so far we have not found a way to check the size of EVERY object contained in the DB, either from the SAP interfaces (SAPGUI/NWA/RWB) or via SQL Studio.

This problem is quickly becoming very urgent, because the maximum space allocable to the DB without adding more hardware is about 300 GB, and at this growth rate we'll soon be stuck.

Thank you in advance for your kind help.

Carlo Ferrari

Accepted Solutions (1)

markus_doehr2
Active Contributor

What you see by default in DB50 is only the size of the ABAP part; the Java part is not included there.

You can try the following:

- DB59

- Add your J2EE schema (SAP<SID>DB)

- execute program RSADAT6M

Markus

Former Member

Thank you Markus, you are right - I did mention the wrong transaction.

What we actually used was DB59, and the sizes reported there were 55 GB for the Java schema and 15 GB for the ABAP schema.

We've also already run the report RSADAT6M, but checking the ADATABSIZE table leads to the same results.

I'd also like to mention that we had a "check data structure" scheduled once a week (at least until last week, when the DB became too big and the check started taking too long for a production environment), and we regularly refresh all statistics once a day.

Also - if it may be useful - we're running on top of Linux SLES 9.

Carlo

Edited by: Carlo Ferrari on Jun 1, 2008 4:09 PM

Former Member

Hello,

you can check the size of tables in SQL Studio with the statements:

select * from sysinfo.files order by treeleavessize,lobsize,treeindexsize

//

select tables.* from domain.tables,sysinfo.files where domain.tables.tableid=sysinfo.files.fileid
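To get the tables listed directly sorted by size, the two statements can also be combined. This is just a sketch, assuming the tableid/fileid join shown above works as-is on your 7.6 release:

select t.owner, t.tablename,
       f.treeleavessize + f.treeindexsize + f.lobsize as total_kb
from domain.tables t, sysinfo.files f
where t.tableid = f.fileid
order by total_kb desc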

In Database Studio there is a Tables Editor. You can open it with the context menu of the Tables folder.

In the editor you can sort the tables by size.

Best Regards

Wolfgang

lbreddemann
Active Contributor

Hi Carlo,

Usually the system tables don't require much space in the database.

Thus it's more likely that one of your application tables occupies the space or that there is a huge demand for temporary space.

Also, one possible reason for a huge space demand is the use of snapshots. Do you use snapshots on that database?

Please post a readable version of the output of these statements:


select * from sysinfo.datastatistics

select * from superdba.userstatistics
(attention: this one runs long!)

Once you know the overall stats, you can dig deeper:


select r.owner, r.tablename, r.indexname, f.type,
       (f.treeleavessize + f.treeindexsize + f.lobsize) as KB
from roots r, files f
where r.root = f.root
order by KB desc

This statement needs to be run once for each schema user, as the system tables only provide access to catalog information for tables on which the user has at least SELECT privileges.
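If you prefer one row per table with all of its indexes included, here is a sketch of an aggregated variant (same roots/files views as above, and the same privilege restrictions apply):

select r.owner, r.tablename,
       sum(f.treeleavessize + f.treeindexsize + f.lobsize) as total_kb
from roots r, files f
where r.root = f.root
group by r.owner, r.tablename
order by total_kb desc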

If you can't figure it out yourself, you may also open a support message to get help directly on your system.

KR Lars

Answers (1)

Former Member

Good morning, I have an update:

yesterday, as the production environment was mostly unused (it was a national holiday in Italy), I tried the query suggested by Lars:

select * from superdba.userstatistics

I submitted it via SQL Studio at about 4 PM. After 7 hours the query was still running...

As I was connected from home via VPN, I decided to close SQL Studio and shut down my PC.

Immediately after closing SQL Studio, a SAP GUI session that I had open on the desktop terminated with an error saying the connection to the DB had been lost.

A quick check showed that after closing SQL Studio the DB had gone OFFLINE.

I immediately put it ONLINE again and everything seemed to be working fine.

Then I finally went to sleep, leaving the DB at a size of 210 GB.

This morning at my workplace I was prepared to deal with the problem again but, to my great surprise, the DB size had decreased to 160 GB (and it is still freeing space right now)...

My diagnosis is that there must have been some kind of stuck lock on some tables that prevented the deletion of data, and that restarting the DB cleared those locks and flushed the backlog.

Is this actually a plausible explanation?

Warmest regards,

Carlo

lbreddemann
Active Contributor

Hi Carlo,

I cannot agree with your reasoning.

If there had been any kind of lock preventing the deletion of data, you would have experienced hanging sessions waiting for that lock.

To me it seems more likely that there was a huge amount of temporary data that got released with the restart.
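If you want to keep an eye on the temporary space, a quick sketch (the column names are my assumption, taken from the labels that view reports):

select usedtemporarysize, maxusedtemporarysize from sysinfo.datastatistics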

Could you post the result of the first query?

Please also post the KNLDIAG from the restart.

KR Lars

Former Member

Here's the result of the query select * from sysinfo.datastatistics

(it was run just now, while the DB is still freeing space)

Usable size: 331611888
Used size: 140170224
Used size percentage: 3
DATA full: NO
Used size on volume: 140170224
Used size not on volume: 71368
Used permanent size: 139817024
Used temporary size: 1488
Max used size: 210330008
Max used permanent size: 210330008
Max used temporary size: 846152
Incremental Backup size: 3319840
Last emergency history cleanup: 2008-06-03 17:00:15.000000
Converter version: 28357
Max temporary data page number: 42162816
Max permanent data page number: 42162816
Max permanent static data page number: 42162816
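(Assuming these values are in KB, that is about 134 GB used out of 316 GB usable - roughly 42% - with a peak of about 200 GB, which matches the 210 GB I saw before going to sleep.)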

and here are the latest lines from the knldiag.err:

2008-06-02 22:56:29 0 ERR 12005 DBCRASH Kernel exited with core and exit status 0x8b

2008-06-02 22:56:29 0 ERR 12012 DBCRASH No stack backtrace since signal handler was suppressed by SUPPRESS_CORE=NO

2008-06-02 22:56:29 0 ERR 12009 DBCRASH Kernel exited due to signal 11(SIGSEGV)

2008-06-02 22:56:34 ___ Stopping GMT 2008-06-02 20:56:34 7.6.00 Build 018-123-119-055

2008-06-02 22:58:06 --- Starting GMT 2008-06-02 20:58:06 7.6.00 Build 018-123-119-055

About the knldiag: is there a way to attach it as a file? I think it's simply too big to be pasted (830K)..

Thank you!

Carlo

Edited by: Carlo Ferrari on Jun 3, 2008 5:34 PM

lbreddemann
Active Contributor

Hi Carlo,

hmmm... all I can say from this data is that the space had been occupied by permanent data.

So we cannot tell what happened before the restart of the DB. It cannot currently be doing any kind of rollback or the like, since it's already ONLINE again. The only thing I can imagine here is that e.g. some large indexes had been dropped or tables had been truncated, and the garbage collector (for whatever reason) did not remove the allocated pages then - but does now...

Well - the best thing here would have been to open a support message so that we could have checked directly on the database before the restart.

Perhaps next time.

BTW: You're running a very old MaxDB patch level. Make sure you install the current patch, MaxDB 7.6.04 Build 11 (or above), rather soon.

KR Lars

Former Member

Thank you anyway, Lars.

Actually, last Friday we opened a support request with high priority but, as we still hadn't received an answer by Monday, I decided to post it on this forum as well... you know, being a Linux guy, I'm more inclined to trust the help of the "community".

Anyway, yesterday we closed the support request as the problem was solved.

Now my idea is: we have a backup of the database from when it was 200 GB, and I'm planning to restore it into another MaxDB instance on another server (without any SAP software on it), so that I can at least try to understand where the data was contained.
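For the record, roughly what I have in mind for the restore - a sketch only, where the instance name, credentials and backup path are placeholders and the exact dbmcli steps depend on the MaxDB version and on how the target instance was set up:

dbmcli -d XIP -u control,secret
db_admin
medium_put restmedium /backup/XIP_complete_data FILE DATA
recover_start restmedium DATA
db_online

That way the size queries from this thread can be run against the copy without touching production.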

Thank you so much again,

Carlo