
Table Change Log Deletion (DBTABLOG) Job Running for 3 Days

Former Member

Dear all,

My DBTABLOG table is 220 GB; nobody bothered to check it previously. Now I am doing the cleanup activity: I scheduled the report RSTBPDEL as a background job to delete the table change logs, keeping one year of data.

It has now been running continuously, into its third day.

Why is it taking this much time?

Any help or suggestions to improve the speed of the process will be <removed_by_moderator> suitably.

Read the "Rules of Engagement"

Thanks

Regards

Kasi

Edited by: Juan Reyes on Apr 16, 2009 8:50 AM

Accepted Solutions (0)

Answers (1)


Former Member

Hi

Check this, it may help you: [Link]

Regards

Uday

Former Member

Sorry Juan Reyes... Won't happen again...

Former Member

Usually the problem is that either the select or the delete statement specifies a maximum number of rows, something like WHERE ROWNUM <= 1000 (1000 is just an example). This guarantees that you don't exhaust your rollback space, by keeping each transaction at 1000 rows max.
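In rough SQL, one delete packet then looks like this (just a sketch: the bind name mirrors the :A0 seen further below, and the 1000-row cap is illustrative, the real packet size is set by the report):

DELETE FROM "DBTABLOG"
WHERE "LOGDATE" <= :A0 AND ROWNUM <= 1000;
COMMIT;
-- one commit per packet keeps rollback/undo usage bounded;
-- the report repeats this until no rows older than the cutoff remain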

Now what happens? The first statement fetches 1000 rows from the beginning of the table, in most cases by a full table scan. So the beginning of the table is emptied more and more, and it takes longer to find the next 1000 rows with every statement, because the empty blocks at the beginning are scanned every time.

So what to do? Well, my explanation is just one possibility among others, but a very likely one. I just checked how the report works. On my system this select is issued:

SELECT /*+ FIRST_ROWS (2000) */ "LOGDATE", "LOGTIME", "LOGID"
FROM "DBTABLOG"
WHERE "LOGDATE" <= :A0 AND ROWNUM <= :A1

So packets of 2000 rows are made, but the access is not a full table scan; it is an index range scan on DBTABLOG~0. It looks like the packet size cannot be increased, so I suggest you verify that your system is not doing a full table scan, and reorganise the index online with BR*Tools.
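You can check the access path yourself, for example in SQL*Plus; EXPLAIN PLAN does not execute the statement, so the bind variables can stay undefined (a sketch, using Oracle's standard DBMS_XPLAN package):

EXPLAIN PLAN FOR
SELECT /*+ FIRST_ROWS (2000) */ "LOGDATE", "LOGTIME", "LOGID"
FROM "DBTABLOG" WHERE "LOGDATE" <= :A0 AND ROWNUM <= :A1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- good: INDEX RANGE SCAN on "DBTABLOG~0"
-- bad:  TABLE ACCESS FULL on DBTABLOG (the slow pattern described above)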

By the way, you should check which tables are actually logged in DBTABLOG, because only low-traffic customizing tables should have logging enabled. I have seen cases where Z-tables accidentally had the logging flag set, causing millions of rows in DBTABLOG.
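A quick way to see which tables dominate the log (TABNAME in DBTABLOG holds the name of the logged table; note this aggregation is itself a full scan of the 220 GB table, so run it at a quiet time):

SELECT "TABNAME", COUNT(*) AS LOG_ROWS
FROM "DBTABLOG"
GROUP BY "TABNAME"
ORDER BY LOG_ROWS DESC;

And to list Z-tables whose active technical settings have the logging flag set (DD09L holds the technical settings, PROTOKOLL is the logging flag, AS4LOCAL = 'A' selects the active version):

SELECT "TABNAME"
FROM "DD09L"
WHERE "PROTOKOLL" = 'X' AND "AS4LOCAL" = 'A' AND "TABNAME" LIKE 'Z%';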

Best regards, Michael