Former Member

Table Change log deletion(DBTABLOG) Job Running for 3 Days

Dear all,

My DBTABLOG table has grown to 220 GB; nobody bothered to check it previously. Now I am doing the cleanup: I scheduled report RSTBPDEL as a background job to delete the table change logs, keeping one year of data.

The job has now been running continuously for three days.

Why is it taking this long?

Any help or suggestions to improve the speed of the process would be appreciated.

Read the "Rules of Engagement"

Thanks

Regards

Kasi

Edited by: Juan Reyes on Apr 16, 2009 8:50 AM



1 Answer

  • Former Member
    Apr 16, 2009 at 04:48 AM

    Hi

    Check this, it may help you: [Link]

    Regards

    Uday


    • Former Member

      Usually the problem is that a row limit is specified in either the SELECT or the DELETE statement, something like WHERE ROWNUM <= 1000 (1000 is just an example). This guarantees that you don't exhaust your rollback space, by keeping each transaction at 1000 rows maximum.
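      To illustrate the pattern (not the actual RSTBPDEL code, just a minimal sketch in Oracle PL/SQL; the cutoff date and the 1000-row batch size are made up for the example):

      DECLARE
        v_deleted PLS_INTEGER;
      BEGIN
        LOOP
          DELETE FROM "DBTABLOG"
           WHERE "LOGDATE" <= '20080416'  -- illustrative cutoff: keep one year
             AND ROWNUM <= 1000;          -- cap each transaction at 1000 rows
          v_deleted := SQL%ROWCOUNT;
          COMMIT;                         -- release rollback/undo space per batch
          EXIT WHEN v_deleted = 0;        -- stop when nothing is left to delete
        END LOOP;
      END;
      /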

      Now what happens? The first statement fetches 1000 rows from the beginning of the table, in most cases by a full table scan. So the beginning of the table is emptied more and more, and finding the next 1000 rows takes longer with every statement, because the empty blocks at the beginning are scanned every time.

      So what to do? Well, my explanation is just one possibility, but a very likely one. I just checked how the report works. In my system this select is made:

      SELECT /*+ FIRST_ROWS (2000) */ "LOGDATE", "LOGTIME", "LOGID"
        FROM "DBTABLOG"
       WHERE "LOGDATE" <= :A0 AND ROWNUM <= :A1

      So packets of 2000 rows are made, but the access is not a full table scan: it is an index range scan on DBTABLOG~0. It looks like the packet size cannot be increased, so I suggest you verify that your system is not doing a full scan, and reorganise the index online with BR*Tools.
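      If you want to verify the access path yourself, one quick check (assuming Oracle and direct SQL access; the literals stand in for the bind variables :A0 and :A1):

      EXPLAIN PLAN FOR
      SELECT /*+ FIRST_ROWS (2000) */ "LOGDATE", "LOGTIME", "LOGID"
        FROM "DBTABLOG"
       WHERE "LOGDATE" <= '20080416' AND ROWNUM <= 2000;

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

      The plan should show an INDEX RANGE SCAN on "DBTABLOG~0"; a TABLE ACCESS FULL here would explain the long runtime.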

      By the way, you should check which tables are logged in DBTABLOG, because only low-traffic customizing tables should be logged. I have seen cases where Z-tables accidentally had the logging flag set, causing millions of rows in DBTABLOG.
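      A quick way to see which tables dominate the log (again assuming direct SQL access; TABNAME is the column of DBTABLOG holding the logged table's name):

      SELECT "TABNAME", COUNT(*) AS entries
        FROM "DBTABLOG"
       GROUP BY "TABNAME"
       ORDER BY entries DESC;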

      Best regards, Michael