
TR Log file size for a single record

former_member458281
Participant

Hi Experts,

I have a total of 3,102,541 records in cube 0IC_C03 in my BI production system. I am compressing those records, which generates TR log files in the LOG_DIR folder. When it (log_dir) reaches 6 GB, the job throws a dump saying the TR log backup is full.

Could you please advise how much I need to increase the log directory size?

Thanks,

SHAN

Accepted Solutions (0)

Answers (5)

Manas_Vincit
Active Participant

Hi Shan,

Could you paste here the output of the command

db2 get db cfg for <DB> | grep -i log

and tell us the sizes of log_dir and log_archive? If we can change some of the logging parameters, your issue can be solved.
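For reference, the relevant lines of that output look roughly like this (the values shown are purely illustrative, not a recommendation):

Log file size (4KB)                         (LOGFILSIZ) = 16380
Number of primary log files                (LOGPRIMARY) = 20
Number of secondary log files               (LOGSECOND) = 40
Path to log files                                       = /db2/<SID>/log_dir/NODE0000/
First log archive method                 (LOGARCHMETH1) = OFF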

Thanks

Manas Behra

maria_shen
Contributor

Hello Shan,

Regarding log space consumption for infocube compression, it is possible to make an estimation.

This needs some more information, such as the average row size of the fact tables and the size of the indexes. In any case, a detailed analysis is needed; I suggest creating a CSS ticket for further help.
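Just to sketch the kind of estimation involved (all numbers below are assumptions for illustration, and the table name follows the usual /BI0/F<cube> convention, so verify it in your system):

# Assumed: get the average row length of the F fact table
db2 "SELECT avgrowsize FROM syscat.tables WHERE tabname = '/BI0/F0IC_C03'"

# Back-of-envelope, assuming ~100 bytes per row:
#   3,102,541 rows x ~100 bytes ~= 300 MB of row images.
# Compression deletes from the F table and inserts into the E table,
# and index maintenance is logged as well, so a safety factor of 3-5
# on top of that would not be unusual.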

Thanks and kind regards

Maria

marc_saulnier
Participant

Hi Shan, I suspect the primary log is becoming full on your database. If the "log_dir" file system does not reach 100% when you compress the data, check the LOGFILSIZ, LOGPRIMARY and LOGSECOND parameters of your database.

On my BI server I had to grow LOGFILSIZ to 196608 pages (768 MB per log file) for compression work, but I raised these parameters in slices of 256 MB. You have to restart the database when changing these parameters.

For the size of the file system, you have to calculate LOGFILSIZ * LOGPRIMARY. As an example, my LOGPRIMARY=60 and my LOGFILSIZ=196608 (4 KB pages).

If you have LOGSECOND, you have to include it in the calculation too, but it is zero by default.

The size of my file system therefore works out to about 45 GB (60 * 768 MB).
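For reference, the update commands look like this (the values are the ones from my system, just a sketch; substitute your own SID and sizes):

db2 update db cfg for <SID> using LOGFILSIZ 196608
db2 update db cfg for <SID> using LOGPRIMARY 60
db2 update db cfg for <SID> using LOGSECOND 0
# LOGFILSIZ and LOGPRIMARY only take effect after the database
# has been deactivated and reactivated.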

Let me know

former_member182505
Contributor

Hi

Please check your LOGARCHMETH1 parameter. If it is not set, use the command below to set it:

db2 update db cfg for SID using LOGARCHMETH1 DISK:/db2/SID/log_archive

If log_archive keeps filling up, you can delete logs from it after they have been backed up.
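A minimal cleanup sketch, assuming the archived logs use the standard S0000nnn.LOG naming and a verified log backup already exists (check your retention policy first):

# Hypothetical: remove archived logs older than 7 days after a verified backup
find /db2/SID/log_archive -name "S*.LOG" -mtime +7 -exec rm {} \;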

Thanks

Sadiq

former_member188883
Active Contributor

Hi Shan,

Please check whether the LOGARCHMETH1 parameter is set in your database.

This parameter should point to another partition, log_archive.

When this parameter is active, the system will transfer log files from log_dir to log_archive as log_dir reaches its threshold value.

You need to actively take backups from log_archive and ensure this partition has enough space.
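To watch how full the active log actually gets during the compression run, something like this can help (db2pd ships with DB2; replace <SID> with your database name):

db2pd -db <SID> -logs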

Hope this helps.

Regards,

Deepak Kori

MarcinOzdzinski
Participant

Hi Deepak,

This is a case of exceeding the transaction log size with a single operation that has no commits in between. In db2diag there is a "saturation point reached" error. It is not possible to archive those logs because they contain uncommitted transactions.

We had such an issue in our BI system while compressing cubes, and SAP asked us to multiply the t-log by 5 to 10!
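To check for that message, something like this works (the db2diag.log path below assumes a standard SAP layout; adjust it to your DIAGPATH):

grep -i "saturation point" /db2/<SID>/db2dump/db2diag.log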

Regards

Marcin