on 09-03-2013 8:02 AM
Hi Experts,
I have a total of 31,02,541 records in cube 0IC_C03 in my BI production system. While I am compressing those records, transaction log files are generated in the LOG_DIR folder, and when log_dir reaches 6 GB the job aborts with a dump saying the transaction log is full.
Could you please advise how much I need to increase the log directory size?
Thanks,
SHAN
Hi Shan,
Could you paste here the output of the command
db2 get db cfg for <DB> | grep -i log
and tell us the size of log_dir and log_archive? If we can change some of the logging parameters, your issue can be solved.
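For reference, the lines of interest in that output look roughly like the following (the database name BWP and all values here are illustrative placeholders, not taken from Shan's system):

```shell
db2 get db cfg for BWP | grep -i log
# Illustrative excerpt of the output:
#  Log file size (4KB)                         (LOGFILSIZ) = 16384
#  Number of primary log files                (LOGPRIMARY) = 20
#  Number of secondary log files               (LOGSECOND) = 40
#  Path to log files                                       = /db2/BWP/log_dir/
#  First log archive method                 (LOGARCHMETH1) = OFF
```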
Thanks
Manas Behra
Hello Shan,
Regarding log space consumption for InfoCube compression, it is possible to make an estimation.
This needs some more information, such as the average row size of the fact tables, the size of the indexes, etc. In any case, a detailed analysis is needed; I suggest creating a CSS ticket for further help.
Thanks and kind regards
Maria
Hi Shan, I suspect the primary log is becoming full on your database. If the file system log_dir does not reach 100% when you compress the data, check the parameters LOGFILSIZ, LOGPRIMARY and LOGSECOND of your database.
On my BI server, I had to raise LOGFILSIZ to 196608 pages (768 MB per log file) for compression jobs, increasing the parameter in slices of 256 MB. You have to restart the database when changing these parameters.
For the size of the file system, you have to calculate LOGFILSIZ * LOGPRIMARY. As an example, my LOGPRIMARY = 60 and my LOGFILSIZ = 196608 (in 4 KB pages).
If you have LOGSECOND set, you have to put it in the calculation too, but it is zero by default.
The size of my file system comes to about 45 GB.
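The arithmetic above can be sketched as a quick shell calculation (the values below are the example figures from this post; substitute your own from `db2 get db cfg`):

```shell
# Example figures from this post; replace with your own db cfg values.
LOGFILSIZ=196608    # log file size, in 4 KB pages
LOGPRIMARY=60       # number of primary log files
LOGSECOND=0         # number of secondary log files (0 by default)

# Each log file occupies LOGFILSIZ * 4 KB on disk.
TOTAL_KB=$(( LOGFILSIZ * 4 * (LOGPRIMARY + LOGSECOND) ))
echo "log file system needs at least $(( TOTAL_KB / 1024 / 1024 )) GB"
```

With these figures the script prints 45 GB, which matches LOGPRIMARY = 60 files of 768 MB each.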
Let me know
Hi
Please check your LOGARCHMETH1 parameter. If it is not set, use the command below to set it:
db2 update db cfg for SID using LOGARCHMETH1 DISK:/db2/SID/log_archive
You can delete logs from log_archive after they have been backed up, if that file system is filling up.
Thanks
Sadiq
Hi Shan,
Please check whether the LOGARCHMETH1 parameter is set in your database.
This parameter should point to another partition, log_archive.
When this parameter is active, once log_dir reaches its threshold value the system will move log files from log_dir to log_archive.
You need to regularly take backups from log_archive and ensure that partition has enough space.
Hope this helps.
Regards,
Deepak Kori
Hi Deepak
This is a case of a single operation exceeding the transaction log size without an intermediate commit. In db2diag.log there is a "saturation point reached" error. It is not possible to archive those logs because they contain uncommitted transactions.
We had such an issue in our BI system while compressing cubes, and SAP asked us to multiply the transaction log size by 5 to 10!
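As a sketch, enlarging the transaction log along those lines could look like this (SID and the target values are placeholder assumptions, not a sizing recommendation; size them from your own workload):

```shell
# Hypothetical example: SID and all values are placeholders.
db2 update db cfg for SID using LOGFILSIZ 196608   # 196608 * 4 KB = 768 MB per log file
db2 update db cfg for SID using LOGPRIMARY 60
db2 update db cfg for SID using LOGSECOND 0

# LOGFILSIZ and LOGPRIMARY changes only take effect after a restart:
db2 deactivate db SID
db2 activate db SID
```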
Regards
Marcin