
How big must your transaction log be?

Hi,

I would like to hear some other people's opinions about the size of the transaction log.

Our general rule: we set the transaction log to a size (on a separate device, of course) such that the system could run for 10-12 hours without completely filling it up. We configure thresholds so that a transaction dump is taken periodically, and a scheduled transaction dump runs every hour just to be sure.
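For context, the hourly scheduled dump amounts to something like the following (the database name and dump path are placeholders, not our actual ones):

```sql
-- Hourly scheduled log dump (run via the ASE job scheduler or cron + isql);
-- after dumping, the inactive portion of the log is truncated and reusable.
dump transaction my_db to "/dumps/my_db.tran"
go
```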

But over the last couple of years, as our ASE database has grown and procedures have gotten bigger and bigger, it has become impossible to execute an unlogged operation on a very popular table. Every object (stored procedure, trigger, function) is recompiled when executed, generating a lot of transaction log in a very short time. Since we need to perform a 'dump database' that takes about 30 minutes, the 2 GB transaction log is not enough (our DB size is 110 GB, with about 240 MB of transaction log generated every hour).

So if I double our transaction log to 4 GB, is there a downside to doing this? (Space on the disk is not an issue.)

I thought I read some time ago that the 'recovery time' would increase, but I'm not sure what that means exactly.

Give me your thoughts please...

Greetz

Pieter



1 Answer

  • Best Answer
    Posted on Aug 23, 2016 at 03:50 PM

    Generally speaking:

    - the size of the transaction log has no effect on recovery time

    - the volume of records in the transaction log has no effect on recovery time

    - what does affect recovery time is the volume of records in the transaction log that were not checkpointed (i.e., the associated data pages were not flushed to disk) prior to the last time the dataserver was shut down

    Net result is that increasing the size of the log (on its own device(s)) shouldn't affect your recovery time assuming log records are being checkpointed in a timely manner (to include ensuring no excessively long running/open txns that would preclude a timely checkpointing of log records).
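    As a side note, how aggressively ASE checkpoints (and therefore how much un-checkpointed log can accumulate) is governed by the 'recovery interval in minutes' configuration parameter; a sketch of checking and tightening it (the value shown is illustrative, not a recommendation for your system):

    ```sql
    -- Show the current setting (target crash-recovery time per database, in minutes).
    sp_configure "recovery interval in minutes"
    go

    -- Lower it to checkpoint more often, trading a little run-time I/O
    -- for a shorter recovery window after a crash.
    sp_configure "recovery interval in minutes", 5
    go
    ```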

    -----

    I'm not sure I understand your comment about procedural objects (eg, procs, triggers, functions) being recompiled and an associated decrease in the amount of time it takes to fill the log ... ?

    While I can think of a few run-time reasons why these objects may be undergoing high volumes of recompilations, that would require a bit more analysis of the objects and their queries.

    As for the 240MB/hr of log activity ... that may be ok ... or not ok ... it depends on the volume of (expected) write activity and the associated query plans. If 240MB/hr sounds high for your environment, you may want to spend some time analyzing query plans; in particular, look for plans that are performing deferred (or deferred_varcol) updates; it's not uncommon for deferred updates to require upwards of 4x-10x more log space than a comparable direct update (assuming a direct update is possible with a change in SQL coding or table schema).
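    One quick way to check a suspect statement is to turn on showplan and look at the update mode line in the output (the table and column names below are hypothetical):

    ```sql
    set showplan on
    go

    -- In the showplan output, "Update mode is deferred" (or deferred_varcol)
    -- marks the expensive case; "Update mode is direct" is the cheap one.
    update orders set status = 'X' where order_id = 42
    go

    set showplan off
    go
    ```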

    -----

    As for the log filling up ... have you determined that more frequent log dumps don't help free up space?

    Since 'dump transaction' is relatively cheap to perform, I tend to a) have regularly scheduled 'dump transaction's running every 10-15 minutes and b) set multiple thresholds on the log at something like 60%/50%/40%/25%/15%/10% free pages (plus the LCT). A bit excessive? Perhaps, but the thresholds print messages to the errorlog that are picked up by my errorlog monitor, which then generates alerts of various severities based on the % free/full.
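    As a rough sketch, layered thresholds like those described above are created with sp_addthreshold; the page counts below assume a 2 GB log with 2 KB pages (~1,048,576 log pages), and both the database name and the alerting procedure name are illustrative:

    ```sql
    -- Free-page thresholds on the log segment; each fires the named proc,
    -- which here just prints to the errorlog for an external monitor to pick up.
    sp_addthreshold my_db, logsegment, 629146, sp_log_alert   -- ~60% free
    go
    sp_addthreshold my_db, logsegment, 262144, sp_log_alert   -- ~25% free
    go
    sp_addthreshold my_db, logsegment, 104858, sp_log_alert   -- ~10% free
    go
    ```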


    • Hi Mark,

      Thanks for the response.

      A bit of clarification from my side is needed. I don't have an issue with the amount of transactions every hour; it's very normal, so no optimization is needed here. I didn't know about the 'direct/deferred' update distinction, so thank you for that new information 😉

      We also use thresholds, so no problem there. We dump the transaction log when it hits the 10% threshold.

      About the recompilation part: this is another topic I created once (I don't think you participated in that one):

      But my main concern was the impact of increasing the transaction log from 2 GB to 4 GB. As I can see from your answer, the effect (for us) is none, so there is no reason for me not to increase it.
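      For reference, growing the log would be an alter database on the log device (the device name here is a placeholder, and the size is in MB per the classic alter database syntax; check your ASE version for unit specifiers):

      ```sql
      -- Add 2 GB to the existing log segment on device 'log_dev1'.
      alter database my_db log on log_dev1 = 2048
      go
      ```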

      Greetz

      Pieter
