
How to limit the usage of the transaction log

Dec 04, 2017 at 03:39 PM



Good afternoon,

I'm looking for any improvement that would reduce the usage of my transaction log during the execution of some batches that insert millions of rows into a table. My batch is composed of 6 independent child processes; each process works on a set of 2000 references, and each reference can insert 50 to more than 100 rows into the table.

Example of metrics

time ; # of chunks of 2000 refs ; # of refs ; # of rows inserted

15:24 ; 29 ; 58000 ; 3190000
15:25 ; 31 ; 62000 ; 3410000
15:26 ; 31 ; 62000 ; 3410000
15:27 ; 13 ; 26000 ; 1430000
15:28 ; 20 ; 40000 ; 2200000
15:29 ; 41 ; 82000 ; 4510000
15:30 ; 33 ; 66000 ; 3630000
15:31 ; 64 ; 128000 ; 7040000
15:32 ; 34 ; 68000 ; 3740000
15:33 ; 30 ; 60000 ; 3300000
15:34 ; 29 ; 58000 ; 3190000
15:35 ; 17 ; 34000 ; 1870000
15:36 ; 21 ; 42000 ; 2310000
15:37 ; 29 ; 58000 ; 3190000
15:38 ; 21 ; 42000 ; 2310000
15:39 ; 7 ; 14000 ; 770000
TOTAL = 49500000

Today the batch executes correctly, but my syslogs gets filled to 95%.

Since my batch is tested on a reduced scope, the final scope might generate more data and fill my syslogs completely.

Moreover, I'll probably have other sets of batches launched in parallel against the same dataserver and database; they can also fill my syslogs.

To prevent a full syslogs, my only option might be to increase the log size.

On the other hand, I'm running unit tests to see how to tune my dataserver to limit the filling of my syslogs.

From my understanding:

  • A spid works on data stored in its ULC.
  • When the modifications are finished, or when the ULC is full, the data is flushed to cache but also to syslogs.
  • Because we're talking about 'log' information, the flush to cache and disk is done serially. Then, every minute, checkpoint wakes up and checks whether the amount of modified data is greater than the 'recovery interval'. If so, it will try to write all dirty pages to disk.

Based on that, I configured my 4K-page dataserver as follows (a rough sketch of the corresponding commands is shown after this list):

  • create a logonly cache with a 4K pool and 32K pool
  • bind the syslogs of my database to the new cache
  • configure 'ULC' to 32k
  • configure sp_logiosize for my db to 32k
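
This is roughly what those steps look like as commands (only a sketch; the cache name, sizes and database name are placeholders, not my actual values):

-- create a log-only named cache and add a 32K buffer pool to it
exec sp_cacheconfig 'log_cache', '512M', 'logonly'
exec sp_poolconfig 'log_cache', '128M', '32K'
go
-- bind syslogs of the target database to the new cache
-- (binding syslogs normally requires the database to be in single-user mode)
exec sp_bindcache 'log_cache', 'MYDB', 'syslogs'
go
-- set the user log cache (ULC) size to 32K (the value is in bytes)
exec sp_configure 'user log cache size', 32768
go
-- set the log I/O size for the database to 32K
use MYDB
go
exec sp_logiosize '32'
go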

By doing so, I was able to limit the time it takes for the data to go from the ULC to syslogs (by increasing the ULC size, I reduced the number of ULC flushes caused by a full ULC).

Then I increased 'disk i/o structures' and 'i/o batch size', with the aim of submitting as much I/O as possible to the system in order to limit the checkpoint time.
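
Concretely, that was something along these lines (only a sketch; the values are examples rather than a recommendation):

exec sp_configure 'disk i/o structures', 4096
go
exec sp_configure 'i/o batch size', 512
go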

I don't know if I succeeded, but what I saw with ASETUNE is:

  • syslogs gets filled up to 95% before the batch finishes
  • I can see a lot of 'physical_io' for the HOUSEKEEPER process
  • the HK processes are doing I/Os, but not on the same scale as the HOUSEKEEPER process.

Having said that, ASETUNE showed me:

  • the devices attached to syslogs are stressed
  • my syslogs gets filled; I can see it fill up and then empty, but at some point it keeps getting filled before being emptied, right up to the end of my batch
  • the data devices are also stressed, and I can see that the main object is my target table.

But now I don't know how I should proceed to see if there is anything else I could do to improve my batch execution without increasing the syslogs.

What would be your suggestion to continue working on this topic?

Thanks for your feedback



2 Answers

Best Answer
Mark A Parsons Dec 04, 2017 at 04:11 PM

Sounds like you have a couple different issues ...

- amount of log activity; possible solutions ... increase log size or decrease volume of log activity

- filling up the log 'too fast'; possible solutions ... increase log size or truncate log more frequently

"Duh, Mark!" ?


No definitive 'solutions' at this point, just a few questions/ramblings ...

- how big is the log? [if only a couple 100 MBs then by all means consider bumping up to 1-2GB]

- are you replicating out of this database and if so, have you looked at SRS performance and tuning options?

- are you explicitly truncating the log (eg, 'dump tran ... to file or truncate only') and if so, can you truncate more frequently?

- if log truncations are taking too long, is there a performance issue with writing to disk (assuming 'dump tran' to disk)?

- are these batches **only** doing inserts, or are they also performing other operations (eg, updates/deletes) on the data?

- any chance these batches are rebuilding previous data sets (eg, EOD/EOW processing that rebuilds clients' holdings/positions/P&L)? [I've worked with some firms - usually banks - where they daily rebuild client data sets - to speed up intraday queries - where 95-99% of the data typically doesn't change from day to day; by replacing the rebuild-everything-from-scratch process with a replace-only-the-recent-changes process I've been able to reduce log activity from 3-5GB down to a few 100MBs; so, wondering if a rework/rethink of the current process could reduce the volume of log activity?]

- does the entire batch process need to be recoverable, ie, do you need to write the log to disk? [wondering if some portion of the activity could be performed in a separate 'staging' database where logs could be discarded/truncated more efficiently ... to include disabling HK on an associated cache, enabling 'delayed commit', etc]

- have you reviewed the query plans for one of the batches and if so, any sign of deferred operations? [deferred operations tend to generate an excessive volume of log records, so eliminating/reducing deferred operations can often times reduce the volume of log activity]

- do you have triggers or FK constraints on the table(s) in question and if so, would it be feasible to disable these options (triggers, FK constraints) during the batch processing window? [again, looking at ways to reduce log activity, though eliminating these options could speed up batch processing which in turn could fill the log even faster]


First of all, as you stated, I was able to reduce the 'pressure' on the syslogs by reducing the number of child processes launched in parallel to feed my table, but the downside is that my batch takes more time to execute.

As I'll have an ETA for producing my data, reducing the parallelism might not be an acceptable way to 'solve' a technical concern (see image table-usage-syslogs-metrics.png - please note the metrics were taken from the output of dbcc checktable(syslogs) executed every minute. Is that the right approach to get metrics about syslogs?).

During this run, I also executed sar on my dataserver to see if I reached any I/O bottlenecks (see image table-usage-sar-io.png). We can see a correlation between the time the syslogs is getting filled continuously and one filesystem being 100% busy:

- from 4:53 to 5:02
- then a small drop of syslogs usage before it gets filled again starting around 5:03-5:04

I thought the drop was due to a threshold action:
1> use DB
2> go
1> sp_helpthreshold
2> go
segment name    free pages   last chance?   threshold procedure
--------------  -----------  -------------  --------------------
logsegment      80840        1              sp_thresholdaction
(1 row affected, return status = 0)

CREATE PROCEDURE dbo.sp_thresholdaction
    @dbname varchar(30),
    @segmentname varchar(30)
AS
    -- abort log-suspended transactions when the threshold fires in tempdb (dbid 2)
    if @dbname = 'tempdb'
        select lct_admin('abort', 0, 2)
    -- free log space without dumping it to a device
    dump transaction @dbname with no_log
    print "Last-chance threshold reached! LOG DUMP: '%1!' for '%2!' dumped", @segmentname, @dbname

But looking at my dataserver log, I couldn't find any message around that time.

Now regarding your questions:

The database I'm "feeding" is a 200 GB database with a log segment of 9 GB.

This database/dataserver is not connected to any replicated system.

I'm not issuing any "dump tran" manually, but you're right, I can give it a try by setting a threshold that issues some dump tran.

My batch mainly inserts millions of rows into one table. Initially my batch is supposed to delete the rows from the previous execution, but in order to speed up my test case I directly truncate the table. As you suggested, my batch is part of an EOD process of a bank that feeds a table used later to generate a report. Unfortunately the code only does inserts of data and not updates, so there's no way to modify only the data that would change.

Your question about recoverability is interesting because I was considering some of the options you mentioned (disabling HK on an associated cache, enabling 'delayed commit', etc.), but I hadn't seen the impact they have on recoverability.

Regarding the plan, I can only see that the code is doing some 'bulk insert'; the showplan doesn't show anything beyond:

QUERY PLAN FOR STATEMENT 1 (at line 1).
The type of query is BULK INSERT.
Using I/O Size 4 Kbytes for data pages.

Now I'm wondering if there is any way to increase the usage of the large pool, as I'm inserting a lot of rows. Maybe a dedicated data cache for this database with a large pool would be more efficient?

Finally, there are no triggers on those tables.


If your primary concern is the volume of log activity, and seeing how you're performing relatively small batches of inserts (ie, small, short-lived txns), I'd probably just configure several thresholds on the logsegment (eg, 25/45/65/85/90/95% full); another option would be to configure the db with 'trunc log on chkpt' enabled (though I'd still keep the multiple thresholds in place, too).
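
For example, that setup might look something like this (just a sketch; the database name, the free-page values and the procedure name are placeholders you'd adjust to your own 9GB log):

-- enable automatic log truncation at checkpoint (run from master, then checkpoint the db)
use master
go
exec sp_dboption MYDB, 'trunc log on chkpt', true
go
use MYDB
go
checkpoint
go
-- add several free-space thresholds on the logsegment, each firing a procedure
-- (dump_tran_proc is a placeholder name) that truncates the log
exec sp_addthreshold MYDB, 'logsegment', 1000000, dump_tran_proc
exec sp_addthreshold MYDB, 'logsegment', 500000, dump_tran_proc
exec sp_addthreshold MYDB, 'logsegment', 250000, dump_tran_proc
go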


You've stated that you truncate the table before performing the inserts, and that you're doing 'mostly' inserts, soooo, have you considered dropping the indexes on the table, inserting the data, then (re)building the indexes (and updating stats)? [objective is to see if you can switch to a fast bcp ==> much less logging; possible downside is that a (re)build of a clustered index may cause log problems, too ... unless you can switch clustered index to non-clustered; another downside is that any intermediate processing against the table will likely perform poorly without indexes being in place]
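
A rough outline of that approach (the database, table and index names are placeholders, and this is only one possible shape of it, not a tested script):

-- allow minimally-logged operations during the load window (needed for fast bcp / select into)
use master
go
exec sp_dboption MYDB, 'select into/bulkcopy/pllsort', true
go
use MYDB
go
checkpoint
go
-- drop the index, load the data, then rebuild the index and refresh statistics
drop index target_table.ix_target_ref
go
-- ... run the batch inserts (or fast bcp in) here ...
create nonclustered index ix_target_ref on target_table (ref_id)
go
update statistics target_table
go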

The graph of log-activity/contention/rows-inserted shows a lot of variation for rows inserted, so wondering if there's any blocking going on ... Are you inserting to a (semantically) partitioned table and if so, are the processes inserting to their own partition? [would be curious as to what, if any, blocking may be going on; would probably help to review monProcessWaits for any bottlenecks; while reducing blocking would be good, the (possible) downside is that log usage could go up more quickly]
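
To look for waits/blocking on the insert spids, a query along these lines against the MDA tables might help (assumes the MDA tables are enabled; the spid values are placeholders):

select w.SPID, w.WaitEventID, i.Description, w.Waits, w.WaitTime
from master..monProcessWaits w, master..monWaitEventInfo i
where w.WaitEventID = i.WaitEventID
  and w.SPID in (101, 102, 103)   -- spids of the insert processes (placeholders)
order by w.WaitTime desc
go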

The sar data doesn't look good re: log device performance; I'd probably want to get the system/disk admins to review the device configurations (eg, disable FS journaling, don't use RAID-5, make sure directio/cio is enabled (as available), etc) to see if there's anything they can do to 'speed up' disk performance for the log devices.

I'd probably want to review sp_sysmon for any out-of-the-ordinary issues (ie, a high-level view of where to focus additional attention).
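
For instance, a single timed sample (the interval is just an example):

exec sp_sysmon '00:05:00'
go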


30 minutes to insert 50 million rows sounds a bit slow, though I'm sure we're not privy to all the details.

Since you've likely got some tight SLAs (don't all bank EOD processes ?!?!?), it might be interesting to see if any of the processing could be done before the process window opens, eg, pre-loading the data into a staging database that's been configured for fast bcp; then when your process window opens run a (relatively) fast insert/select (to non-indexed table).


re: recoverability and disabling a cache's HK, and enabling 'delayed commit' ...

The general idea was to use async log writes (delayed commit) at the risk of losing some recoverability (eg, system goes down before all log pages written to disk), in case processing is being slowed due to slow log device writes. [though this wouldn't have an effect on the volume of logging].

As for disabling a cache's HK ... probably not a helpful item for this case. [I was having a flashback to one EOD process that had a lot of tempdb-like activity in a staging database ... setting up a separate cache with HK disabled allowed us to eliminate some excessive disk thrashing; I recall we lost recoverability in that db but that was likely due to some other tweaks we had made to said db]


Hi Mark, Thanks for your valuable information.

Your idea of doing the insert in my table without the indices was an excellent one because my table doesn't contain any clustered index. I ran some test and the gain is good on my reduced test-case. I'll try to do a test on my full scope and see the improvement.

I haven't tested yet adding some thresholds to DUMP TRAN automatically.

There's a point I don't get: my database is already set with the option 'dump tran on checkpoint', so when the checkpoint flushes dirty pages every minute (I assume the number of dirty pages is greater than the 'recovery interval in minutes' parameter), my syslogs usage should be reduced, am I correct? Maybe the syslogs gets filled right after being flushed, so it looks like it didn't free any space.

I have a question: I'm using the command 'dbcc checktable(syslogs)' to see the number of free pages. Is it the best way to get the metrics?

Regarding the inserts, my table has no partitions. I only have several spids doing bulk inserts on it, but you're right again, I'll run a new test and check the wait events for those spids.

Now regarding the sar output and the potential slowness on the devices, you're probably right, but I'm not sure our system team has the expertise to diagnose such an issue. As of now all I can say is that the Sybase devices are defined on 3 zpools. Nevertheless I did an interesting test:

I created a test database with each segment on its own device:
- default segment on devA
- system segment on devB
- logsegment on devC
The 3 devices have directio OFF.
Then I launched the test below 3 times:

drop table test_num
go
create table test_num(id numeric identity, quan time)
go
-- 'go 100000' repeats the insert batch 100000 times, one timestamped row per iteration
insert test_num values(current_time())
go 100000
-- elapsed time in ms between the first and the last inserted row
declare @premier time
declare @dernier time
select @premier = quan from test_num where id = (select min(id) from test_num)
select @dernier = quan from test_num where id = (select max(id) from test_num)
select datediff(ms, @premier, @dernier)
go

The timings I got were: 35760, 34066, 34423 (ms).
Then I set directio ON for devC, restarted my dataserver and executed the same test.
The timings weren't good at all: 243006, 175003, 180283.
As I was expecting directio to be faster, maybe this highlights that my zpool/zfs isn't configured correctly. It's an interesting area, but I'm not sure I can dig into it right now, as I/O investigation is an area I'm not familiar with.

Best regards



How often the checkpoint process truncates your log will depend on how much other activity it has to perform, eg, if you've got 500 databases then it could take a while for a single checkpoint process to work its way (serially) through 500 databases ... and under this type of scenario it would be normal behavior to configure the dataserver with more than one checkpoint process; heck, if you've only got a single checkpoint process running then it probably wouldn't hurt to configure another 1-2 checkpoint processes to see if that helps.
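
If you want to try that, I believe the relevant setting is along these lines (assumes a threaded-kernel ASE 15.7+; the value 2 is just an example):

exec sp_configure 'number of checkpoint tasks', 2
go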

One other obvious(?) issue to check is to make sure you don't have a long running txn sitting in the database during your process ... if one exists, said txn would keep the log from being truncated. ("Duh, Mark!" ?)

As for checking log space usage ... I usually run `sp_helpsegment logsegment` and look at the last few lines (shows total size, used size, unused size). If you really do have short-lived txns, and you're not seeing the log truncated in a timely manner, I'd suggest you try issuing a manual 'dump tran/truncate_only' and then check log space again.
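
For example (the database name is a placeholder):

use MYDB
go
-- the last few lines of the output show total/used/free space on the log segment
exec sp_helpsegment logsegment
go
-- manual truncation if the log isn't being freed in a timely manner
dump transaction MYDB with truncate_only
go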

re: ZFS/zpools ... I don't think (don't quote me on this) ZFS supports directio ... it might be interesting to see what happens if you configure devC with dsync=true (and directio=false); otherwise you might query tech support about issues/recommendations for ZFS (eg, any suggested config/settings? suggestion to use different FS type?).


Hi Mark,

For the checkpoint, my test dataserver has only 13 databases and I think I'm the only one really generating activity on this database. During my use-case, I was also collecting 5-minute sp_sysmon samples in a loop, and the checkpoint section shows long checkpoint durations:

Recovery Management
  Checkpoints                  per sec      per xact       count  % of total
  -------------------------  -----------  -----------  ---------  ----------
  # of Normal Checkpoints            0.0          0.0          6     100.0 %
  # of Free Checkpoints              0.0          0.0          0       0.0 %
  -------------------------  -----------  -----------  ---------
  Total Checkpoints                  0.0          0.0          6

  Avg Time per Normal Chkpt     37.16667 seconds

So I'm not sure whether, in my test case, adding several checkpoint processes will help.

You're right for directio and zfs (at least it's my understanding from note 1878876).

I think I'll have to dig a bit in IO subsystem and DTrace to better understand and improve my IO sub-system

I'll keep you informed (and Avinash also) if I find any interesting points.




What you state is not clear: "my database is already set with the option 'dump tran on checkpoint'".

Are you confusing this with database option "trunc log on chkpt" ?

If that is so, dump transaction by hand or by threshold is not useful.

Run this query and see what it returns:

select tran_dumpable_status('your database name')

The output should be 0 for a manual/scheduled/threshold dump tran command to run meaningfully.

The transaction logs are useful if you need point-in-time recovery after loading the database from a full dump.




Hi Avinash,

My bad, you're right, my database already has the option 'trunc log on chkpt' set.




NP Simon.

Looks like database recovery is not a show-stopper issue for your use-case.

So your threshold procedure (if and when it gets triggered) should just truncate the log (a dump tran option) for this type of database where 'trunc log on chkpt' is configured. If you are doing large-scale inserts via bcp (or otherwise), just have multiple thresholds on the logsegment to truncate the log as often as possible.
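
As an illustration only (the procedure name is a placeholder; ASE passes these four arguments to a threshold procedure):

create procedure truncate_log_proc
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
    -- free up log space without dumping it to a device
    dump transaction @dbname with truncate_only
    print "Threshold fired on segment '%1!' in '%2!': log truncated", @segmentname, @dbname
go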



Avinash Kothare Dec 04, 2017 at 11:10 PM

Transaction log is shared by all SPIDs running DML in the database. So 100% of it may not be available to your SPIDs.

Normally "run" and "sleep" approach should work.

Check the log space available "before" you start DML SQL.

Running sp_spaceused syslogs is the simplest way to get the number of free pages.

After running your "reasonable" number of DML operations and a commit, check the transaction log space again.

You can build your own thresholds to "start" next set of DMLs when free space in syslogs is above certain percent e.g. 50% free ( or a hard limit of pages are free) and stop after your reasonable DML transactions are done. (You can start with 1K DMLs and raise the throttle in chunks of 1K)

Then enter a "sleep" cycle for random number of time units. (e.g. 10 seconds to 100 seconds)

During this the log will be truncated or tran dump may happen by way of schedule or threshold trigger.

Then start the next "run" cycle.

The following will be important factors in building your logic:

- are the tran logs automatically truncated

- if not, how frequently are the tran dumps done, or are they fired by thresholds

- how wide is the table you are inserting into (fewer wide rows versus more short rows for the log space)

- do the tables have DML triggers, RI and other constraints that slow down the DML

You may have to test this in controlled way to get the right mix.


