Former Member

Right way to find free space on log segment

-- Server information

-- Adaptive Server Enterprise/15.0.3/EBF 17690 ESD#1.1 RELSE/P/Solaris AMD64/OS 5.10/ase1503/2681/64-bit/FBO/Thu Aug 20 15:37:40 2009

Good afternoon,

I tried the following commands to find out the free space in the log segment:

1. dbcc checktable(syslogs)

2. sp_helpsegment logsegment

3. select data_pages(db_id('DB_NAME'), 8, 0)

4. sp_spaceused syslogs

Solutions 1, 2, and 4 give about the same result, but solution 1 can take a long time to compute when the log segment is full.

How do you efficiently compute free space in log segment?

Thanks all



2 Answers

  • Best Answer
    Posted on May 23, 2013 at 07:37 PM

    In ASE 12.5 we used

    select lct_admin('logsegment_freepages', 9 )

where 9 is the dbid of the database, i.e. db_id('DATABASE_NAME'). The result is expressed in pages.

I think we extracted that query from the text of sp_helpdb. You could do the same: read the sp_helpdb SQL definition installed by installmaster and see how it gets the logsegment free-space figure.
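As a sketch of how the pieces fit together (the 'DB_NAME' placeholder and the megabyte conversion are mine, not from the original answer; lct_admin and @@maxpagesize are standard ASE):

```sql
-- Free space on the log segment, in pages and megabytes.
-- 'DB_NAME' is a placeholder for your database name.
declare @free_pages int
select @free_pages = lct_admin('logsegment_freepages', db_id('DB_NAME'))

select @free_pages                                        as free_log_pages,
       -- @@maxpagesize is the server page size in bytes
       @free_pages * (@@maxpagesize / 1024.0) / 1024.0    as free_log_mb
```

This avoids scanning syslogs entirely, so it stays cheap even when the log is nearly full.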


    Mariano Corral


  • Posted on May 19, 2013 at 04:35 PM

    How far off was option #3?

Using solely the options you've listed, I'd say the most efficient method is the one that runs the quickest and is easiest for the front-end to process (eg, the output from checktable is going to be a bit tough to process in a stored proc). ("Duh, Mark!" ?) For the two stored proc options I'd probably want to write my own stripped-down versions that eliminate any queries/processing not needed for your specific case.


    However, stepping back a bit (errrr, taking a look at the bigger picture) ... What's your ultimate goal? What are you going to do with the space usage data? How accurate does the data need to be?

    If you're looking for a means of sounding an alarm if the log gets 'too full' ... or if you're looking for a historical perspective of when you have peaks in log space usage ... have you considered setting up multiple thresholds on the logsegment?

    Instead of constantly polling the database with your resource-intensive query (how often do you need to run your query for it to provide the necessary benefit?), you could let the database's threshold monitoring capability provide you with the data you need without the overhead of constantly running the space-calculation query.

    The thresholdaction stored proc can be coded to a) automatically dump the log, b) insert the datetime/space-usage data into a table and c) send messages to the errorlog:

    a) with excessive thresholds, and especially with long-running queries, you can end up re-dumping a lot of the log over and over, so you may want to add a bit of logic to insure you only dump the log for certain threshold crossings

    b) with enough thresholds you can get a pretty good idea (from the historical data) of when you have spikes in log activity

    c) with errorlog monitoring (eg, monitor the physical errorlog; repeated polling of the monErrorLog table) you can get warnings when the log starts to get 'too full'
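A rough illustration of points a) through c) (the history table name is hypothetical and you'd create it yourself; the four parameters are the ones ASE passes to every threshold proc, and print output from a threshold proc goes to the errorlog):

```sql
-- Sketch of a threshold action proc; ASE supplies these four
-- parameters automatically when any threshold fires.
create procedure sp_thresholdaction
    @dbname     varchar(30),
    @segname    varchar(30),
    @space_left int,          -- free pages remaining on the segment
    @status     int           -- 1 = last-chance threshold
as
begin
    -- c) message lands in the errorlog, with a datetime stamp
    print "THRESHOLD: db=%1!, segment=%2!, free pages=%3!",
          @dbname, @segname, @space_left

    -- b) record the crossing for historical reporting
    --    (master..log_history is a hypothetical table of your own)
    insert master..log_history (fire_time, dbname, segname, free_pages)
    values (getdate(), @dbname, @segname, @space_left)

    -- a) dump the log only on the last-chance threshold, to avoid
    --    re-dumping on every intermediate crossing
    if @status = 1
        dump transaction @dbname to "dump_device"
end
```

The `if @status = 1` guard is one way to implement the "only dump for certain threshold crossings" logic mentioned above.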

    At one of my current clients I've scrapped the resource-intensive polling queries in favor of using thresholds, a threshold action stored proc that auto-dumps the log (and dumps messages to the errorlog), and real-time monitoring of the errorlog (capable of sending emails and/or pages depending on environment and % of segment in use). Historical reporting isn't a big issue so I've skipped dumping the data to a table (I can always do a quick scan of the errorlog for threshold firings to get historical date/time/segment/threshold-level data). I've got 3 (or 4?) thresholds defined on every segment of every database in the dataservers I'm responsible for (a post-startup shell script assigns thresholds to all segments in all tempdbs).

    Granted, the threshold 'solution' requires some initial design and infrastructure buildout but it eliminates the overhead issues of a polling-based, resource-intensive query solution (IMO). [NOTE: A good errorlog monitoring solution provides a lot of benefits above and beyond just monitoring thresholds.]


• The actual firing of the threshold isn't a big issue. What you have the threshold do could be an issue (eg, you wouldn't want dozens of thresholds, only @@thresh_hysteresis pages apart, firing in quick succession with each one running one of your expensive queries to verify log space usage).

      To track log usage of the various upgrade steps you could do something like placing thresholds on the 10GB log at 5% intervals, with the threshold action proc dumping a message to the log stating the used/free pages/percentage-of-log (easily calculable with proc's input parameters and values from sysusages); keep in mind that each message dumped to the errorlog has a datetime stamp. Then assuming your upgrade tool has the ability to print a status message with a datetime stamp for each step, you could match the tool's datetime stamps with the threshold datetime stamps to figure out which step was running at the time a particular threshold message was generated.
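For example (a sketch: the 5% spacing and the idea come from the suggestion above; the loop, the 'DB_NAME' placeholder, and the 'log_usage_proc' proc name are mine), thresholds every 5% of the logsegment could be installed like this, run from within the target database:

```sql
-- Add a threshold at every 5% of logsegment free space.
-- Assumes a threshold proc named log_usage_proc already exists.
declare @total_pages int, @pct int, @free_pages int

-- total logsegment size in pages (segmap bit 4 = logsegment)
select @total_pages = sum(size)
from master..sysusages
where dbid = db_id('DB_NAME') and segmap & 4 = 4

select @pct = 5
while @pct <= 95
begin
    -- fire when free space falls to this many pages
    select @free_pages = @total_pages * @pct / 100
    exec sp_addthreshold 'DB_NAME', 'logsegment', @free_pages, 'log_usage_proc'
    select @pct = @pct + 5
end
```

Each crossing then leaves a datetime-stamped message in the errorlog that you can correlate with the upgrade tool's own step timestamps.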
