Former Member

WaitEvent 250 in ASE 15.7 and ASE 16

Suppose, for example, that you are executing a huge SQL file with each statement separated by a "go":

select 1

go

select 1

go

... (continuing this way up to 50,000 statements)

go

select * from master..monProcessWaits

where SPID = @@spid

go

The number of Waits for WaitEventID 250 will be equal to the number of "go" statements in the file. We measured the Waits and WaitTime in both ASE 16 and ASE 15.7, and the WaitTime in ASE 16 is about one third of what it is in ASE 15.7.
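For example, the counters for wait event 250 alone can be checked with something like the following (a rough sketch against the standard monProcessWaits MDA table):

select WaitEventID, Waits, WaitTime
from master..monProcessWaits
where SPID = @@spid
and WaitEventID = 250

go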

Why is this so important?

If you are running benchmarking tests between ASE 15.7 and ASE 16, this becomes an important factor: the comparison will never be like for like.

However, we think this is a good improvement in ASE 16 and it will help improve elapsed times.

Version    ElapsedTime_ms   Waits       WaitTime_ms
ASE 16     113,416          2,906,998   983,803
ASE 15.7   276,943          2,907,216   2,581,027

3 Answers

  • Posted on Sep 17, 2016 at 05:21 PM

    Too many questions at this point to tell if this is a valid test ...

    --------------

    WaitEvent=250 => waiting for incoming network data


    WaitEvent=250 indicates the dataserver has nothing to do and is waiting for the client.
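    (For reference, the event description can be confirmed from the MDA metadata table with something along these lines:)

    select WaitEventID, Description
    from master..monWaitEventInfo
    where WaitEventID = 250
    go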

    If you're seeing a reduction in the total time the dataserver is waiting for the client (WaitEvent=250) then this would tend to indicate (to me) that there's something different with the client ... and has nothing/little to do with the dataserver version.

    NOTE: I'm assuming ASE 16 has not changed how it measures WaitEvent=250; obviously if ASE 16 *has* changed how WaitEvent=250 is measured then you won't be able to make an apples-to-apples comparison of just WaitEvent=250.

    I'd want to know more details on your client sessions ... tool(s) used to run tests, tool version(s), if client sessions were run from the same host, were client sessions writing results data to the same file system, etc. [For example, with 50K separate 'select 1' clauses, 50K result sets will need to be written to the client - either the console or a file; if writing to a file then disk/file IO times become a major component of how long it takes the client to submit a new batch to the dataserver.]
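    [As a rough sketch of one way to take client-side result writing out of the picture: assigning to a local variable (@i is just an illustrative name) returns no result rows, so the client has nothing to write to the console or a file.]

    -- the declare stays in the same batch because local variables
    -- do not survive a 'go'
    declare @i int
    select @i = 1
    go 50000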

    It also wouldn't hurt to know if the dataservers are running on the same host, whether network ping times are comparable between the clients and dataservers, etc.

    At this point I'm curious as to where the time savings are coming from and why the one client appears to be 'faster' for the test run against the ASE 16 dataserver ... ?

    --------------

    - you issue 'select 1 / go' 50K times

    - you mention this should show up as 50K waits [I agree with this observation]

    - your table shows 2.9M waits

    Why the discrepancy between the expected number of Waits (50K) and the observed number of Waits (2.9M)?

    What was the actual test case you used that generated the numbers in the table?

    --------------

    What is your definition of ElapsedTime_ms?

    If this is a measurement of the total time to run the test (from the client perspective), then I'd expect ElapsedTime_ms to be greater than WaitTime_ms (ie, ElapsedTime_ms is a combination of WaitTime_ms, plus network time, plus ASE cpu time, plus ?).

    Where did your ElapsedTime_ms numbers come from and what do they represent?
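    [If it helps, one rough way to capture an elapsed time from inside the session is to bracket the run with timestamps; a sketch, with #t as just an illustrative temp table name:]

    create table #t (start_time datetime)
    insert into #t values (getdate())
    go
    -- ... run the workload batches here ...
    select datediff(ms, start_time, getdate()) as ElapsedTime_ms from #t
    go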

    --------------

    - your test case shows 'select * from monProcessWaits where SPID = @@spid' (show all WaitEvents)

    Can you confirm your Waits and WaitTime_ms numbers are only for WaitEvent=250?
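    [A per-event breakdown, joined to the event descriptions, would make that easy to confirm; a minimal sketch:]

    select w.WaitEventID, i.Description, w.Waits, w.WaitTime
    from master..monProcessWaits w, master..monWaitEventInfo i
    where w.WaitEventID = i.WaitEventID
    and w.SPID = @@spid
    order by w.WaitTime desc
    go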

    --------------

    Here's my test case:

    ================== test case

    -- eliminate overhead of writing 50K result sets to the

    -- client (console or file) by using 'go 50K' construct


    select 1

    go 50000


    select * from master..monProcessActivity where SPID = @@spid

    select * from master..monProcessNetIO where SPID = @@spid

    select * from master..monProcessWaits where SPID = @@spid

    go

    ==================

    I ran 2 sets of test cases:

    NOTE: Each test case was run from the same host and from the same filesystem

    NOTE: All isql and dataserver processes were running on a single Solaris 11 (x86) host

    NOTE: All test cases were run 3 times and the averages are displayed below

    ================== test results

    - isql (15.7 SP136)

      ASE 15.7 SP136       WaitEvent=250   Waits=49986   WaitTime=2248
                           WaitEvent=215   -- no measurements --

      ASE 16.0 SP02 PL03   WaitEvent=250   Waits=49988   WaitTime=2112
                           WaitEvent=215   Waits=50015   WaitTime= 112
                                                         -------------
                                           total WaitTime=2224

    - isql (16.0 SP02 PL03)

      ASE 15.7 SP136       WaitEvent=250   Waits=49988   WaitTime=2319
                           WaitEvent=215   -- no measurements --

      ASE 16.0 SP02 PL03   WaitEvent=250   Waits=49985   WaitTime=2162
                           WaitEvent=215   Waits=50008   WaitTime= 164
                                                         -------------
                                           total WaitTime=2326

    ==================

    Test case analysis:

    - there does appear to be a small reduction (~6%) in the total WaitTime for WaitEvent=250 for ASE 16.0 SP02 PL03

    - there is a noticeable measurement of WaitTime for WaitEvent=215 for ASE 16.0 SP02 PL03 (non-existent for ASE 15.7 SP136)

    Total WaitTimes (sum of WaitEvents=215/250) appear to be comparable for both ASE 15.7 SP136 and ASE 16.0 SP02 PL03 when using the same client session/test case.

    NOTE: All other WaitEvents (besides 215/250) had WaitTimes ranging from 0-4ms and were thus discarded from the results

    NOTE: Other monProcessActivity/NetIO measurements (eg, cpu, memory used, # packets, # bytes sent/received, etc) were comparable (within 1% range) across all tests

    From this (very) simplistic case it would appear (to me) that ASE 16.0 SP02 PL03 may have refined how waits are measured (at least for WaitEvents=215/250), but there's no noticeable reduction in overall WaitTimes when using a common client and SQL test script.

    --------------

    Contrary to how the above comes across, I'm not doubting that you've seen what looks like some reductions in WaitTimes in your tests, but at this point I'd be curious as to how I could go about reproducing your results.


    • If I've got my math right, we're talking about 0.34ms per WaitEvent for ASE 16.0 and 0.89ms per WaitEvent for ASE 15.7.

      That works out to a difference of 0.55ms per WaitEvent, well within the differences that could be introduced with slight variations in network components (eg, difference in router/packet processing, NIC settings, OS/network stack configs, perhaps a difference in firewall rules, etc).

      Can the network/hardware folks rule out any difference in the network components?

      Any chance of running some tests against other ASE instances (any version) running on other machines? [Curious if you're seeing comparable differences in WaitTimes for ASE 15.x vs ASE 16.0 on different hosts/network configs.]

  • Posted on Sep 16, 2016 at 09:17 PM

    "<snip>

    We see the WaitTime in ASE16 is 1/3rd of what it is in ASE157.

    </snip>

    That is music 😊

    Cheers

    Avinash


  • Former Member
    Posted on Sep 19, 2016 at 01:04 PM

    Out of interest, what kernel mode are you using in Sybase 15.7 and Sybase 16?

    What version of Sybase 16 are you using?

