
Slow updates

Former Member

Hello,

We are having a terrible time at a bunch of client sites.  I am away this week but will be back in the office next week.   When I return, I will open a support call directly with SAP to resolve this at a high priority -- but I need some help/suggestions now so I can be prepared.

We have thousands of sites and have never had issues.   Since we started upgrading client sites to the latest Sybase build, their nightly processes are killing us.  For example, a nightly process that used to take about 8 minutes is now taking 3 to 4 hours!!!   A step that used to take 3 seconds now takes 10 minutes.   The logs show that processing IS happening (it's not frozen), but performance has been pretty bad -- in fact, one step ran from 6:01 AM to 6:59 AM.

We opened a call with Sybase previously and ended up resolving it on our own.  We got tons of suggestions, and none of them helped - in the end I rewrote the query myself and got it back to 3 seconds.  However, this same query had run at hundreds of sites for years without issue, and upgrading to the latest Sybase build (4204) caused it to take an hour!!   I rewrote it to avoid one access path by staging rows in a temp table, and it went back to 3 seconds.   BUT YOU HAVE TO REALIZE IT WAS FINE FOR YEARS AT MANY SITES!

One client is extremely upset - a 30-minute job is now taking 3.5 hours.   It's crazy!   Our best plan of attack may simply be removing this SQL Anywhere build and going back to the older version to see if the problem goes away.

Before I open calls - are there any known performance issues with recent builds of SQL Anywhere, or any new way of processing queries?   For us (InnQuest), Optimization_Goal is set to "First_row" and has been for 10 years or more.   I've had sites switch back to the default just to try it out - no change.

We do not have control over the queries and update statements: we write our applications in a language whose driver generates all the statements that access SQL Anywhere (via ODBC) --- and nothing has changed on our end.   At other sites we have NO/ZERO issues.   It just makes no sense.   Some sites have issues and some do not.   We take a problem database to our own system and run it, and it runs without issue.  We run it on their system and it takes an hour.  There is no rhyme or reason to what is going on, but it is killing our company in terms of customer satisfaction.

I will open a call directly next Monday when I get back to the States ... but what job can I run now to monitor the full performance of a nightly process, so someone here can look at it this week?  Again, we don't have control over the statements - the compiler/language generates them - and none of that has changed.

Our only changes on the SQL Anywhere side (and they have been like this for a decade) are:

Optimization_Goal is First_row

Max Cursor Count is 0

Max Statement Count is 0

All the rest is standard default settings.
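For anyone comparing settings, the option in effect can be checked and temporarily flipped from any SQL console such as dbisql. A minimal sketch (note the documented default value is spelled 'All-rows'; verify the exact spelling on your build):

```sql
-- Show the optimization_goal value in effect for this connection:
SELECT CONNECTION_PROPERTY( 'optimization_goal' );

-- Temporarily try the default on this connection only, to compare timings
-- without changing the persistent PUBLIC setting:
SET TEMPORARY OPTION optimization_goal = 'All-rows';
```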

1.  Are there any known or reported issues with speed/performance?

2.  Are there any known changes in the latest few builds of Sybase that would cause these issues?

3.  Has Microsoft done anything in their updates that could cause this?

4.  What log would be best to give you right now?  It's not a SINGLE PLAN / query - thousands of queries run during a nightly process (read a record, update it, etc.).

We've been using SQL Anywhere 9 and 12 for a decade with zero performance issues at thousands of sites.  These issues have only recently appeared, and the only change we've made on our side is updating our clients' SQL Anywhere 12 installations to the latest build (4216).

Thank you very much - we need help!


Answers (1)


jeff_albion
Employee

Hello Robert,


We are having a terrible time at a bunch of client sites.  I am away this week but will be back in the office next week.   When I return, I will open a support call directly with SAP to resolve this at a high priority -- but I need some help/suggestions now so I can be prepared.

Yes, it sounds like we will need to work with you directly in support to investigate this issue. Opening an incident (or multiple incidents for each end-customer) is best, to help keep the information organized.


We have thousands of sites and have never had issues.   Since we started upgrading client sites to the latest Sybase build, their nightly processes are killing us.  For example, a nightly process that used to take about 8 minutes is now taking 3 to 4 hours!!!   A step that used to take 3 seconds now takes 10 minutes.   The logs show that processing IS happening (it's not frozen), but performance has been pretty bad -- in fact, one step ran from 6:01 AM to 6:59 AM.

Yes, this is quite possible and a general hallmark of a performance issue: query execution can go from the typical seconds to inexplicable minutes (or, in your case, from minutes to hours).

There are a number of factors at work here. The database optimizer must calculate the proper ordering of tables when creating a plan, and with N joined tables in a query there are N! possible join orders. The larger the join, the more difficult it is to come up with a stable and optimal ordering. Earlier editions of SQL Anywhere had fewer optimization nodes/algorithms/options, which meant a smaller search space of possible plans (and, indirectly, more stable plans). User-defined functions used to join tables can also destabilize plans, since we generally do not know the join cardinality of a function and must guess. Combined with changing cache status and data statistics in the database, tables may end up in different orders in plans, leading to drastically different performance due to the number of rows processed in the internal plan nodes. A poor table join order choice can become a large performance problem very quickly.

All of these changing conditions have to be examined for each individual query you can identify with a performance discrepancy. Sometimes breaking up the query, recreating statistics, adding an index, or changing the costed join conditions (as you did above with the temp table) is a viable way to ensure proper table ordering and join estimates in the optimizer. If you are not able to change the query, that somewhat limits the options for resolution and may require a more situational approach that works for your environment (i.e. change the schema, add an index, recreate statistics, etc.).
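For reference, the server-side remedies mentioned above look like this in SQL. The table and column names here are illustrative only, not taken from the thread:

```sql
-- Rebuild optimizer statistics for a table involved in the slow query:
CREATE STATISTICS forecast_detail;

-- Add an index to steer the optimizer toward an indexed access path:
CREATE INDEX idx_forecast_date ON forecast_detail ( forecast_date );

-- Or stage one table's rows in a local temporary table first (the approach
-- that restored the 3-second runtime earlier in this thread), so the
-- optimizer costs a small, materialized row set instead:
SELECT id, qty
  INTO #staged
  FROM forecast_detail
 WHERE forecast_date = CURRENT DATE;
```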

We do note that there are also interim updates to the optimizer in Support Packages to achieve correctness or better performance on real customer queries that are coming in to our development group. While these changes may be advantageous to most or many queries overall, there may be outlier queries where this change impacts their performance significantly and we need to then understand those queries better in response. Our internal QA tests try to minimize these impacts in Support Packages before being released to customers.

The way to generally prevent this scenario from happening is to create a performance test ( Performance and Tuning - SQL Anywhere - SAP SQL Anywhere - SCN Wiki ), under 'production' load conditions that mimic your real life users and data sets, before deploying any software changes. See our whitepaper on Capacity Planning:

---


What log would be best to give you right now?  It's not a SINGLE PLAN / query - thousands of queries run during a nightly process (read a record, update it, etc.).

Database Tracing ( SQL Anywhere Trace Database Setup via Database Tracing Wizard - SAP SQL Anywhere - SCN Wiki ) is the best method for collecting information about multiple statements over a longer period of time. Enabling a 'High' level of tracing (i.e. including plans with statistics) is the best way to collect detailed performance data so that we can then drill down into the actual queries that are slow. If this level of monitoring impacts performance too much, try removing the counters that are not related to capturing plans and performance.
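Besides the wizard, the same diagnostic tracing can be started and stopped from SQL. A minimal sketch of the SQL Anywhere 12 statements (the tracing levels still need to be configured as described above):

```sql
-- Start diagnostic tracing, storing the collected data in the local database:
ATTACH TRACING TO LOCAL DATABASE;

-- ... run the slow nightly process while tracing is active ...

-- Stop tracing and save the collected data for later analysis:
DETACH TRACING WITH SAVE;
```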

We would then encourage you to look at the collected performance information, as it will most likely help you understand the areas/queries of concern. We can then work with you to examine the identified queries individually in the tracing database and look for resolutions to the performance issues.

From a system perspective, understanding the CPU, I/O queue, and memory usage for each customer during these slow times would also be beneficial to see if there is a common system limitation that SQL Anywhere is encountering (in possible combination with an optimizer issue).
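Some of those system-level numbers can be sampled from the server itself. A hedged sketch using server and database properties (property names as documented for SQL Anywhere 12):

```sql
-- CPU time consumed by the server process, and the cache size currently in use:
SELECT PROPERTY( 'ProcessCPU' )       AS cpu_seconds,
       PROPERTY( 'CurrentCacheSize' ) AS cache_kb;

-- Cumulative disk activity for the current database, useful for spotting an
-- I/O-bound site when sampled before and after the nightly process:
SELECT DB_PROPERTY( 'DiskRead' )  AS disk_reads,
       DB_PROPERTY( 'DiskWrite' ) AS disk_writes;
```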

Regards,

Jeff Albion

SAP Active Global Support

Former Member

Hello,

OK - here is something we found out today.

If this runs on the workstation, the entire nightly process takes 3 to 3.5 hours.

If it runs on the server, the entire process takes 10 minutes.

The step that is being held up takes 3 hours on the workstation and 1 minute on the server.  There are more than 50,000 writes to the database, all separate - a program is issuing the update statements one at a time; it's not a single UPDATE.

The slowness (per the logging) occurs during the LOGOUT, WRITE, WRITE, WRITE, WRITE, COMMIT sequence.

When we look at the LOCKS, we see about 10,000 entries; they remain for about 3 minutes and then disappear.  When running from the server, it goes so fast we don't see them at all.

I saw that the workstation was using the version 9 ODBC driver to connect to the SQL Anywhere 12.0.1 database.   I had the user fix this so it uses a version 12 driver, and they are going to run the process tonight.

The issue has nothing to do with a query or query plan - we are just updating thousands of forecast/inventory records one at a time, within a transaction frame.  Again, we run the process on the server and it's lightning fast.

Former Member

By the way --- when I do the logging, it doesn't help much because it doesn't show times, only data.  Is there a way for the log file to show time/timers?  Mind you, this log just repeats the same thing over and over because we are updating tens of thousands of records through a batch program.

303 015027.158,>,3576,EXEC

=,<,3573,EXEC,1932066

=,<,3576,OPEN,721082

=,W,3576,111,Statement cannot be executed

=,>,3576,OPEN,0

=,H,3573,0,varchar,'*52' 

=,H,3573,1,timestamp,2015-05-14 00:00:00.000000 

=,H,3573,2,smallint,1 

=,H,3573,3,smallint,0 

=,H,3573,4,smallint,0 

=,H,3573,5,smallint,0 

=,H,3573,6,smallint,0 

=,H,3573,7,smallint,0 

=,H,3573,8,smallint,0 

=,H,3573,9,tinyint,0 

+1,P,3573,[S]INSERT ROWS

=,>,3573,EXEC

+48,<,3573,EXEC,1932066

=,P,3573,[S]INSERT ROWS

=,H,3573,0,varchar,'*DD' 

=,H,3573,1,timestamp,2015-05-14 00:00:00.000000 

=,H,3573,2,smallint,5 

=,H,3573,3,smallint,0 

=,H,3573,4,smallint,0 

=,H,3573,5,smallint,0 

=,H,3573,6,smallint,0 

=,H,3573,7,smallint,0 

=,H,3573,8,smallint,0 

+1,H,3573,9,tinyint,0 

=,P,3573,[S][C]INSERT ROWS

=,>,3573,EXEC

+31,<,3573,EXEC,1932066

=,P,3573,[S]INSERT ROWS

=,H,3573,0,varchar,'*DDP' 

=,H,3573,1,timestamp,2015-05-14 00:00:00.000000 

=,H,3573,2,smallint,6 

=,H,3573,3,smallint,0 

=,H,3573,4,smallint,0 

=,H,3573,5,smallint,0 

=,H,3573,6,smallint,0 

=,H,3573,7,smallint,0 

=,H,3573,8,smallint,0 

=,H,3573,9,tinyint,0 

=,P,3573,[S][C]INSERT ROWS

=,>,3573,EXEC

+55,<,3573,EXEC,1932066

=,P,3573,[S]INSERT ROWS

=,H,3573,0,varchar,'*HK' 

=,H,3573,1,timestamp,2015-05-14 00:00:00.000000 

=,H,3573,2,smallint,1 

=,H,3573,3,smallint,0 

=,H,3573,4,smallint,0 

=,H,3573,5,smallint,0 

=,H,3573,6,smallint,0 

=,H,3573,7,smallint,0 

=,H,3573,8,smallint,0 

=,H,3573,9,tinyint,0 

=,P,3573,[S][C]INSERT ROWS

=,>,3573,EXEC

Anyway - that's what is running; it goes on and on, but the logging and the writes are extremely slow.   Again, if run on the server, the updates, log file, etc. are all written completely in a minute (or less).

jeff_albion
Employee

Hi Robert,


Is there a way for the log file to show time/timers?

The request log does show the start time, and the relative timing of statements is embedded for brevity/speed - i.e.

   303 015027.158 --- Start month/day and time (March 3rd, 01:50:27.158 a.m.)

   =                      --- No change in time from previous line
   +48                  --- +48 ms from previous time

You will want to use 'sa_get_request_profile' / 'sa_get_request_times' to summarize the timing information from a request log; the results are placed into the temporary tables satmp_request_profile / satmp_request_time.
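For example, from dbisql on the machine holding the log (the file name here is illustrative):

```sql
-- Parse the request log into the satmp_request_time temporary table:
CALL sa_get_request_times( 'c:\\logs\\myreqlog.txt' );

-- Rank the logged statements by elapsed time to find the slow ones:
SELECT * FROM satmp_request_time ORDER BY millisecs DESC;
```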

You can also use the Perl tracetime.pl script found in the 'Samples\SQLAnywhere\PerformanceTraceTime' folder as an option to parse the request log file:

   perl tracetime.pl myreqlog.txt format=fixed | sort /rec 65535 /R >sorted.txt

Regards,

Jeff Albion

SAP Active Global Support

Former Member

Hello,

OK - thanks for your help here.   Last night, before the nightly process ran, we made sure the workstation had the V12 ODBC driver.  The update process took just as long:

03/04/2015 01:17:31 - Rebuilding Forecast ...

03/04/2015 02:08:17 - Processing User Selected After Dayend Reports ...

So, that procedure took almost an hour.   Again, if we run this process directly on the server, it takes about 1 minute.   That's the part we just don't get!!    The process is updating records, not running queries or SELECT statements: it takes a record set from one file and issues UPDATE statements against another table.

Would you suggest an ODBC trace log that I can run at the workstation level?  Would that help?

Tonight, the client is going to run this process (since it only runs once a day) on another workstation to see if there is any change in behavior, to rule out the workstation itself.

Also, if I set up a GotoAssist session on this workstation, could someone from SAP log in and watch what is going on, or is SAP not allowed to do that?

Thank you.

jeff_albion
Employee

Hi Robert,


The process is updating records, not queries or select statements.  It's taking a record set from one file and issuing UPDATE statements against another table.

If it's a pure update/insert process without any SELECTs involved, there are fewer performance factors in play. The write speed on the database server depends primarily on the CPU/cache speed, followed by the speed of the disk holding the transaction log and the speed of the disk holding the database file. The write speed of the database client depends on the insert method used (prepared statement vs. direct execution, with or without a cursor, single vs. wide inserts, data imported from a client file vs. supplied directly, etc.), on network latency, and on client performance (i.e. how busy is the client doing other work?).
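Where the client environment permits, replacing row-at-a-time statements with one set-based statement removes the per-row network round trip entirely, which is typically the dominant cost on a slow workstation link. A hypothetical sketch (the table and column names are invented for illustration, not from this thread):

```sql
-- Instead of ~50,000 single-row UPDATEs sent from the client, one statement
-- executed entirely on the server joins the staged rows to the target table:
UPDATE forecast f
   SET f.on_hand = s.on_hand
  FROM forecast f
       JOIN staging s ON  s.room_type     = f.room_type
                      AND s.forecast_date = f.forecast_date;
```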

Raw (maximum) database client insert rates against the database server can be measured with the \Samples\SQLAnywhere\PerformanceInsert\instest.exe ESQL test program:

Usage: INSTEST [options] [fname]

Options:

   -c conn_str     : database connection string

   -i              : use INSERT (default = PUT)

   -k rows         : CHECKPOINT frequency (default = never)

   -m rows         : COMMIT frequency (default = never)

   -n cols         : number of non-null columns

   -o outfile      : record duration in file

   -q              : quiet mode

   -w width        : rows to insert per request

   -r rows         : rows to insert

   -v start_value  : starting value (for keys)

   -x              : generate unique row values

We will need further technical details about what your update procedure is doing and exactly how it executes those updates from the client.


Would you suggest maybe the ODBC Trace Log that I can use on the workstation level?

Tracing ODBC calls via the ODBC Driver Manager can be quite expensive on the client side - enabling it will likely slow down the updates further. (Thought: it may be worth checking whether it has already been enabled recently for other diagnostics...) It may help to understand both sides of the conversation if it appears that the database server is processing the updates quickly while the client is taking the time to send them. However, I would recommend not running that diagnostic until you can narrow down the behaviour a bit more; otherwise you will collect far more information (at the cost of client performance) than you need.


If I set up a GotoAssist session on this workstation, could someone from SAP log in and watch what is going on, or is SAP not allowed to do that?

The rule for remote connections in SAP support is that we can only use the Citrix GotoAssist technology, which is configured for your account/system via KBA 2026090. No alternative screen-sharing technologies may be used by SAP technical support (Remote Desktop, TeamViewer, LogMeIn, WebEx, SAP Connect, VNC, etc.). But yes, once the connection is configured for your system, we can use it to see what you're seeing and work with you in a live session to diagnose the problem further.

Regards,

Jeff Albion

SAP Active Global Support