on 01-31-2017 2:12 PM - last edited on 02-03-2024 7:36 PM by postmig_api_4
Hi community,
May I ask for your feedback on whether you notice any improvement when the client side runs on the same server as the dataserver?
To give you more context, our application is usually deployed on a different server from the dataserver. The upgrade mainly takes place on the dataserver.
So we have a client process that connects to the dataserver and executes some SQL statements.
Discussing this with some colleagues, they noticed some improvement in the upgrade process when our application is deployed on the same server that hosts the dataserver.
So I'm wondering what the reason for such an improvement could be (I'll run a test to see for myself): less usage of the network stack, maybe?
Have you noticed similar improvement?
Thanks in advance
Simon
Four years ago we split our server, which initially hosted both the ASE server and the app, into two different machines, both high end Unix servers linked by a high quality LAN. Our tests revealed that the overhead for splitting machines was 130 microseconds per round trip from the app to Sybase ASE.
That overhead was unnoticeable for most programs. Only a handful of ESQL batch programs that opened some huge cursors showed a delay of several minutes, as each of their 50 million SQL FETCH statements made a round trip from the program to ASE. The programmer should have batched the cursor rows in order to get more than one row per trip.
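To illustrate the batching idea, here is a minimal Python sketch. It uses the built-in sqlite3 module purely as a stand-in for an ASE client library (the actual ESQL programs in question would use Sybase's own interfaces); the point is only that `fetchmany()`-style batching divides the number of fetch calls, and hence potential network round trips, by the batch size.

```python
import sqlite3

# In-memory database standing in for the dataserver (illustrative only;
# a real ASE client would use its own driver, but the batching idea is the same).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

# Row-at-a-time: one fetch call (one potential round trip) per row.
cur.execute("SELECT id FROM t")
single_calls = 0
while cur.fetchone() is not None:
    single_calls += 1

# Batched: each fetchmany() call pulls 1000 rows, cutting fetch calls ~1000x.
cur.execute("SELECT id FROM t")
batch_calls = 0
while True:
    rows = cur.fetchmany(1000)
    if not rows:
        break
    batch_calls += 1

print(single_calls, batch_calls)  # 10000 fetch calls vs 10 batched calls
```

With a per-round-trip cost like the 130 microseconds measured above, cutting 50 million fetches down by a factor of the batch size is where the savings would come from.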