Sizing SQL Memory

Former Member

We have a landscape with 6 systems.

We are planning on a three tier architecture.

An "Active-Active" cluster will be used to host the databases on SQL2005 64bit/Ent. Three systems will reside on each node under business as usual conditions.

Does increasing the amount of memory on the database servers have a significant impact on performance? Or, if they are sized correctly, would little be gained from adding memory beyond the sizing results?

Regards,

Andrew

Accepted Solutions (1)

Former Member

The rule we use in-house is:

1. For physical servers where database and SAP application will reside:

Fixed memory setting for SQL = (physical RAM - 2 GB) / (number of applications, including SQL).

E.g., if I am running 2 SAP instances alongside the DB server on a physical server with 48 GB of RAM, it would be:

(48 - 2) / 3 applications ≈ 15 GB fixed memory for SQL.

2. For physical servers where only the database is running, the rule is simpler: physical RAM - 2 GB.

FYIs:

- The 2GB is for the O/S.

- Make sure that in the local security policy you enable "Lock pages in memory" for the account running the SQL Server service. This minimizes paging out and keeps SQL Server performance consistent.
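
For what it's worth, here is a minimal T-SQL sketch of that rule applied to the 48 GB / 3-application example above; 15360 MB is simply 15 GB expressed in MB, and setting min = max is just one way to make the allocation "fixed":

-- Fix SQL Server's memory at ~15 GB, per the (RAM - 2 GB) / apps rule above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Setting min and max to the same value keeps the allocation fixed.
EXEC sp_configure 'min server memory (MB)', 15360;
EXEC sp_configure 'max server memory (MB)', 15360;
RECONFIGURE;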

Former Member

Thanks Kevin,

Not quite what I was asking, but helpful nonetheless.

Does additional memory on the SQL server (DB only, no apps) improve performance to any degree?

I imagine it could help with reads (caching?) but possibly not improve performance much with writes.

Andrew

Former Member

Sorry, I guess I should've been more straightforward with my points. Yes, every bit of memory you give to SQL helps, which is why we have 66 GB (on a 68 GB server) dedicated to SQL in my landscape.

As for write performance, you are correct that SQL does not gain much there. However, the majority of SAP transactions use asynchronous updates, which makes update response time less critical than read response time.

Lastly, the storage system (we use a CX3-80) and the storage design matter greatly when it comes to write performance.

Former Member

A little bit more information, SAP specific:

SQL Server-specific settings matter too: page lock escalation should be turned off for the update tables, and asynchronous statistics updates should be enabled at the database level. If these items are set incorrectly, updates will seem "slow", but the real cause is the blocking that occurs at the SAP level (lock escalations).
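
As a rough illustration only (not the authoritative SAP-recommended commands; the exact tables and settings are in the relevant SAP Notes), on SQL Server 2005 those two settings could look something like this, where PRD is a hypothetical database name and VBDATA is used purely as an example update table:

-- Enable asynchronous auto-update of statistics at the database level.
ALTER DATABASE PRD SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Disallow page locks on all indexes of the example update table, so page
-- lock escalation cannot occur on it (sp_indexoption is the SQL 2005-era tool).
EXEC sp_indexoption 'VBDATA', 'DisAllowPageLocks', TRUE;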

Also, if the update uses conditions that match a buffered index, that can affect write speed as well.

There are a lot of factors, but basically, more memory can't hurt.

Former Member

Thanks for all the help and information, very useful.

(Points awarded)

Regards,

Andrew

Former Member

Kevin is right that more memory never hurts; however, at some point you hit diminishing returns and you are just throwing memory at nothing.

If you want to be sure you need the memory increase, try looking at the perfmon counter "Buffer Manager: Page life expectancy". If the value is below 300-400 seconds, you need to increase the memory; if it's substantially above that, you are probably OK.
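
If it helps, the same counter can also be read from inside SQL Server via the standard sys.dm_os_performance_counters DMV (available from SQL Server 2005 onward); the 300-400 second threshold in the comment is just the rule of thumb from this post:

-- Read "Page life expectancy" from the Buffer Manager object; values
-- persistently below ~300-400 seconds suggest the buffer pool needs more memory.
SELECT [object_name], counter_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';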

Answers (0)