About function GET_POOL_SUM and units for page faults.

Former Member

Hi,

Since last year I've been collecting the output of function GET_POOL_SUM, but I'm not able to interpret the values obtained. The output of the function is a bit tricky, at least when it comes to interpreting times: there is one record per memory pool for each hour of the last 24 hours. Some fields are easy to understand, such as pool sizes or job transitions, but I cannot understand the values for page faults and pages.

Is it an average over periods of one minute?

Example: in the *BASE pool (nr. 2) I have a value of 44.051 for field DB_FLT (DB faults) in the row corresponding to 09:00. At OS level this value appears as page faults per second, but 44.051 page faults/second is far too high. Since saposcol uses periods of 1 minute, do I need to divide this value by 60 (44.051 / 60 = 734,2)?
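For reference, this is roughly how I read the data. GET_POOL_SUM is undocumented, so the line type and the TABLES parameter name below are my own assumptions; the real interface should be checked in SE37:

* Illustrative sketch only: the line type and the TABLES parameter
* name are assumptions, as GET_POOL_SUM is not documented.
TYPES: BEGIN OF ty_pool_sum,
         pool   TYPE i,   " pool number, e.g. 2 = *BASE
         db_flt TYPE i,   " DB page faults - the field in question
       END OF ty_pool_sum.
DATA lt_pool_sum TYPE STANDARD TABLE OF ty_pool_sum.

CALL FUNCTION 'GET_POOL_SUM'
  TABLES
    pool_sum = lt_pool_sum.   " hypothetical parameter name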

Regards,

Joan B. Altadill

Accepted Solutions (1)

Former Member

Hi Joan,

nice to hear from you again :-)

the values are "a bit crazy" ....

They are stored in the format xxx,yy - that is, with two implied decimal places.

It is supposed to be an average over the hour, so 44.051 would mean 440,51 page faults/s as the average for that hour.

Please keep in mind that "pages" are not of real interest (at least as long as the disks are fast), since "pages" are asynchronous reads ...

Regards

Volker Gueldenpfennig, consolut international ag

http://www.consolut.com http://www.4soi.de http://www.easymarketplace.de

Former Member

Hi Volker,

Still fighting with performance 😄

Our response time is always a nightmare due to our Z code. In our case there is no way to attack the root cause: poor and inefficient code. So the only tools available to me are hardware and SAP tuning. But SAP tuning has its limits, and beyond them only additional hardware helps.

So, as a trial, we have activated tons of CPU and memory just to demonstrate the economic impact of an inefficient system. The response time has decreased enough, especially thanks to the extra memory. So I want to create a ratio that tracks page faults, not the number of pages, which is not useful, as you said; in fact, the numbers in my example came from the faults data. As I do with disk, with excellent results, I want to use our BI system to forecast when extra memory will be necessary and be proactive. Our system works correctly until we reach a value of 1.000 faults/s; higher values mean poor performance, always assuming CPU usage is not overloaded as well.
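A sketch of the check I have in mind, assuming the xxx,yy reading you described (raw values carry two implied decimals; all variable names are illustrative):

* Illustrative only: count the hours of a day whose average DB fault
* rate exceeds our 1.000 faults/s pain threshold.
TYPES ty_rate TYPE p LENGTH 8 DECIMALS 2.

DATA: lt_raw  TYPE STANDARD TABLE OF i,  " raw hourly DB_FLT values
      lv_raw  TYPE i,
      lv_rate TYPE ty_rate,
      lv_bad  TYPE i.

APPEND 44051  TO lt_raw.   " would mean 440.51 faults/s
APPEND 110000 TO lt_raw.   " would mean 1100.00 faults/s

LOOP AT lt_raw INTO lv_raw.
  lv_rate = lv_raw / 100.        " two implied decimal places
  IF lv_rate > '1000.00'.
    lv_bad = lv_bad + 1.
  ENDIF.
ENDLOOP.

WRITE: / 'Hours above the pain threshold:', lv_bad.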

But are these values from GET_POOL_SUM reliable? I'm trying to compare them with data obtained from PM/400 and they never match... Is there any other source of information?

Regards,

Joan

Former Member

Hi Joan,

very interesting, what you are writing :-))

I do have some suspicion that they are not really reliable either ...

If you are able to demonstrate discrepancies with PM/400, I would open an OSS ticket and check this with IBM / SAP at a "detailed level" ...

Regards

Volker Gueldenpfennig, consolut international ag

http://www.consolut.com http://www.4soi.de http://www.easymarketplace.de

Answers (1)


Hi Joan,

function module GET_POOL_SUM was not intended to be called manually, which is why it is not really documented. First of all, Volker was right: the values need to be divided by 100 to get the real value, i.e. a value of 44051 needs to be interpreted as 440.51 (four hundred forty point fifty-one). This is done to allow transferring integer values between the kernel and ABAP for data that does not represent integer values.
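In ABAP terms, the conversion is simply a division by 100; a minimal sketch (variable names are illustrative):

DATA: lv_raw    TYPE i VALUE 44051,  " raw value from GET_POOL_SUM
      lv_faults TYPE p LENGTH 8 DECIMALS 2.

* Two implied decimal places: 44051 becomes 440.51 faults/s.
lv_faults = lv_raw / 100.
WRITE: / 'Average DB faults/s:', lv_faults.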

In addition, the data in GET_POOL_SUM represents the average page faults per second during one-hour intervals. In other words, if you see a value of 20000 (meaning an average of 200 page faults per second), it could represent a first half hour with no page faults at all and a second half hour with 400 page faults per second. This may not be very helpful, but at a very early stage SAP decided to provide 1-minute averages for the snapshot data and 1-hour averages for the history data, and it is now hard to change that.

The PM/400 data may be more granular (5- or 15-minute intervals), but if you combine the intervals that represent one hour and compute the averages there, you should see roughly the same data as with GET_POOL_SUM.
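As a sketch of that comparison (all names illustrative): average the twelve 5-minute samples of one hour on the PM/400 side and compare the result with the GET_POOL_SUM value for that hour.

TYPES ty_rate TYPE p LENGTH 8 DECIMALS 2.

DATA: lt_samples TYPE STANDARD TABLE OF ty_rate, " 5-minute fault rates
      lv_rate    TYPE ty_rate,
      lv_sum     TYPE ty_rate,
      lv_avg     TYPE ty_rate,
      lv_n       TYPE i.

* ... fill lt_samples with the twelve 5-minute rates of one hour ...

LOOP AT lt_samples INTO lv_rate.
  lv_sum = lv_sum + lv_rate.
ENDLOOP.

DESCRIBE TABLE lt_samples LINES lv_n.
IF lv_n > 0.
  lv_avg = lv_sum / lv_n.   " should roughly match GET_POOL_SUM / 100
ENDIF.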

Kind regards,

Christian Bartels.

Former Member

Hi,

If the values are reliable, it is OK for us to use them as a performance ratio. There are lots of ways to calculate averages, but if we always use the same one and compare consistently, it can be a great help in resource capacity planning tasks. It does not matter that PM/400 shows different values, since the periods are different, as you explained.

Note that we collect these hourly averages and summarize them in a BI query as a monthly average (with a tricky algorithm). It helps us to see organic growth, which is linear, as well as disruptions, which are the real pain.

Regards,

Joan
