
Memory consumption by HANA processes

Former Member

Hi

I'm playing with the HANA SPS6 Developer Edition on Cloudshare. While about 19.5 GB of RAM is available from the OS point of view, as can be seen in the "free -m" output, most of this memory is allocated by the HANA processes themselves. In fact, I see that less than 3 GB is available for user data, which is a very tight restriction, especially given the in-memory nature of HANA.

I wonder if it is possible to decrease the memory allocation of some HANA processes, except for hdbindexserver of course. How can I check whether the memory allocation of the various HANA processes is adequate? Is it possible to change it?
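For reference, this is how I have been looking at the per-process numbers so far (a minimal sketch against M_SERVICE_MEMORY; I'm assuming the SPS6 column set, which may differ in other revisions):

-- memory used and heap allocated per HANA service, in MB
SELECT SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024) AS "Used MB",
       ROUND(HEAP_MEMORY_ALLOCATED_SIZE/1024/1024) AS "Heap Allocated MB"
FROM SYS.M_SERVICE_MEMORY
ORDER BY 2 DESC;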

Thanks in advance

Accepted Solutions (0)

Answers (2)


Former Member

I am encountering a similar issue: I cannot lower the minimum memory allocation.


I set the parameter global_allocation_limit to 2 GB (2048 MB) for all hosts; see the following snapshot. The setting does not take effect, but if I set the value to 20 GB (20480 MB) it works fine:


-- set the global_allocation_limit=2048
hdbsql=> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
| ALLOCATION_LIMIT                       |
| -------------------------------------- |
|                               13605.15 |


-- set the global_allocation_limit=20480

hdbsql=> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
| ALLOCATION_LIMIT                       |
| -------------------------------------- |
|                                  20480 |
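(For reference, I set the parameter with a statement like the one below; global.ini and the memorymanager section are the standard location for global_allocation_limit, and the value is in MB.)

-- set the global allocation limit to 20480 MB for the whole system
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('memorymanager', 'global_allocation_limit') = '20480' WITH RECONFIGURE;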

Former Member

Hi Syni

I guess that asking HANA to limit itself to such a small amount of RAM (2 GB) is simply asking too much of it. The figure of 13605.15 MB that you received is probably the absolute minimum it needs to allocate in order to function, at least by default.

Also, as I understand the documentation, global_allocation_limit is as global as it sounds, which means that it affects hdbindexserver too, which in turn limits the amount of user data in column tables that HANA can keep in memory. If that is correct, then restricting global_allocation_limit doesn't help us at all. It would be interesting to know whether it is possible to limit the memory allocation of individual HANA processes, and what the implications would be; see the sketch below for what I have in mind.
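If I read the documentation correctly, each service has its own ini file with a memorymanager section, so something like the following might cap a single process (a sketch only; I haven't verified that the allocationlimit parameter behaves this way on the Developer Edition):

-- assumption: per-service allocationlimit (in MB) in the service's own ini file
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
SET ('memorymanager', 'allocationlimit') = '512' WITH RECONFIGURE;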

vivekbhoj
Active Contributor

Hi Leonid,

Memory-related questions have been asked multiple times; you can refer to the threads below to learn more:

Queries for Used and Resident memory and comparison with Overview tab numbers:

http://scn.sap.com/thread/3424524

Queries on Memory measurement and related:

http://scn.sap.com/thread/3421768

You can also check the following documents regarding Memory Usage in HANA:

https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/hana-database-administration/monitori...

http://www.saphana.com/docs/DOC-2299

Regards,

Vivek

Former Member

Hi Vivek

I read the links you posted. Unfortunately, they don't make the situation clearer for me. Some of the queries listed there return very strange results; maybe those queries used to produce correct results in previous HANA versions (previous SPSs). Let's look at some examples from the links you sent me; I ran the queries on a HANA SPS6 Developer Edition instance available on Cloudshare:

From https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/hana-database-administration/monitori...:

-- Available Physical Memory: returns 19.53, which corresponds to the figure from "free -m" on Linux
select round((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY)/1024/1024/1024, 2) as "Physical Memory GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION;

-- Free Physical Memory: returns 12.74. This is not what I see on the Linux side
select round(FREE_PHYSICAL_MEMORY/1024/1024/1024, 2) as "Free Physical GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION;

In fact, I am failing to import and merge a relatively small number of rows into my table, which is the original reason for my question. When the "free memory" value reported by top or vmstat drops close to zero, I get out-of-memory errors during the merge in HANA. The current "free memory" value in the top output is 1205M. I don't think I have ever managed to allocate more than 2-2.5 GB for my table at any point in time, so the answer of 12.74 doesn't look real to me. In other words, I'm quite sure that I don't have 12.74 GB of RAM available for my data, not even close to that figure. Let's continue with the queries:

-- Total memory used: returns 36,786 (MB). Even if we sum the virtual memory values of all HANA processes (available from top), we don't get this value; the sum is actually bigger. It is not clear why, but that is not so important for now. It is also not clear what to do with the resulting value anyway.
SELECT round(sum(TOTAL_MEMORY_USED_SIZE/1024/1024)) AS "Total Used MB" FROM SYS.M_SERVICE_MEMORY;

-- Code and Stack Size: returns 29,875 (MB). I don't see what the meaning of this is or how it helps me.
-- (Presumably the code of shared libraries is counted once per service, which would explain why these sums exceed the physical RAM.)
SELECT round(sum(CODE_SIZE+STACK_SIZE)/1024/1024) AS "Code+stack MB" FROM SYS.M_SERVICE_MEMORY;

-- Total Memory Consumption of All Columnar Tables: returns 1,331 (MB), looks OK
SELECT round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB" FROM M_CS_TABLES;

-- Distribution by schema, also looks OK:
-- Schema;MB
-- LEONID;791
-- _SYS_REPO;500
-- _SYS_STATISTICS;38
-- _SYS_BI;2
SELECT SCHEMA_NAME AS "Schema", round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "MB"
FROM M_CS_TABLES
GROUP BY SCHEMA_NAME
HAVING round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) > 0
ORDER BY "MB" DESC;

I have similar problems with the queries from http://www.saphana.com/docs/DOC-2299.

The bottom line is that it is still not clear to me whether it is possible to reduce the current memory allocation of the various HANA processes, and if so, how. To be specific: does the current memory allocation of hdbnameserver make sense? Maybe I can decrease it to free up some memory? The sketch below shows the per-service picture I would base that on.
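This is a sketch against M_SERVICE_MEMORY (again assuming the SPS6 columns):

-- resident vs. used memory per service, in MB, to judge e.g. hdbnameserver's share
SELECT SERVICE_NAME,
       ROUND(PHYSICAL_MEMORY_SIZE/1024/1024) AS "Resident MB",
       ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024) AS "Used MB"
FROM SYS.M_SERVICE_MEMORY
ORDER BY 2 DESC;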

Also, does it sound normal that the HANA Developer Edition instance on Cloudshare is so severely limited in the space available for user data?

vivekbhoj
Active Contributor

Hi Leonid,

You can also check the System Monitoring Views in the SYS schema to get memory information; there are quite a few more memory-related views in there that you can look through.
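A quick way to find them (a sketch; it simply filters the view names in the SYS schema):

-- list the memory-related monitoring views in the SYS schema
SELECT VIEW_NAME FROM SYS.VIEWS
WHERE SCHEMA_NAME = 'SYS' AND VIEW_NAME LIKE '%MEMORY%'
ORDER BY VIEW_NAME;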

To free up memory, you can also unload some tables from memory, for example like this:
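(UNLOAD is a standard statement; the table name below is just a placeholder.)

-- unload a column table from memory; it will be loaded again on next access
UNLOAD "LEONID"."MY_BIG_TABLE";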

I don't know whether it is possible to reduce the current memory allocation of the various HANA processes, though.

Regards,

Vivek