Former Member

Memory consumption by HANA processes

Hi

I'm playing with the HANA SPS6 Developer Edition on Cloudshare. While about 19.5 GB of RAM is available from the OS point of view, as seen in the output of "free -m", most of this memory is allocated by the HANA processes themselves. In fact, less than 3 GB is left for user data, which is a very tight restriction, especially given the in-memory nature of HANA.

I wonder whether it is possible to decrease the memory allocation of some HANA processes, other than hdbindexserver of course. How can I check whether the memory allocated by the various HANA processes is adequate, and is it possible to change it?
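So far the closest I have found is querying the monitoring views per service. A minimal sketch, assuming the M_SERVICE_MEMORY view and its TOTAL_MEMORY_USED_SIZE / PHYSICAL_MEMORY_SIZE columns (please verify the column names against your revision's documentation):

-- list memory usage per HANA service (values in MB)
select service_name,
       round(total_memory_used_size/1024/1024, 2) as used_mb,
       round(physical_memory_size/1024/1024, 2) as resident_mb
from m_service_memory
order by total_memory_used_size desc;

This shows how much each process (nameserver, preprocessor, indexserver, ...) actually uses, but not how to change it.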

Thanks in advance



2 Answers

  • Former Member
    Posted on Sep 27, 2013 at 02:58 AM

    I am running into a similar issue: I cannot lower the minimum memory allocation.


    I set the global_allocation_limit parameter to 2 GB (2048 MB) for all hosts; see the snapshot below. It does not take effect, but if I set the value to 20 GB (20480 MB), it works fine:


    -- set the global_allocation_limit=2048
    hdbsql=> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
    | ALLOCATION_LIMIT |
    | -------------------------------------- |
    | 13605.15 |


    -- set the global_allocation_limit=20480

    hdbsql=> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
    | ALLOCATION_LIMIT |
    | -------------------------------------- |
    | 20480 |
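    For reference, this is how I set the parameter: a sketch of the standard configuration syntax, with global_allocation_limit in the memorymanager section of global.ini and the value given in MB.

    -- set the instance-wide limit at the SYSTEM layer and apply it without a restart
    alter system alter configuration ('global.ini', 'SYSTEM')
        set ('memorymanager', 'global_allocation_limit') = '20480'
        with reconfigure;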


    • Former Member

      Hi Syni

      I guess that asking HANA to limit itself to such a small amount of RAM (2 GB) is simply too little for it. The figure of 13605.15 that you received is probably the absolute minimum that it needs to allocate in order to function, at least by default.

      Also, as I understand the documentation, global_allocation_limit is as global as it sounds: it affects hdbindexserver too, which in turn limits the amount of user data in column tables that HANA can keep in memory. If that is correct, then restricting global_allocation_limit does not help us at all. It would be interesting to know whether it is possible to limit the memory allocation of individual HANA processes, and what the implications of doing so would be.
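      If per-process limits are supported at all, I would expect them to look something like the sketch below. This is an assumption only: it presumes a service-level allocationlimit parameter in the memorymanager section of the service's own ini file, so please verify the parameter name against the SPS6 documentation before relying on it.

      -- hypothetical example: cap the nameserver at 1024 MB via its own ini file
      alter system alter configuration ('nameserver.ini', 'SYSTEM')
          set ('memorymanager', 'allocationlimit') = '1024'
          with reconfigure;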

  • Posted on Sep 26, 2013 at 08:55 AM

    Hi Leonid,

    Memory-related questions have been asked many times before.

    You can refer to the threads below to learn more:

    Queries for used and resident memory, and comparison with the Overview tab numbers (see also the sketch after the links):

    http://scn.sap.com/thread/3424524

    Queries on Memory measurement and related:

    http://scn.sap.com/thread/3421768

    You can also check the following documents regarding Memory Usage in HANA:

    https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/hana-database-administration/monitoring-landscape/memory-usage/

    http://www.saphana.com/docs/DOC-2299
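    As a quick starting point on the used vs. resident comparison, a query along these lines works. This is a sketch against M_HOST_RESOURCE_UTILIZATION; verify the column names on your revision.

    -- host-level memory in GB: resident vs. HANA used vs. allocation limit
    select host,
           round(used_physical_memory/1024/1024/1024, 2) as resident_gb,
           round(instance_total_memory_used_size/1024/1024/1024, 2) as hana_used_gb,
           round(allocation_limit/1024/1024/1024, 2) as allocation_limit_gb
    from m_host_resource_utilization;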

    Regards,

    Vivek

