
HANA Deadlock and Out Of Memory

Former Member

Hi all,

While monitoring our HANA One server, we noticed that memory consumption was climbing uncontrollably. We stopped all of our applications that were pushing data to the HANA database, but that didn't change anything: memory consumption still kept going up.

Having had problems with the indexserver before, we opened the indexserver's log to check what was going on, and we saw this:

[15548]{207448}[406931110] 2013-04-25 07:23:37.818999 e Lock         WaitGraph.cc(00504) : Deadlock detected: Deadlock detected while executing transaction (TRANSACTION_ID=173, UPDATE_TRANSACTION_ID=406931110):

This is not an HDB error. User or application may cause a deadlock due to incorrect access sequences on shared objects.

current tx CONNECTION_ID=7448, CLIENT_HOST=ip-10-101-27-234.ec2.internal, CLIENT_IP=10.101.27.234, CLIENT_PIP=18844, TRANSACTION_ID=173, TRANSACTION_ADDRESS=0x7f5354df3000, TRANSACTION_TYPE=USER, TRANSACTION_STATE=ACTIVE, TRANSACTION_CACHING_FLAG=0, TRANSACTION_ISOLATION_LEVEL=RC, THREAD_ID=15548, UPDATE_TRANSACTION_ID=406931110, QUERY_BUFFER_SIZE=35432, NUM_OPENED_CURSORS=0, SQL_STATEMENT="UPSERT "SOGAMO"."PLAYER_GAME" SELECT "ID_1", "GAME_ID", MIN("LOGIN_DATETIME"), MAX("LAST_ACTIVE_DATETIME") FROM "_SYS_BIC"."sogamo-web/AT_SESSION_DETAILED" GROUP BY "ID_1", "GAME_ID""

  LOCK_TYPE=RECORD_LOCK, LOCK_MODE=EXCLUSIVE CONTAINER_ID=0, TABLE_ID=162815, TABLE_NAME=SOGAMO:PLAYER_GAME, RECORD_ID=[CS:OID=0x0000013b, PARTID=0x0, OFFSET=0x8001d728]

blocked by CONNECTION_ID=7151, CLIENT_HOST=ip-10-204-101-225.ec2.internal, CLIENT_IP=10.204.101.225, CLIENT_PIP=7767, TRANSACTION_ID=123, TRANSACTION_ADDRESS=0x7f53aad58000, TRANSACTION_TYPE=USER, TRANSACTION_STATE=ACTIVE, TRANSACTION_CACHING_FLAG=0, TRANSACTION_ISOLATION_LEVEL=RC, THREAD_ID=15583, UPDATE_TRANSACTION_ID=406931117, QUERY_BUFFER_SIZE=35432, NUM_OPENED_CURSORS=0, SQL_STATEMENT="UPSERT "SOGAMO"."PLAYER_GAME" SELECT "ID_1", "GAME_ID", MIN("LOGIN_DATETIME"), MAX("LAST_ACTIVE_DATETIME") FROM "_SYS_BIC"."sogamo-web/AT_SESSION_DETAILED" GROUP BY "ID_1", "GAME_ID""

  LOCK_TYPE=RECORD_LOCK, LOCK_MODE=EXCLUSIVE CONTAINER_ID=0, TABLE_ID=162815, TABLE_NAME=SOGAMO:PLAYER_GAME, RECORD_ID=[CS:OID=0x0000013b, PARTID=0x0, OFFSET=0x80001a33]

blocked by current transaction (TRANSACTION_ID=173, see above for the details)

HANA had detected a deadlock: two concurrent transactions were each holding an exclusive record lock on the same table, PLAYER_GAME, while waiting for a lock held by the other. The SQL statement in question was an UPSERT with SELECT, which can fetch and upsert quite a considerable number of rows in a single transaction.
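One way we could avoid this particular deadlock (a sketch, untested on our side) would be to have every writer take a table-level lock up front, so concurrent UPSERTs serialize instead of interleaving record locks:

  -- Take the table-level exclusive lock first; a second writer now waits here
  -- instead of deadlocking on individual record locks.
  LOCK TABLE "SOGAMO"."PLAYER_GAME" IN EXCLUSIVE MODE;
  UPSERT "SOGAMO"."PLAYER_GAME"
    SELECT "ID_1", "GAME_ID", MIN("LOGIN_DATETIME"), MAX("LAST_ACTIVE_DATETIME")
    FROM "_SYS_BIC"."sogamo-web/AT_SESSION_DETAILED"
    GROUP BY "ID_1", "GAME_ID";
  COMMIT;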

While we will probably break this UPSERT down into smaller transactions going forward, what troubled us was that even after we stopped all processes hitting that particular table, HANA's memory consumption continued to climb. Eventually, other processes were cut off from HANA as well, with the following errors:

SAP DBTech JDBC: [2048]: column store error: [29020] exception 29020: PersistenceLayer/DeltaLog.cpp:1214 ltt::exception caught while operating on I SOGAMO WORKER_INDIVIDUAL_ERROR D $delta$ exception 1000002: ltt/impl/memory.cpp:91 Out of memory ; $size$=4194304; $name$=Page; $type$=pool; $inuse_count$=578574; $allocated_size$=37980340224


[129]: transaction rolled back by an internal error: Memory allocation failed


Finally, we couldn't connect to HANA at all and had to restart it. Thankfully, HANA started up just fine this time. We have had a worse experience where HANA crashed (also due to out of memory, though for different reasons) and we actually couldn't start it back up; we had to recover from a backup.


When we encounter a table deadlock, is there a way to recover gracefully rather than run out of memory, crash, and restart the database? We have no problem avoiding a long UPSERT and learning from our mistakes, but we fear the risk of losing data while in the process of doing so.
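For reference, the batched version we have in mind looks roughly like this (a sketch; the GAME_ID ranges are hypothetical and would come from our own data):

  -- Upsert one GAME_ID range per transaction and commit each batch, so a
  -- single transaction holds fewer record locks and less memory at a time.
  UPSERT "SOGAMO"."PLAYER_GAME"
    SELECT "ID_1", "GAME_ID", MIN("LOGIN_DATETIME"), MAX("LAST_ACTIVE_DATETIME")
    FROM "_SYS_BIC"."sogamo-web/AT_SESSION_DETAILED"
    WHERE "GAME_ID" BETWEEN 1 AND 100   -- next batch: 101 to 200, and so on
    GROUP BY "ID_1", "GAME_ID";
  COMMIT;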


I have attached the log file of our HANA One's index server, as well as the OOM (out of memory) crash dump.


Thank you, and we look forward to your reply.

Accepted Solutions (0)

Answers (3)

Former Member

We had similar experiences with UPSERT without a primary key (PK) on very large volumes of data (10 billion+ rows). Help link for it: REPLACE | UPSERT - SAP HANA SQL and System Views Reference - SAP Library

Now, regarding your experience of not being able to log in and connect:

Have you looked at the indexserver.ini parameters reserved_connections and reserved_memory?

Also, SAP Note 1999997 (FAQ items 8 and 11) recommends these parameters; a SQL sketch for setting them follows the list:

global.ini -> [resource_tracking] -> enable_tracking = on

global.ini -> [resource_tracking] -> memory_tracking = on

global.ini -> [memorymanager] -> global_allocation_limit (value in MB)
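If it helps, these can also be set from the SQL console; a sketch (the global_allocation_limit value is only an example, size it in MB for your instance):

  ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('resource_tracking', 'enable_tracking') = 'on' WITH RECONFIGURE;
  ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('resource_tracking', 'memory_tracking') = 'on' WITH RECONFIGURE;
  -- Example value only: limit the instance to 45000 MB.
  ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('memorymanager', 'global_allocation_limit') = '45000' WITH RECONFIGURE;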

former_member254090
Discoverer

Hi Chadin,

Please check this URL. It may be helpful for this issue.

Please also refer to SAP Note 1917938 for the same issue.

Regards,

Rameshkumar Ramasamy

swapan_saha
Employee

Hi Chadin,

Based on what we see here, you are running out of the 60.5 GB of memory available for your use case. Do you know how much data you are using here? In HANA One, we recommend storing no more than 30-32 GB of (compressed) data within a 60.5 GB instance. Moreover, an UPSERT operation with SELECT generally takes a lot of memory.

Please check how much data you are using in this 60.5 GB instance and whether you can avoid the UPSERT if your memory is close to the limit.

Thanks,

Swapan

Former Member

Hi Swapan,

We removed an expensive UPSERT statement and indeed, it helped.

How do I find out how much data we are using? The administrator's view in HANA Studio has quite a few values. Is it "Database Disk Usage --> Data Volume Size/Disk Size"? It is 43.98 GB.

Thanks,

Chadin

swapan_saha
Employee

Hi Chadin,

Finding the exact memory utilization from data is beyond the scope of this thread, as it would take time and space to explain. However, you can easily get a good estimate of the memory used for data in several ways (a SQL sketch follows the list):

1. From HANA Studio as you reported

2. SAP HANA One Management Console (Rev 48 onward) Status dashboard: just click on Data Volume under Disk Usage

3. SSH to the HANA One instance and run:

  % cd /hanadata

  % du -ms
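You can also get a rough figure from SQL; a sketch using the M_CS_TABLES monitoring view (it covers column-store tables only, and only what is currently loaded, so treat it as an estimate):

  -- Approximate in-memory size of loaded column-store tables, in GB.
  SELECT ROUND(SUM("MEMORY_SIZE_IN_TOTAL") / 1024 / 1024 / 1024, 2) AS "DATA_GB"
  FROM "M_CS_TABLES";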

If you wish to learn more about monitoring,

1. HANA Studio

2. Help -> Help Content -> SAP HANA Administration Guide -> Monitoring SAP HANA System -> Monitoring Memory Usage

Now, I see your data size is nearly 44 GB, which is beyond what SAP HANA One recommends (30-32 GB). Your data has grown past the recommended limit, and given your use case you should expect issues related to the memory limit. Either limit your data size in HANA One, or explore other options, including buying an on-premise HANA appliance or using an SAP-hosted solution provided by approved partners.

Thanks,

Swapan
