Former Member

SAP HANA Persistent Layer

Hello Everyone

Could anyone please clarify the following for me?

When I read about the SAP HANA architecture, it says that the data is stored in-memory, where the reporting is done, and that when a power failure or disk failure occurs the data is recovered from the persistence layer.

So where do I see this persistence layer from HANA Studio? I am assuming that the tables under the Content folder are in-memory and that all the views built in the Catalog folder access the tables in the Content folder. So when a power failure occurs, where are these tables recovered from?

It might be a simple question, but I am a bit confused here 😉. Please explain it to me as clearly as possible.

Thank you.

Regards

Prashanth


1 Answer

  • Best Answer
    May 22, 2013 at 11:08 PM

    Hello,

    You can open the table (right click + Open Definition) and then switch to the "Runtime Information" tab to see information about memory vs. disk usage. Also notice the information on the Parts tab (listing partitions) and the Columns tab (listing individual columns). You can add more columns to this view by right-clicking in the table to get more detailed information.

    In general you do not need to worry about the table being persisted. The table is always on disk and can be fully or partially loaded into memory, where it is operated on.
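
    If you want to see the same numbers outside the Studio UI, the monitoring views expose them as well. Below is a minimal sketch against standard monitoring views - the schema and table names ("SAPSR3"."MYTABLE") are only placeholders, and the exact set of available columns can differ between revisions, so please check against your own system:

      -- In-memory footprint and load status of a column table
      SELECT SCHEMA_NAME, TABLE_NAME, LOADED,
             MEMORY_SIZE_IN_TOTAL, MEMORY_SIZE_IN_MAIN, MEMORY_SIZE_IN_DELTA
      FROM M_CS_TABLES
      WHERE SCHEMA_NAME = 'SAPSR3' AND TABLE_NAME = 'MYTABLE';

      -- Size of the same table in the persistence layer (on disk)
      SELECT SCHEMA_NAME, TABLE_NAME, DISK_SIZE
      FROM M_TABLE_PERSISTENCE_STATISTICS
      WHERE SCHEMA_NAME = 'SAPSR3' AND TABLE_NAME = 'MYTABLE';

      -- Load or unload the table; the copy in the persistence layer is not affected
      LOAD "SAPSR3"."MYTABLE" ALL;
      UNLOAD "SAPSR3"."MYTABLE";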

    I hope this helps...

    Tomas


    • Hello,

      "innovation is SAP HANA avoids expensive database operation"

      This can be interpreted in many ways - for example, that SAP HANA stores data in memory (not as cached blocks in a data cache like other databases, but in a very structured way optimized for faster processing), thus avoiding the need to go to disk when data is not in the cache, etc.

      It can also be interpreted to mean that SAP HANA stores data in columnar tables - therefore, for statements where you need to analyze a large number of rows but only a few fields, you do not need to retrieve the complete row and throw everything else away; you retrieve only the required fields, because you work only with the required columns, etc.
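
      As a small illustration (the SALES table and its columns are made up for this example): with a wide column table, an aggregation like the one below only has to touch the two referenced columns, whereas a row store would have to read through every complete row:

        -- Wide column table; the analytic query touches only 2 of its columns
        CREATE COLUMN TABLE SALES (
            ID INTEGER, REGION NVARCHAR(10), PRODUCT NVARCHAR(40),
            CUSTOMER NVARCHAR(80), QUANTITY INTEGER, AMOUNT DECIMAL(15,2)
            -- ... many more columns ...
        );

        SELECT REGION, SUM(AMOUNT)
        FROM SALES
        GROUP BY REGION;  -- only the REGION and AMOUNT column vectors are scanned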

      In other words, this statement is very generic...

      Regarding the second part - storing data in columnar tables gives a huge boost to select performance (in particular for selects of a few fields against a huge number of rows) - however, the drawback is that "row based" types of operations (in particular insert, update, delete) become more expensive, as they would require a rebuild of the complete table - so SAP developed a way to mitigate the degradation of these operations using a "delta store"...

      Every columnar table is equipped with a delta store (you can imagine it as an additional internal table which is completely transparent to all operations - it is one logical table composed of these two parts) which is optimized for row-based operations, and all row-based operations are done there (therefore there is no need to rebuild the complete column table each time a row operation is done).
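
      You can actually see both parts of such a logical table in the monitoring views - a sketch, reusing the hypothetical SALES table from above (column names may differ slightly between revisions):

        -- Main store vs. delta store of one logical column table
        SELECT TABLE_NAME,
               RECORD_COUNT,               -- records visible in the logical table
               RAW_RECORD_COUNT_IN_DELTA,  -- records currently sitting in the delta store
               MEMORY_SIZE_IN_MAIN,
               MEMORY_SIZE_IN_DELTA
        FROM M_CS_TABLES
        WHERE TABLE_NAME = 'SALES';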

      Therefore an update statement will not adjust values in the column-based table directly - instead it inserts a new record into the delta store, invalidating the "original" record in the column store. When you then run a select statement, both parts (column table and delta store) are processed and the correct result is returned...

      From time to time (when exactly is controlled either by internal SAP HANA rules or, optionally, by an application such as BW itself) an operation called "Delta Merge" is triggered, which takes the content of the delta store and includes it in the column table itself (rebuilding the table) - this internal operation is online and invisible to the application (there is no disruption).
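
      If you ever want to request this manually (for example after a mass data load), it can be done in SQL - again just a sketch using the hypothetical SALES table; normally you simply leave the merge to the automatic rules:

        -- Manually request a delta merge and check when the last merge ran
        MERGE DELTA OF "SALES";

        SELECT TABLE_NAME, LAST_MERGE_TIME, MERGE_COUNT
        FROM M_CS_TABLES
        WHERE TABLE_NAME = 'SALES';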

      I hope this clarifies the concept.