
Replicated SAP HANA data reclaim without losing business time

Aug 15, 2017 at 03:34 PM



Dear SAP experts,

I have a productive, replicated (HA) HANA system (with SAP ERP running on it) where I need to run a DATA RECLAIM. The problem is that replication has to be deactivated for it (see SAP Note 1999880 - FAQ: SAP HANA System Replication, section '19. How can RECLAIM DATAVOLUME be executed when system replication is active?'), but the business cannot afford that; only shorter downtimes (5-6 hours) are possible.

Do you have any experience with splitting the data reclaim into several smaller (shorter) slices, running it with replication active, or executing it faster?

The data size (persistent layer) is 7.7TB (RAM is 3.5TB) and the fragmentation is almost 50%.
On the test system, which is an exact copy of this system, it ran for 40 hours. I cannot run something on production that turns off HA for 40 hours.

Any advice helps.

Many thanks,



1 Answer

Best Answer
Venkata Ramakrishna Duggisetty Aug 17, 2017 at 02:16 PM

Hi Kyle,

You can find the total disk size (of the data volume) and the used disk size in the studio and then split the task into phases.

The sizes should be taken for the index server of the particular node.

Example: if your total disk size is 2 TB and your usage is 500 GB, you can reduce the volume from 2 TB to 1.75 TB (a reduction of 250 GB) as the first phase.
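For reference, the used and total sizes per service can also be read via SQL from the monitoring view M_VOLUME_FILES (a sketch; view and column names as documented for HANA, but double-check them on your revision):

```sql
-- Used vs. total size of the data volume files per service (values in bytes)
SELECT HOST, PORT, USED_SIZE, TOTAL_SIZE,
       ROUND(100 * USED_SIZE / TOTAL_SIZE, 1) AS USED_PCT
FROM   M_VOLUME_FILES
WHERE  FILE_TYPE = 'DATA';
```

The row for the indexserver port of the node in question gives you the numbers to plan the phases with.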


You can also follow the approach described in the article below from NetApp:

Best Regards,
Venkata Ramakrishna D


Thank you for your answer, Venkata. Do you have experience with cases where you had to stop the reclaim during execution? Is there any risk in cancelling the job?




There is no option to actually "cancel" the command.

In case the session, the thread(s) or the indexserver process aborts during execution, there won't be any data corruption, as the reclaiming process works internally in steps, so that with every I/O operation there is always a consistent, restartable data representation on disk.

One option to mitigate the long-running reclaim job, as mentioned in the linked article, is to run smaller reclaim jobs. So instead of reclaiming down to 150% at once, you might run several reclaims, say to 220%, 200%, 180% and then 150%. That way, you can provide time windows in which your replication can catch up.
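Sketched as SQL, this staged approach could look like the following (the host:port 'hana01:30003' is a made-up example for the indexserver of the affected node; the percentage is the target data volume size relative to the used payload):

```sql
-- Phase 1: only shrink the data volume to ~220% of the payload size
ALTER SYSTEM RECLAIM DATAVOLUME 'hana01:30003' 220 DEFRAGMENT;
-- ...wait for system replication to catch up, then continue in later windows:
ALTER SYSTEM RECLAIM DATAVOLUME 'hana01:30003' 200 DEFRAGMENT;
ALTER SYSTEM RECLAIM DATAVOLUME 'hana01:30003' 180 DEFRAGMENT;
ALTER SYSTEM RECLAIM DATAVOLUME 'hana01:30003' 150 DEFRAGMENT;
```

Each statement is a shorter job than a single reclaim to 150%, so the replication gap after each phase stays manageable.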


Thank you for your answer, Lars, much appreciated.