
SLT and HANA deleting archived data

justin_molenaur2
Contributor

From the SLT Operations Guide, section 5.2 I can see the following statement

5.2 Archiving Data in Source Systems

The trigger-based replication also considers deletions in source tables caused by archiving activities (since it is not possible to distinguish at the database level between delete actions caused by archiving and regular deletion of data records). As a consequence, SAP LT Replication Server will also replicate archiving activities as delete actions in the SAP HANA database.

In a typical standalone/sidecar implementation of SAP HANA, I would assume that in most cases this is not favorable behavior or a desired function. In typical DW/DataMart implementations, the data should be persisted in the target even after the source system data has been archived. I can refer back to how BW operates in this case - any new/changed data is extracted to BW, but archiving operations do not affect the already extracted data in the target system.

I know there is functionality available to load archived data into HANA, but that seems like a troublesome method to 'put the pieces back together' and get a holistic picture of all the historical data (online data + archived objects), and it would present some interesting challenges in the target (HANA).

Is there any way to disable the functionality to replicate deleted data due to archiving? Is there anyone with some experience navigating around this hurdle in a standalone/sidecar scenario that can shed some light as to how they handled this?

Thanks,

Justin

Accepted Solutions (1)

tobias_koebler
Advisor

Hi Justin,

this is a functional gap at the kernel level, so there is no standard approach so far. But let me share my ideas.

1. Delete triggers before and recreate triggers after the archiving run

     If you can ensure that no productive data/transactions are processed during the archiving run, you can delete the triggers before the run and recreate them afterwards. The result: the triggers cannot record changes, so the archiving deletes will not cause deletes on the HANA system.

     Deleting triggers can be executed via the expert functions on the SLT system in transaction LTRC.

2. Define a transformation rule that excludes deletions

     You could define a rule that checks the operation, and if it is a delete you can skip the record from processing. This rule has to be defined before you start the archiving run, but again you have to ensure that no productive data (that does not belong to the archiving run) is deleted.

Both options require manual steps, so we are looking to improve this - but due to the fact that this is a core kernel story, it is not an enhancement that can be achieved quickly.

Best,

Tobias

justin_molenaur2
Contributor

Tobias, thanks for the information. In the BW world, this would be something like deleting any change pointers/delta initializations during the removal of any unneeded data, then turning them back on at a later time. I would see this as a viable option vs. the second point, as in some processes the deletion of data should be replicated.

Do you have any reference material to the option you mention in your first point?

Thanks,

Justin

tobias_koebler
Advisor

To clarify, both options are more project/workaround solutions - nothing that is standardized - because if you do not process correctly/ensure that no "usual" deletes are processed, you will run into inconsistencies.

Here are the usual steps to go:

1.) SLT: Stop the master job: TX: LTR -> Your configuration -> Stop Master Job

2.) Source System: Delete the relevant trigger: TX: IUUC_REMOTE -> Expert Functions -> Delete Trigger -> Select the relevant tables

3.) Archiving Run - ENSURE THAT NO DELETES OTHER THAN THE ARCHIVING DELETES ARE PROCESSED - SLT CANNOT TAKE CARE OF THIS

4.) SLT: Reset Trigger Flags: TX: LTRC -> Your configuration -> Tab Expert Functions -> Button Reset Trigger and Logging Table Status -> Select your relevant tables -> Check Reset Trigger created -> execute

5.) SLT: Activate Trigger Again: TX: LTRC -> Your configuration -> Tab Processing Steps -> Button Activate DB Trigger -> Input your relevant table at the table name selection -> execute

6.) SLT: Restart the master job: TX: LTR -> Your configuration -> Restart Master Job

Afterwards your replication is back up and running. You can try it on a test system - without step 3.

Best,

Tobias

Former Member

Hello Tobias ,

Thanks for the two options. In our production environment, data is deleted only during archival (no productive data deletion by users), so your two aforesaid options can be adopted. Could you please share your thoughts on the questions below?

Option 1: Delete triggers. Can we remove only the delete trigger (leaving Ins, Upd1 and Upd2) from the source system? Since IUUC_REMOTE deletes all 4 triggers, any suggestion on removing the DEL trigger without a DBA?

Can we use the field 'Trigger only Ins/Upd/Del' set to "No Delete (I + U)" in the Activate DB Trigger report from LTRC - Processing Steps for this?

Option 2: Transformation rule.

Are you referring to an ABAP include to exclude the delete operations, included for all tables via IUUC_REPL_CONTENT? Do you have any reference guide for creating this rule?

Thanks

Thomas

tobias_koebler
Advisor

Hi,

Option 1:

I never tried option 1, but you can display all active triggers via transaction IUUC_REMOTE in the source system. When you click "List Triggers" you also find the names; delete triggers end with /1LT/<number>DEL. When you go back to the expert functions of this transaction you see the option to delete triggers. There you could specify only the delete triggers and get rid of them. Let me know if this is working for you.
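If you first want to see which delete triggers exist, a quick sketch using ADBC from the source system could look like the following - this assumes an Oracle source database, where trigger names are visible in the user_triggers dictionary view (other databases have different catalogs):

```abap
" Sketch only: list the SLT delete triggers on an Oracle source database.
" user_triggers is Oracle-specific; adjust the catalog view for other DBs.
DATA lt_names TYPE TABLE OF string.

DATA(lo_result) = NEW cl_sql_statement( )->execute_query(
  `SELECT trigger_name FROM user_triggers WHERE trigger_name LIKE '/1LT/%DEL'` ).
lo_result->set_param_table( REF #( lt_names ) ).
lo_result->next_package( ).
lo_result->close( ).
" lt_names now holds the names of the delete triggers
```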

Option 2:

Go to IUUC_REPL_CONTENT and create a new rule in IUUC_ASS_RULE_MAP. An ABAP include also has to be specified where you define the rule.

This is the coding of the ABAP include:

If the code detects the operation "D" (delete), it will skip the record and will not transfer it to the target. Transformations can also be applied only during the timeframe in which you execute the archive run, if you can ensure that no usual deletes will be executed.
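A minimal sketch of such an include could look like this (matching the working version posted further down in this thread; it assumes the SLT-provided record reference i_p1 and the operation flag field IUUC_OPERAT_FLAG):

```abap
" Sketch of the skip-delete include for an SLT transformation rule.
FIELD-SYMBOLS: <ls_data>      TYPE any,
               <lv_operation> TYPE any.

" i_p1 references the record currently being processed by SLT
ASSIGN (i_p1) TO <ls_data>.

" The operation flag is 'I' (insert), 'U' (update) or 'D' (delete)
ASSIGN COMPONENT 'IUUC_OPERAT_FLAG' OF STRUCTURE <ls_data> TO <lv_operation>.

" Skip delete records so archiving deletes are not replicated to HANA
IF sy-subrc = 0 AND <lv_operation> = 'D'.
  SKIP_RECORD.
ENDIF.
```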

Option 3: With DMIS 2011 SP5 we also introduced a third option. The idea is that you have an additional application server added to the productive system, and only the archive run is executed on this additional application server. The next step is to have a transformation rule on SLT (see above) with which you skip all records that come from this additional app server (because you know those deletes are caused by the archive run). The big advantage is that you do not block any users on the productive system; everything can work normally.

Best,

Tobias

Former Member

Thanks Tobias , Appreciate your help on this .

Prathish Thomas

Former Member

Hi Tobias,

I have tried option 1 - it does not work, as I have no option in transaction IUUC_REMOTE to delete just the delete triggers (/1LT/<number>DEL). It always deletes all three triggers.

Option 2: The code does not skip the deletes. Is there any option to debug this?

I don't have the option of testing option 3.

Please let me know any inputs for option 1 (deleting SLT triggers) and option 2 (skipping the delete records).

Thanks so much

-Hari

Former Member

Hi Tobias,

We have been following up with SAP on this topic since 2012, and late last year SAP provided the same answer as you mentioned. I just wonder if you know of any SAP customer that has implemented that idea in production. The reason I call it an idea instead of a solution is that in the real world it's very difficult to guarantee that no user/job other than the archiving job is running on that dedicated application server, particularly in a busy ECC system with thousands of users logging on every day and jobs running around the clock. Also, do you know how to recover from a mistake? E.g. a user logs on to the archiving application server and runs a transaction that deletes some data from ECC.

Thanks,

Xiaogang Pan

Former Member

Option 2 skipped the records in my production system. I used the ABAP code below:

 

FIELD-SYMBOLS: <ls_data>      TYPE any,
               <lv_operation> TYPE any.

ASSIGN (i_p1) TO <ls_data>.

ASSIGN COMPONENT 'IUUC_OPERAT_FLAG' OF STRUCTURE <ls_data> TO <lv_operation>.

IF sy-subrc = 0 AND <lv_operation> = 'D'.
  SKIP_RECORD.
ENDIF.

Delete records are getting skipped during replication; however, those records are still present in the logging tables, and we had to remove those records from the logging tables manually after the archival process.

Shubhrajit
Explorer

Hi Tobias,

Thanks for your input. As per my understanding, the process you mentioned will skip all deleted records.

But if I want to skip only archived records, is there any way to set a flag against those deleted 'archived' records?

Thanks,

Shubhrajit Chowdhury

Answers (3)

Jonathan_Haun
Participant

Interesting solution for sidecar implementations. I could see how customers might want this functionality, and SAP should look to add an easier option to ECC/SLT - for example, a "do not replicate delete operations" check box per table in SLT.

Although, I wonder if it would just be better not to archive the data if the data is that important to real-time reporting. I work with several SoH customers and they would simply lose the data if they archived (assuming they don't copy it to another location before archiving). In my opinion this boils down to the fundamental difference between synchronous real-time operational reporting and asynchronous data warehousing. Sidecar is almost a hybrid, but it leans closer to synchronous real-time than a traditional batch load solution.

justin_molenaur2
Contributor

I was not actively following this conversation lately, but there is a very simple solution, just as you laid out, Jon.

There is a set of configuration tables behind the scenes that we are using to avoid deletes due to archiving; it involved no include coding or any specific server, etc.

Go to SE16 - table name IUUC_REPL_TABSTG:

Create an entry for the table you want to avoid deletes on.

The field NO DEL TRIG should be marked as "true" to keep deletions from being replicated to HANA.

This has been used productively at a customer for the past 6 months and has worked perfectly.

Now what I DON'T know is whether this is global - avoiding passing ALL deletes to HANA - or just the archived deletes. Maybe Tobias can answer, but clearly this is a much better option than those outlined above. There are rarely hard deletes in SAP source systems, so I don't see much risk here.

Happy HANA (and SLT),

Justin

Jonathan_Haun
Participant

Thanks for the info.

There was also mention of a solution in the following guides. The English was a little off in the first two links, so I did not quite understand what the result would be. It either adds the archived data to the SAP HANA table or it creates a new archive table in the same SAP HANA MT schema.

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/1008bac2-6c05-3010-6c86-f2bb57ec9...

http://service.sap.com/sap/support/notes/1652039

5.2 Archiving Data in Source Systems: The trigger-based replication also considers deletions in source tables caused by archiving activities (since it is not possible to distinguish at the database level between delete actions caused by archiving and regular deletion of data records).

As a consequence, SAP LT Replication Server will also replicate archiving activities as delete actions in the SAP HANA database. If archived data of SAP source systems should also be available in the SAP HANA database, you can use report IUUC_CREATE_ARCHIVE_OBJECT.

The report creates a replication object, allows the selection of relevant tables of an archive object, and loads the archived data (by the date of the archiving session) into the respective schema.

As a technical prerequisite, related SAP ILM (Information Lifecycle Management) APIs need to be available in the SAP source system. For more information, see SAP Note 1652039. Note: Similar to the initial load procedure, for the archive load procedure, no trigger and no logging tables are created.

justin_molenaur2
Contributor

Yes, this works fine - I have implemented this as well. This loads archived data into the same table of the schema/configuration. You need to ensure that the table is already in replication/created, and then you can run through the steps mentioned.

So this would ADD already archived data, while the topic here is avoiding the deletes.

Regards,

Justin

Jonathan_Haun
Participant

That's good to know. Just out of curiosity, if you were able to stop the delete triggers, what would happen if you stopped and restarted replication 6 months later on a table that has been archived? I would assume that you would lose the archived data in the main table as it is reloaded into SAP HANA. You would then have to restore it back from the archive using the method above?

justin_molenaur2
Contributor

That's pretty much correct. Load/Replication operations can only work on data that is online.

Another solution in this specific case would be to start replication again without dropping the target table in HANA, preserving what is already there. You can do this with the info I posted in this thread.

I haven't tried it directly, but it seems as if this should work. I am unsure of the behavior for existing data - whether it will be an update or an insert.

Otherwise, as you mention you'd have to reload from archive - not a simple task.

Regards,

Justin

Jonathan_Haun
Participant

Hopefully a future SP will include an easy button when replicating a table. Given that all the components are in place, an option to include archive data on a per-table basis could be added to SLT. Wishful thinking, I know, but why not just add another column to IUUC_REPL_TABSTG to manage this?

However, it's still a complex issue to solve when running SoH. Merging the data (on the fly) between two tables would require a lot of calculation or SQL engine overhead. It might just be that archiving is not necessary with SAP HANA, but I am sure there are some who would disagree.


Hi Tobias,

Option 3 (having a dedicated archiving server) seems an interesting option. However, it's not clear to us how to implement it. Say we have 2 ECC servers to simplify this: one for regular activities and one for archiving. But we have only one SLT server, so how would we distinguish the archiving server in the ABAP rule? Is there any easy way to do that? Could you please elaborate?


We have several BI projects impacted by archiving. I am sure many customers are facing this situation. Do you have any news on the long-term kernel standard solution?


Thanks a lot,

Christian

Former Member

Christian,

We have been working with SAP AGS support on option 3 for a while and are facing some challenges with the dedicated archiving server in ECC; we believe that without some enhancement at the kernel level, there is no guarantee that the application server will run only archiving jobs.

We asked SAP AGS support to submit the kernel enhancement requirement to the SAP development team, but no date has come back to us yet on when it will be available. I can let you know once we have a date on that.

Cheers,

Xiaogang.


Thanks Xiaogang for the answer. Could you elaborate a little bit on the challenges you are facing in isolating the archiving server? Is this mainly preventing users from logging on to that server, or is it more technical issues with the replication transformation on the SLT side?

Christian

sam_venkat
Explorer

Hi, could you elaborate on the challenges you are facing with respect to app server isolation for archiving? Thanks.

-Sam

Former Member

To isolate an application server, it is not only about limiting dialog user logons; we have to consider all work process types like BTC/UPD/RFC... There is no existing function within SAP that can do that, so SAP asked us to use various functions intended for other purposes which can help isolate application work processes. E.g. we set the archiving job as class A and then use security to control which users can run class A jobs, and we use logon groups for user login and exclude that application server from all logon groups except the archive logon group we created just for running the archive job. It's complicated and sometimes has side effects. We put all SAP recommendations into our system and ran it for months; monitoring the system, we found that from time to time there were users/processes running on the dedicated application server that were not supposed to be there, and no one from SAP could explain why. The conclusion is that the existing functionality can't meet this requirement and a kernel change is required.

However, SAP recently came back with a new solution which doesn't require an isolated application server; instead it is user based, which is a much better option for us. It does have some limitations as of now, e.g. it is only available for the Oracle database for now. We are testing this new solution and so far the results are positive.

Cheers,

Xiaogang.


Hi Xiaogang,

we basically have the same situation. The isolated application server wasn't a valid solution for us. But I would be really interested in the solution based on a certain user. Could you explain a little bit how this solution works and what the technical basis for it is?

Cheers

Dirk

former_member117942
Participant

Hi Xiaogang,

we have tested the solution with a dedicated application server, but it is complicated and it seems to have some side effects, as described in your post.

Could you give us more details about this new SAP solution for replicating data without taking archiving deletions into account?

Regards

Maurizio

Former Member

Hello,

to "isolate" the application server dedicated to the archiving job we did this:

- create a job server group as per note 786412 (Determining execution server of jobs w/o target server); in this group we DON'T insert the AS dedicated to archiving

- in the profile of the AS dedicated to archiving we set the parameter rdisp/tm_max_no=0 so no DIA, RFC or login is possible on that AS: http://help.sap.com/saphelp_nw70/helpdata/EN/af/d8e6714c04264a81951c7cedf4cf4c/content.htm

- in the archiving customizing we set it to use the archiving AS

In this way the AS is totally dedicated to the archiving processes.


Regards

sam_venkat
Explorer

Does this mean setting "rdisp/tm_max_no" to 0 restricts only GUI, RFC, or HTTP logins, but BTC is not impacted? In other words, archiving jobs can execute fine. Does this also prevent UPD from other instances executing on the archiving AS instance? Thanks

-Sam

Former Member

Does this mean setting "rdisp/tm_max_no" to 0 restricts only GUI, RFC, or HTTP logins, but BTC is not impacted? --> yes

In other words, archiving jobs can execute fine. Does this also prevent UPD from other instances executing on the archiving AS instance? --> the archiving AS has no UPD processes, only DIA (mandatory to start the AS) and BTC (to run the archiving processes)

sam_venkat
Explorer

Hi Matteo, thanks for your response. Does it mean archiving jobs do not require a UPD work process? Just wanted to make sure.

Former Member

Hi,

archiving jobs need UPD work processes, but these work processes must not run on the AS where the archiving is running; the archiving work processes use the UPD work processes of the other AS.

Please mark the answer as correct or useful.

Former Member

Hi Dirk,

In a nutshell, we modified the database trigger generated for SLT replication to pass the user name which we use in ECC to run the archiving job; all deletions from this user are considered archiving deletions, which means the data will be deleted in ECC but kept in HANA.

It's a custom solution SAP built for us, and they say they may include this in a future DMIS package.

Hope this help.

Xiaogang.

Former Member

Hi Maurizio

The new solution is user based. Basically, we modified the database triggers for those tables that are archived in ECC; when a deletion is made by the archive user, the deletion is treated as an archive deletion and this information is passed to SLT, which will then not delete the data in HANA.

Cheers,

Xiaogang.


Hi Xiaogang

Can you please share more details about how you changed the database trigger to get the user id?

Kind Regards

Kamaljit Vilkhoo


Hi Xiaogang,

could you post an update here about option 4), which you described above, regarding the SAP solution of modifying the trigger so that it skips delete requests from the dedicated archiving user, and deletions by archiving won't be replicated to the target HANA?

We are about to implement option 3), but it looks like it does not work really well; also, from your message, option 4) is already available and seems a really sophisticated, reliable one.

Thanks and best regards,

Zsolt

Former Member

Hi Zsolt,

I suggest you contact SAP AGS support and ask them to help implement this solution for you. This is what we did, because the procedure is not straightforward. It requires configuration changes and modifying the DB trigger in ECC, plus creating new ABAP code in SLT for each table, and this has to be done table by table. SLT DMIS SP08 makes this procedure a little simpler, but it still requires lots of manual changes; the user-based solution also has the limitation that you can only allow ONE user in ECC to run the archiving job.

Thanks,

Xiaogang

Former Member

Hi Justin,

I have no real experience with this scenario, but I just wanted to help you find a workable solution.

My idea is as follows: use 2 data sources towards 1 target on the HANA side, one the "real object" and the other the "archiving object". If some data is moved (deleted) to the archive, the result (sum) is still the same on the HANA side.

This can be done during the configuration steps in SLT, as you have lots of options to transform/filter/merge your source data, also on the HANA side (like an additional field to avoid duplicate entries).

And you need to select the relevant tables to define a load object from your archive object anyway.

 

I hope this will help you.

Regards,

Andre