
SLT Replication - BSEG and BKPF tables

former_member209912
Participant

Hello team,

My DMIS version is as below, so please check my points/questions against that version and help me understand the following.

1. I have created a configuration and loaded the KNA1 table from the ECC system to the HANA system. It loaded successfully. I checked SM37 on the SLT server for job details, but I cannot find any job created for this run. In parallel, I checked HANA Studio --> Data Provisioning --> Jobs, and I do not find any job there either for the load of KNA1 from ECC to HANA.

So I would like to know where I can see the jobs created in the background to load the KNA1 data into HANA, because I want to measure the time taken for the load. Please help me find the job details; do I need to check SM37 in the ECC system? Also, please tell me where I can get complete statistics on the time taken to load the KNA1 data into HANA from the ECC system.

2. I want to replicate the BSEG and BKPF tables from ECC to the HANA system, so please advise on the points below.

  a) Do I need to make any additional settings to replicate BSEG and BKPF, given that they are cluster tables and contain a huge amount of data?

  b) I have seen a blog from Tobias Koebler, "How To filter on the initial load & parallelize replication DMIS 2011 SP06 or higher". Please advise whether I need to follow those steps for replicating BSEG and BKPF.

  c) I want to put a filter on the BKPF table and extract data from BKPF and BSEG based on that filter. Normally in ABAP we join these two tables with an inner join and put the filters in the WHERE condition. I would like to know how I can join BSEG and BKPF while replicating, because I do not want to extract all the data: based on the filter on one table, say BKPF, I want to extract only the matching BSEG data (or vice versa). Please suggest how to achieve this inner-join behaviour for the two tables in SLT replication.
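For reference, here is roughly how we read this data in ABAP today (just a sketch; field names and filter values are only examples, and since BSEG is a cluster table the join is normally expressed with FOR ALL ENTRIES rather than a database inner join):

  " Illustrative only: the filtered BKPF/BSEG read I would like SLT to reproduce
  DATA: lt_bkpf TYPE STANDARD TABLE OF bkpf,
        lt_bseg TYPE STANDARD TABLE OF bseg.

  " Header table with the filter conditions
  SELECT * FROM bkpf INTO TABLE lt_bkpf
    WHERE gjahr  = '2014'
      AND blart <> 'ZX'.

  " Line items only for the filtered headers (no database join on the cluster table)
  IF lt_bkpf IS NOT INITIAL.
    SELECT * FROM bseg INTO TABLE lt_bseg
      FOR ALL ENTRIES IN lt_bkpf
      WHERE bukrs = lt_bkpf-bukrs
        AND belnr = lt_bkpf-belnr
        AND gjahr = lt_bkpf-gjahr.
  ENDIF.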

Additional inputs are welcome.

Thanks and Regards

Raj

Accepted Solutions (1)


Former Member

Raj

Please find my responses below.

1. I have created a configuration and loaded the KNA1 table from the ECC system to the HANA system. It loaded successfully. I checked SM37 on the SLT server for job details, but I cannot find any job created for this run. In parallel, I checked HANA Studio --> Data Provisioning --> Jobs, and I do not find any job there either for the load of KNA1 from ECC to HANA.

So I would like to know where I can see the jobs created in the background to load the KNA1 data into HANA, because I want to measure the time taken for the load. Please help me find the job details; do I need to check SM37 in the ECC system? Also, please tell me where I can get complete statistics on the time taken to load the KNA1 data into HANA from the ECC system.

Response:

1. First check whether the logging table has been created in the source system and also on the target system. Also check the Current Action of the table in transaction LTRC -> Table Overview. (A quick check you can run on the source system is sketched after these steps.)

You will see an entry like this:

DD02L   /1CADMC/00000501   Replication   TRANSP   X   Activated   Table created   Table created   Synonym created

If the replication status shows the initial load, the data is still loading from the source system to the target system. You can check the status of the load in the Load Statistics tab: in the selection, choose "Load in process" and check whether you see the records. If you refresh, you will see the records being calculated and read.

If the above steps do not help, check the jobs in SM37. You will see jobs like these:

/1LT/IUC_LOAD_MT_016_010    SAPBASIS   Active   02/03/2015 00:00:14   63,019

/1LT/IUC_REP_MSTR           SAPBASIS   Active   02/03/2015 00:00:05   63,028

The IUC_LOAD_MT jobs are the data transfer jobs used to move the data between the systems. IUC_REP_MSTR is the master job which controls all the SLT jobs in the system. If you do not see any jobs in SM37, the master job has probably stopped.

2. Go to LTRC -> Administration Data -> Check Configuration Status. The configuration should be running; if it is stopped, start it. Once it is started, you should see the jobs in SM37.
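As an additional quick check on the source system (just a sketch; the logging table name below is the example from the Table Overview entry above and will be different in your system), you can count the entries currently sitting in the logging table:

  " Run on the SOURCE system, e.g. in a small test report.
  " Replace /1CADMC/00000501 with the logging table name shown in LTRC -> Table Overview.
  DATA lv_pending TYPE i.

  SELECT COUNT( * ) FROM /1cadmc/00000501 INTO lv_pending.

  WRITE: / 'Entries currently in the logging table:', lv_pending.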

Check these things and let me know.

2. I want to replicate the BSEG and BKPF tables from ECC to the HANA system, so please advise on the points below.

  a) Do I need to make any additional settings to replicate BSEG and BKPF, given that they are cluster tables and contain a huge amount of data?

Response

If you replicate normally (single-threaded), you do not need any additional settings. But if you want to use the parallelization from Tobias's blog, you need to make some changes to the control tables before you start the replication.

  b) I have seen a blog from Tobias Koebler, "How To filter on the initial load & parallelize replication DMIS 2011 SP06 or higher". Please advise whether I need to follow those steps for replicating BSEG and BKPF.

Response


Tobias has written a very good blog on improving the initial load for large tables. You can use it for both cluster and transparent tables. I have implemented this in our landscape and it is working well. Check this link:

http://scn.sap.com/community/replication-server/blog/2013/09/26/how-to-improve-the-initial-load-by-r...

If you need any help with the parallel replication, let me know.


Mahesh Shetty


former_member209912
Participant

Hello Saritha/Mahesh,

I have gone through all the links provided. As I understand it, I first want to put conditions on a few fields of the BKPF table as a filter. As a test, I have used the KNA1 table for my conditions; I have put the conditions in the table as shown below.

I have also seen that the reading mode is 5. Please see the screenshot below.

As per my condition, only one record should end up in the HANA target table. I see that the replication was successful (I did a LOAD only), but there is no record available in the HANA target table.

Please check and suggest where I am going wrong. Once I am successful here, I want to use the same process for BKPF, putting conditions on a few fields to filter the data before replicating it.

Awaiting your response.

Regards

Raj

former_member209912
Participant

Hello Saritha/Mahesh

Please ignore my earlier message; I was able to resolve the issue with the KNA1 table, and the data is now getting filtered.

I want to run the jobs in parallel, so please clarify the points below (i.e. the additional settings required).

1. My requirement is to split the jobs based on the criteria below:

   Job 1 --> GJAHR <= 2012 and RDLNR = '0l' and BLART <> 'ZX'

   Job 2 --> GJAHR > 2012 and GJAHR <= 2013 and RDLNR = '0l' and BLART <> 'ZX'

   Job 3 --> GJAHR > 2013 and GJAHR <= 2014 and RDLNR = '0l' and BLART <> 'ZX'

   Job 4 --> GJAHR > 2014 and RDLNR = '0l' and BLART <> 'ZX'

   Please find the conditions I have kept in the table below; I am getting confused here, please check and help me.

2. After putting in these conditions, I have made an entry in table IUUC_PRECALC_OBJ as below:

MT ID 023 --> my configuration id

CONVOBJECT BKPF

NUMREC 20000000

3. After this step, do I need to maintain an entry in table IUUC_PERF_OPTION as well? Please clarify.

Is there any step I am missing here? Please suggest.

Note: I have selected the performance-optimized option in LTR.

Thanks

Raj

Former Member

Raj

I have not done much with filtering, so I cannot say much on that. Regarding your second question on parallelization, do the following.

1. Add an entry to table IUUC_PRECALC_OBJ

For the NUMREC field, first check the number of records in the original table in the source system, and also how many background work processes you are going to use. Be careful when you decide the number of jobs to run in parallel.

Let's assume your SLT server has 15 background (BTC) work processes allocated. You have configured a total work process allocation of 10 via the SLT transaction LTR, with 8 of those 10 allocated for initial loads. This leaves 5 available background work processes.

Say the table to be loaded has 100,000,000 records and you want to use the 5 available work processes for the access plan calculations. You could divide the access plans into 20,000,000 records each; these would run as 5 access plan (ACC*) jobs on SLT, which correspond to 5 access plan (MWB*) jobs on ECC. Entering 20,000,000 in NUMREC will therefore result in 5 jobs processing 20,000,000 records each.
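In other words (a small sketch of the arithmetic, using the numbers above):

  " Number of access plan jobs = ceiling( records in source table / NUMREC )
  DATA: lv_records TYPE p LENGTH 16 VALUE 100000000,  " records in the source table
        lv_numrec  TYPE p LENGTH 16 VALUE 20000000,   " NUMREC entry in IUUC_PRECALC_OBJ
        lv_jobs    TYPE i.

  lv_jobs = ceil( lv_records / lv_numrec ).           " = 5 parallel access plan jobs

  WRITE: / 'Access plan calculation jobs:', lv_jobs.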

2. Add an entry to table IUUC_PERF_OPTION

1. Identification -- MT_ID

2. Table Name -- insert the table name

3. Parallel Jobs -- number of jobs available

4. Sequence Number -- controls the priority of the jobs (lower number = higher priority); 20 is usually a good value

5. Reading Type -- cluster tables use type 4 --> INDX cluster (IMPORT FROM DB); transparent tables use type 5 --> INDX cluster with full table scan

Save the changes.
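As a quick sanity check (just a sketch; run it on the SLT system, e.g. in a small test report), you can confirm that your entries in the two control tables were actually saved:

  " Counts all entries in the two DMIS control tables on the SLT system.
  DATA: lv_precalc TYPE i,
        lv_perf    TYPE i.

  SELECT COUNT( * ) FROM iuuc_precalc_obj INTO lv_precalc.
  SELECT COUNT( * ) FROM iuuc_perf_option INTO lv_perf.

  WRITE: / 'IUUC_PRECALC_OBJ entries:', lv_precalc,
         / 'IUUC_PERF_OPTION entries:', lv_perf.

Alternatively, simply display the contents of the two tables in SE16.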

3. Select the Table for replication in LTRC --> Data Provisioning

Initially you will see one calculation job running in both the SLT and ECC systems: "/1LT/IUC?CALC/ACP" in SLT and /1LT/MWB_DL_<TAB NAME> in ECC.

Now we need to parallelize the process.

4. Go to transaction MWBMON --> Steps --> Calculate Access Plan --> Execute Function

Enter the following details.

MT_ID

Table Name

Access Plan ID ( Normally 0001 )

No of Jobs

Once you save the changes, it takes a few minutes for the parallel jobs to kick off. First one job runs in the ECC system, which gathers some information about the records, and then it kicks off multiple jobs like the ones below. I am attaching screenshots for your convenience.

Jobs in SLT

Jobs in ECC

The Load should be faster now.

Mahesh Shetty

former_member209912
Participant

Hello Mahesh

Apologies for the late response.

Thanks for your help. By following the above steps, I have successfully loaded the data. I kept a filter for 2014 and extracted the data from the BSEG and BKPF tables using the LOAD option.

Now I need to extract the delta records. I know I need to go with the REPLICATE option, but I would like clarification on the points below.

1. Do I need to do additional settings for the REPLICATE option as well? If yes, please let me know what additional settings I need for the BKPF and BSEG tables.

2. If I go with the REPLICATE option, how does it work? Will it pick up only the new/changed records? I am asking because I have extracted only 2014 data from both tables.

So please check and suggest how I can replicate the data so that I get only the delta records.

Thanks

Raj

Former Member

Hi,

Replication will only bring new records since the last load. The logging table in the source system keeps track of the changes.

Thanks,

Shakthi Raj Natarajan

Former Member

Raj

Now that you have loaded the data, you need to add the table for replication so that the delta records are captured from your source system. For the initial load procedure, neither database triggers nor logging tables are created in the source system. When you add the table for replication in LTRC under Data Provisioning, the corresponding logging table is created in the source system along with the DB triggers. You can check the logging table name in LTRC --> Table Overview.

What does a DB trigger do? When there is any change to the data in the original table, the DB trigger copies the changed data from the original table to the logging table. SLT picks up the delta from the logging table and updates it in the target database.

For more understanding of the trigger process, refer to this document:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/1008bac2-6c05-3010-6c86-f2bb57ec9...

Mahesh Shetty

former_member209912
Participant

Thanks, Mahesh and Raj, for your responses.

Finally, I have the questions below; please help clarify them.

1. I have loaded 2014 data. So the records which are created after 2014 are treated as delta records by SLT? Please clarify.

2. Raj said the following:

"Replication will only bring new records since the last load. The logging table in the source system will keep track of the changes." --> Please tell me the name of the table in the source system which keeps track of the changes.

3. Do I need to do any additional settings for replication, or do I just go to HANA Studio --> Data Provisioning and replicate the tables?

4. I normally know the process to replicate/load tables from HANA Studio --> Data Provisioning; can you please tell me the process to select a table for load/replicate through the LTRC t-code?

I will close this thread once I get the answers; I will not extend it further, because its whole purpose will be served by these questions.


Thanks

Raj


Saritha_K
Contributor

Hi,

I will reply to some of your queries:

1. Yes, the records which get created after your initial load are considered as delta.

2. To find the logging table, log in to your SLT system -> LTRC tcode -> Table Overview tab -> you can see the logging table name created for your corresponding source table.

3. No other setting is required to start the replication.

4. LTRC tcode -> Table Overview tab -> the Data Provisioning icon is present at the top.

Please refer to the Application Operations Guide available on SAP Help for any further queries. Moreover, next to the Data Provisioning icon there is a small "i" icon which contains detailed documentation. Have a look at it.

Hope this helps.

Regards,

Saritha K

Answers (2)


analytics
Advisor

Hi,

I was wondering how you know that the access plan ID is 00001? I see my system generates two access plans in the access plan header table, 00001 and 00010; do we need to provide a filter for 00010 too? It looks like the access plans are generated at runtime, so how can I know at customizing time how many access plans I will get?

Also, what is the difference between a standard access plan and a collective access plan? Thanks!

NBIll

Saritha_K
Contributor

Hi,

I will answer to some extent:

1. By default in SM37 there will be a few load jobs running, starting with IUC_LOAD*, which take care of your table replication.

Initially there will also be some calculation jobs (*CALC*) running prior to that.

2. To see the load statistics for your table, go to tcode LTRC - Load Statistics tab.

3. For BSEG and BKPF, you could set up your configuration as PERFORMANCE OPTIMIZED (transaction LTR).

4. Replication takes place for tables individually, so I don't think the inner join concept works here.

Regards,

Saritha K

former_member209912
Participant

Saritha

Thanks for your reply.

1. SM37 --> I think this is on the SLT server only, right? So you mean the IUC_LOAD* jobs are used for replication; then what is the use of the *CALC* jobs? And we will not have an individual job per replicated table, am I right?

2. I want to know how much time is taken for one table replication. So you mean I can get these statistics in LTRC --> Load Statistics tab?

3. If I select the performance-optimized option in my configuration, will it take care of everything, i.e. loading huge tables without performance issues?

4. I am clear on this point, so I need to extract the full data, right?

Regards

Raj