SAP for Utilities Discussions
Connect with fellow SAP users to share best practices, troubleshoot challenges, and collaborate on building a sustainable energy future. Join the discussion.

How to Cross Check Migrated Data through EMIGALL

Former Member

Hello guys,

Please suggest the best way to cross-check migrated data that has one-to-many relationships. Below is one such case.

Our client has a number of premises on a single connection object.

We are using a unique legacy key for the connection object and the legacy customer number as the legacy key for the premise.

Using KSM I am linking PREMISE and CONNECTION OBJECT. For example, in the flat file I link them as shown below.

PR10001 - CO10001
PR10002 - CO10001
PR10003 - CO10001

Now, the problem is how to cross-check, without using the EVBS table, that the same legacy connection object is used for all three legacy premises in one go.

In the TEMKSV table we can see only OLDKEY and NEWKEY for a single object, say either CONNECTIONOBJECT or PREMISE. There is no relationship between objects.

It is similar in the case where a business partner has multiple contract accounts or open items.

In short, my question is: how do we cross-check all the legacy data against the SAP IS-U data after migration?

Thanks,

Vijay Kumar


3 REPLIES

Former Member

Hi Vijay

Unfortunately, there's no other option than to write your own reconciliation report.

For obvious reasons, there can be no standardised reports checking that the relationship model in the legacy system(s) is in sync with SAP, because SAP has no knowledge of what the legacy system looks like.

With the "object by object" load approach it's difficult to ensure that all objects are in a billable state*, because one linking object might be missing, which should raise an error, or objects are linked incorrectly, which is difficult to identify afterwards.

To be specific...

What you have to do is produce a relationship table/file like this:


CONNOBJ  PREMISE  INSTLN  ...  CONTRACT  ACCOUNT  PARTNER
-------  -------  ------       --------  -------  -------
1001     2001     3001    ...  4001      5001     6001
1001     2002     3002    ...  4002      5001     6001
1002     2003     3003    ...  4003      5002     6002
1002     2004     3004    ...  4004      5003     6002
...

You produce this report on the legacy side, and you produce the same report on the SAP side, replacing the SAP internal numbers with the legacy numbers via TEMKSV.
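
As a minimal sketch of the key-translation step (Python just for illustration; it assumes you've downloaded TEMKSV to a CSV file with OBJECT, OLDKEY and NEWKEY columns, and that the column headers of the SAP extract match the TEMKSV object names - the file names and layout here are assumptions, not a fixed convention):

import csv

def load_key_map(temksv_csv):
    # Build {(object, new_key): old_key} from a TEMKSV download.
    # Assumes a CSV export with OBJECT, OLDKEY, NEWKEY columns;
    # adjust the field names to your actual extract.
    key_map = {}
    with open(temksv_csv, newline="") as f:
        for row in csv.DictReader(f):
            key_map[(row["OBJECT"], row["NEWKEY"])] = row["OLDKEY"]
    return key_map

def translate_extract(sap_csv, out_csv, key_map):
    # Replace SAP internal numbers with legacy keys, column by column.
    # Column headers are assumed to match the TEMKSV object names
    # (e.g. CONNOBJ, PREMISE, ...); unmapped values are flagged.
    with open(sap_csv, newline="") as fin, open(out_csv, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow({col: key_map.get((col, val), "?UNMAPPED:" + val)
                             for col, val in row.items()})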

Afterwards you can compare the two tables/files, which have to match exactly.
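
The comparison itself then reduces to a set difference over the rows; again only a sketch, assuming both extracts are CSV files with identical headers and one row per relationship chain (file names are placeholders):

import csv

def read_rows(path):
    # Read a relationship extract into a set of row tuples.
    with open(path, newline="") as f:
        return {tuple(row) for row in csv.reader(f)}

legacy = read_rows("legacy_relations.csv")        # produced on the legacy side
sap = read_rows("sap_relations_legacy_keys.csv")  # SAP side, keys translated

# Rows in one extract but not the other point to missing or
# incorrectly linked objects.
for row in sorted(legacy - sap):
    print("Missing in SAP:", row)
for row in sorted(sap - legacy):
    print("Unexpected in SAP:", row)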

I hope that helps

Jürgen

___

  • I personally think the approach of migrating "object by object" - first all connection objects, then all premises, then all installations, etc. - is not ideal, because it makes it more difficult to ensure that the complete customer structure was migrated correctly. Therefore, on one project we implemented a "customer by customer" load framework that ensures either all information of a customer is loaded within one commit work, or nothing is, by rolling back all of the customer's information after an error (a rough sketch of the idea follows below). Especially in a phased migration, a "customer by customer" approach also scales a lot better.
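
To illustrate the all-or-nothing idea behind such a framework, a rough sketch (plain Python; load_object is a hypothetical placeholder for the real per-object EMIGALL load - in ABAP this corresponds to one COMMIT WORK per customer, or a ROLLBACK WORK after an error):

OBJECT_SEQUENCE = ["CONNOBJ", "PREMISE", "INSTLN", "PARTNER", "ACCOUNT", "CONTRACT"]

class LoadError(Exception):
    # A single migration object of the customer failed to load.
    pass

def load_customer(customer, load_object):
    # Stage every object of one customer; commit them together or
    # discard everything, so no half-migrated customer remains.
    staged = []
    try:
        for obj in OBJECT_SEQUENCE:
            staged.append(load_object(customer, obj))  # hypothetical hook
    except LoadError as err:
        # Discarding 'staged' plays the role of the rollback.
        print("customer", customer, "rolled back:", err)
        return None
    return staged  # the caller commits the whole customer at once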


Hi Jürgen

In concept, I totally agree with your approach of migrating data on a customer-by-customer basis. I'm on my first implementation and never even thought "outside the box" to do something like that, but it makes so much more sense.

Can you explain how you made this work, since the migration workbench is based on the object-by-object model?

And time-wise, can you compare the two methods based on your various implementations? For example, does one method take twice as long? I'm not talking about set-up, as I imagine the customer-based model takes longer, and I'm willing to put in what it takes if I can go that route. Rather, I'm talking about the actual migration execution time. I have about 500,000 customers to cut over, which is a relatively small dataset, but of course I need to be conscious of how much time the actual migration takes.

Thanks for the great idea!

CN


Hi Jürgen,

Thanks for the helpful answer. I was able to convince my client.

I am currently following the conventional "object by object" upload method.

Could you please elaborate on your new method of uploading customer by customer instead of object by object?

Best Regards,

Vijay Kumar