Skip Duplicate IDs While Loading Master Data from Flat file

Former Member
0 Kudos

Hello all,

I have a scenario where I need to load master data from a flat file, and the load has to happen quite frequently. One of the fields in the flat file has to be loaded into the ID of one dimension, but the file contains multiple records with the same ID. I used a transformation file to load the data, and when I run the load it throws an error that duplicate records were detected.

Does anyone have an idea how we can skip the duplicate records using the transformation file options and mapping section?

Thanks,

Thanuja Sekar

Accepted Solutions (0)

Answers (2)


damovand
Product and Topic Expert
0 Kudos

Hello,

BPC will not load duplicate values; it will simply reject them. I recommend testing to verify, but I believe all duplicate records will be rejected.

Regards,

Leila

former_member186338
Active Contributor
0 Kudos

The question is: how do you get this file?

In general it's generated by some system, and the correct approach is to generate a version without duplicates directly in the source system.

Other options are to process the file with duplicates:

1. Preprocess the text file with some script before uploading (vbscript or...)

2. Use a routine BAdI.
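For option 1, the preprocessing script could be written in any language. As a minimal sketch (in Python rather than VBScript), assuming a CSV file whose first column holds the dimension ID; the file paths and column position are placeholders:

```python
import csv

def dedupe_by_id(in_path, out_path, id_col=0):
    """Copy a CSV file, keeping only the first row seen for each ID."""
    seen = set()
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)          # keep the header row as-is
        writer.writerow(header)
        for row in reader:
            key = row[id_col]
            if key not in seen:        # skip any repeated ID
                seen.add(key)
                writer.writerow(row)
```

Run this against the exported file before pointing the BPC data manager package at the cleaned output.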

Vadim

Former Member
0 Kudos

Hi Vadim,

Thank you. When you say use routines, which routine is most suitable for this? A start routine or an end routine?

My scenario is that I have to delete the adjacent duplicates from the source and then load the data into BPC. Since this has to be done before transformation and mapping, I tried a start routine. But there is a problem: in my start routine the records are fetched row by row as string lines, so I cannot delete adjacent duplicates, because that only works on internal tables. Since the records arrive row by row, I cannot delete the entries from a table.
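For illustration only (the actual BPC routine would be ABAP, not Python): the usual pattern for the row-by-row problem is to buffer the incoming rows into a table first, and only then remove adjacent duplicates over the whole table, which is what ABAP's DELETE ADJACENT DUPLICATES does on an internal table. A rough sketch of that collect-then-dedupe logic:

```python
def delete_adjacent_duplicates(rows, key):
    """Keep a row only when its key differs from the previous row's key,
    mimicking ABAP's DELETE ADJACENT DUPLICATES COMPARING <field>."""
    result = []
    prev = object()                    # sentinel: matches no real key
    for row in rows:
        if key(row) != prev:
            result.append(row)
        prev = key(row)
    return result

# Buffer rows that arrive one at a time, then dedupe the whole table.
buffered = []
for line in ["J1,100", "J1,200", "J2,300", "J1,400"]:
    buffered.append(line.split(","))
table = delete_adjacent_duplicates(buffered, key=lambda r: r[0])
# Only the first row of each adjacent run of J1/J2 IDs survives.
```

Note that, like the ABAP statement, this only removes *adjacent* duplicates; the table must be sorted by the ID first if equal IDs can appear non-adjacently.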

What's the best way to implement this?

Thanks,

Thanuja

former_member186338
Active Contributor
0 Kudos

But why can't you generate a correct file from the source system?

"In My start routine, my records will be fetched row by row wherein I cannot delete the entries form a table." - Why?

P.S. Have you read this:

Former Member
0 Kudos

Vadim,

The scenario is that I post an entry using journals. After posting, I need to take the posted journals and load the journal IDs into one dimension. The problem is that I may have many transactions for the same journal entry, so the posted journals will contain duplicate journal IDs. To remove the duplicate entries, I need to use a routine.

And in the start routine, if I use the DELETE ADJACENT DUPLICATES syntax, which works only on internal tables, how can I do this?

Thanks,

Thanuja

former_member186338
Active Contributor
0 Kudos

Journal ID as master data? Not clear!