Former Member

BODS reports Duplicate Key in Template Table

Hi
I have a problem performing the simple task of populating a template table in a SQL database from a master data table in SAP SNP.
BODS reports a 'duplicate key' error on the template table, which should be impossible.

Has anyone seen this before?


5 Answers

  • Posted on Jul 01, 2012 at 03:05 PM

    Hi Andy, did you check whether the primary ID has the same size and values as in APO? Is it being truncated, and is that why there are duplicates? If the record counts are the same, then there are no true duplicates; it is a key-value issue.
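    The truncation scenario above is easy to reproduce. A minimal sketch (the column width and key values are made up for illustration) showing how two keys that are distinct in APO collapse into one value when the target column is narrower than the source key:

```python
# Hypothetical example: distinct source keys collapse after truncation.
source_ids = ["MATERIAL_000123", "MATERIAL_000124"]  # distinct in APO

TARGET_WIDTH = 12  # assumed varchar(12) in the SQL template table
truncated = [s[:TARGET_WIDTH] for s in source_ids]

print(truncated)            # both become 'MATERIAL_000'
print(len(set(truncated)))  # 1 -> a duplicate key on load
```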


  • Posted on Jun 29, 2012 at 06:33 PM

    This is a very common problem, and a common way of finding the root-cause data is to add one Query transform. Make sure none of the columns is defined as a primary key, then run the job to write all data to the template table.

    After the job is done, log in to SQL Server Management Studio and run the query below.

    Assuming the primary key column is: ID

    Assuming the template table name is: TEMP

    SELECT ID, COUNT(*) FROM TEMP GROUP BY ID HAVING COUNT(*) > 1

    Thanks,
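    The check above can be tried end to end without touching the real target. A minimal sketch, using an in-memory SQLite database as a stand-in for SQL Server (the table and column names follow the assumptions above):

```python
import sqlite3

# Build a throwaway TEMP table with one duplicated ID, then run the
# duplicate check from the answer above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TEMP (ID TEXT, NAME TEXT)")
con.executemany("INSERT INTO TEMP VALUES (?, ?)",
                [("A1", "x"), ("A2", "y"), ("A1", "z")])  # A1 appears twice

rows = con.execute(
    "SELECT ID, COUNT(*) FROM TEMP GROUP BY ID HAVING COUNT(*) > 1"
).fetchall()
print(rows)  # [('A1', 2)] -> A1 is the duplicated key
```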


  • Former Member
    Posted on Jul 04, 2012 at 04:51 PM

    Have you by any chance defined a key on some other column?


  • Former Member
    Posted on Jun 04, 2014 at 05:37 PM

    Hi. Please double-check the source systems to ensure that no duplicates are extracted because materials (or whatever objects) are scheduled for deletion. I extract materials from a master data extractor in BODS today, and I wind up with duplicates because it brings in materials that are marked for deletion but not yet deleted. Just a thought.
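    If deletion-flagged rows are the cause, one option is to filter them out before the load. A minimal sketch; the flag name LVORM (the SAP deletion indicator) is an assumption here, so check the actual field your extractor exposes:

```python
# Drop rows flagged for deletion before loading the template table.
# LVORM as the deletion flag is an assumption for illustration.
rows = [
    {"MATNR": "M-001", "LVORM": ""},
    {"MATNR": "M-002", "LVORM": "X"},  # marked for deletion
    {"MATNR": "M-001", "LVORM": "X"},  # would duplicate M-001 if kept
]

active = [r for r in rows if r["LVORM"] != "X"]
print([r["MATNR"] for r in active])  # ['M-001'] -> no duplicate left
```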


  • Former Member
    Posted on Jun 05, 2014 at 05:27 AM

    Hi Andy,

    Please try the Table_Comparison transform. Selecting the Input contains duplicate keys check box provides a method of handling duplicate keys in the input data set.

    TABLE_COMPARISON Transform

    The Table_Comparison transform compares two data sets and generates the difference between them as a resultant data set, with rows flagged as INSERT, UPDATE, or DELETE. This transform can be used to ensure rows are not duplicated in a target table, or to compare the changed records of a data warehouse dimension table. It can detect and forward either all changes or only the latest ones that have occurred since the last time the comparison table was updated. We use this transform frequently when implementing slowly changing dimensions and when designing dataflows for recovery.

    The comparison table must already be imported into the DS repository, and the fully qualified table name (e.g. DATASTORE.OWNER.TABLE) should be specified. Also set the input data set columns that uniquely identify each row as the Input primary key columns. These columns must be present in the comparison table with the same column names and datatypes. If the primary key value from the input data set does not match a value in the comparison table, DS generates an INSERT row. Otherwise it generates an UPDATE row with the values from the input data set row, after comparing all the columns in the input data set that are also present in the comparison table, apart from the primary key columns. Select only the required subset of non-key compare columns, as this improves performance.

    If the Input primary key columns contain duplicate keys, the transform arbitrarily chooses one of the rows to compare during dataflow processing, i.e. the order of the input rows is ignored. Selecting the Input contains duplicate keys check box provides a method of handling duplicate keys in the input data set.

    If the comparison table contains rows with the same primary keys, the transform arbitrarily chooses one of the rows to compare. Specify a column of the comparison table that contains unique keys (i.e. by design contains no duplicate keys) as the Generated key column. A generated key column indicates which row of a set containing identical primary keys is to be used in the comparison. This provides a method of handling duplicate keys in the comparison table.

    For an UPDATE, the output data set contains the largest key value found for the given primary key. For a DELETE, the output data set can include all duplicate-key rows or just the row with the largest key value.

    When you select the Detect deleted row(s) from comparison table check box, the transform flags rows of the comparison table with the same key value as DELETE. When you select the Generated key column and Detect deleted row(s) from comparison table options together with either the Row-by-row select or Sorted input comparison method, an additional section appears where you specify how to handle DELETE rows with duplicate keys: detect all rows, or detect only the row with the largest generated key value.

    Apart from these properties, there are three methods for accessing the comparison table: Row-by-row select, Cached comparison table, and Sorted input. Below is a brief guide on when to select each option.

    1. The Row-by-row select option is best if the target table is large compared to the number of rows the transform will receive as input. In this case, for every input row the transform fires a SQL lookup against the target table.
    2. The Cached comparison table option is best when comparing the entire target table. DS uses a pageable cache as the default. If the table fits in the available memory, you can change the Cache type property of the dataflow to In-Memory.
    3. The Sorted input option is best when the input data is pre-sorted on the primary key columns. DS reads the comparison table in the order of the primary key columns in a single sequential read. NOTE: the order of the input data set must exactly match the order of all primary key columns in the Table_Comparison transform.
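    The flagging behaviour described above can be sketched in a few lines. This is not the DS implementation, just a simplified model of how rows are flagged when an input data set is compared against a comparison table on a primary key, with detection of deleted rows enabled:

```python
# Simplified model of Table_Comparison flagging, not the DS internals.
comparison = {"A1": {"qty": 10}, "A2": {"qty": 20}}  # existing target rows
incoming   = {"A1": {"qty": 15}, "A3": {"qty": 30}}  # input data set

flags = {}
for key, row in incoming.items():
    if key not in comparison:
        flags[key] = "INSERT"      # key absent from comparison table
    elif row != comparison[key]:
        flags[key] = "UPDATE"      # key present, compare columns differ
# with "Detect deleted row(s)" enabled, keys only in the comparison table:
for key in comparison.keys() - incoming.keys():
    flags[key] = "DELETE"

print(flags)  # {'A1': 'UPDATE', 'A3': 'INSERT', 'A2': 'DELETE'}
```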

    Thanks & Regards,

    Balamurugan G

