
Invoice data load brings Zero records from Inbound layer to Detailed layer

Former Member

Hi,

In production, we refresh two years of history every three months by selective loading (due to the huge database size), bringing EMEA, Japan, and US invoices into the Spend system using the DataSource method.

We loaded the 2010 data successfully, but the first quarter of 2011 failed with a special character issue:

"Error when assigning SID: Action VAL_SID_CONVERT, InfoObject 0XSARDOCTXT"

We tried to correct this by maintaining ALL_CAPITAL_PLUS_HEX in RSKC.

We deleted the failed activation request from the DSO (0ASA_DS00) and executed the DTP from the Direct DSO to the Detailed DSO (0ASA_DS01), but it picked up 0 records.

All subsequent data loads from the Direct DSO to the Detailed DSO are also picking up zero records.

We really don't understand what's going on.

Your inputs will be really appreciated.

Mahesh

Accepted Solutions (1)

Former Member

Hi Mahesh,

When you look into the DTP filter, you will find a routine for the upload ID. I guess that is the reason why you do not get any records, and I am afraid you will have to load the data from the UI into the inbound layer again and release it afterwards (releasing from the UI triggers the process chain that pushes the data through the layers). Or maybe there is a way to clear some OPM* tables in order to avoid loading all the data again, but I am not aware of one yet.

I am also afraid that using ALL_CAPITAL_PLUS_HEX in RSKC does not solve the root cause. In the long term it would be good to identify the problem in the source system.

Sometimes you get this issue with Asian characters. For example, in Japan users log on to the system in English and then type Japanese characters on their keyboards. We faced that problem especially with transaction data. I was loading invoice data and got the same activation error in the invoice and the related PO transaction data. When I checked the tables EKKO and EKPO, I realized that the SPRAS key was not correct. I think that is why the extractor checker showed the Asian characters, while the data in the PSA showed me only # signs instead.

Regards,

Tarkan

Answers (1)

Former Member

Hi Mahesh,

There is a filter in the DTP from the inbound layer to the detail layer which picks up only those upload IDs that are not yet released. If you want to reload upload IDs that are already released, you have the following options:

1. Load the data from scratch from the source system.

2. Go to table OPMDM_UPLOAD and set the release status for the upload IDs you want to reload to blank (see the sketch below).

3. Create a full-load DTP without the filter, which will load everything.
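
For option 2, it would be roughly something like the small report below. Please treat it as a sketch only: I do not have the exact structure of OPMDM_UPLOAD at hand, so UPLOAD_ID and RELEASED are assumed field names. Check the table in SE11 and adapt it (or simply maintain the entry in SE16) before trying anything like this, and only in a system where such changes are allowed.

* Hedged sketch only: OPMDM_UPLOAD is the table named above, but UPLOAD_ID
* and RELEASED are assumed field names - verify them in SE11 first.
REPORT z_reset_upload_release.

PARAMETERS p_upid(32) TYPE c LOWER CASE.   "upload ID to be reloaded

* Clear the (assumed) release flag so that the inbound-to-detail DTP filter
* picks this upload ID up again on the next run.
UPDATE opmdm_upload
   SET released  = space
 WHERE upload_id = p_upid.

IF sy-subrc = 0.
  COMMIT WORK.
  WRITE: / 'Release status cleared for upload ID', p_upid.
ELSE.
  WRITE: / 'No entry found for upload ID', p_upid.
ENDIF.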

Best Regards,

Divyesh

Former Member

Divyesh / Tarkan,

Thanks for your immediate replies. We can't change any tables or create DTPs directly in the production system due to strict security guidelines.

This morning I reloaded the data from scratch and it worked, but I still see the special character issue even with ALL_CAPITAL_PLUS_HEX in RSKC.

We can't edit the PSA, since the data goes to the Direct DSO when we load it and the release for reporting then reads the data from the Direct DSO. It has become challenging for us to identify the record and correct it directly in production.

These are historical non-PO invoices from Japan and EMEA, so we can't ask the users to correct them.

We would have to identify the character and handle it separately with special coding at the BI level in an expert routine (Direct DSO to Detailed DSO).

Any idea how to identify the special character using the information below?

"Value '#' (hex. '0023') of characteristic 0XSADOCTXT contains invalid characters"
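
The rough idea I have for the expert routine is sketched below. It is only a draft: the field name XSADOCTXT, the permitted character list, and the RESULT_PACKAGE / <result_fields> names are assumptions about the routine signature of the 0ASA_DS00 to 0ASA_DS01 transformation and would need to be adapted. As far as I understand, the hex code in the message is the Unicode code point of the character, so '0023' resolves to '#' itself; the same call should work for whatever code the monitor shows.

* Hedged sketch only: the field name XSADOCTXT, the permitted character
* list and the RESULT_PACKAGE / <result_fields> names are assumptions and
* must be adapted to the real routine signature of the transformation.
CONSTANTS gc_allowed(60) TYPE c VALUE
  ' !"%&()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.

DATA: gv_bad(1) TYPE c,
      gv_off    TYPE i,
      gv_len    TYPE i.

* Resolve the hex code from the error message to the real character;
* '0023' is simply '#', but the same call works for any other code point.
gv_bad = cl_abap_conv_in_ce=>uccp( '0023' ).

LOOP AT result_package ASSIGNING <result_fields>.
  "Optional: only if the InfoObject does not allow lowercase letters
  TRANSLATE <result_fields>-xsadoctxt TO UPPER CASE.
  gv_len = strlen( <result_fields>-xsadoctxt ).
  gv_off = 0.
  DO gv_len TIMES.
    "Blank out the identified character and anything else outside the set
    IF <result_fields>-xsadoctxt+gv_off(1) = gv_bad
       OR <result_fields>-xsadoctxt+gv_off(1) NA gc_allowed.
      <result_fields>-xsadoctxt+gv_off(1) = ' '.
    ENDIF.
    gv_off = gv_off + 1.
  ENDDO.
ENDLOOP.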

For the time being we have considered removing the field XSARDOCTXT from the detail layer. It is a description field and is not used much for analysis.

Going forward I will have to correct this properly. I hope I will find a way to do that.

Regards,

Mahesh