Extraction from Datasource 0UC_SALES_STATS_02

Former Member

Hi All,

We are currently working on development of the sales statistics cube 0UCSA_C05 for a large utility company. As part of building this cube, we are using DataSource 0UC_SALES_STATS_02.

When we run the InfoPackage to extract data from R/3 to the BI PSA, no records arrive in the delta queue in the source system, and as a result the InfoPackage returns "0" records.

We checked the same DataSource in the extractor checker (RSA3), where a pop-up asks for a print document number. If I provide a print document number, it extracts 5 records for that particular document; otherwise it returns 0 records.

I feel I need to give the print document number as a selection criterion in the InfoPackage so that it extracts those 5 records. The problem is that the print document number (OPBEL) is not available in DataSource 0UC_SALES_STATS_02, so how can I use it as a selection criterion in the InfoPackage?

Please let me know if anyone has worked on extraction with DataSource 0UC_SALES_STATS_02 and what the exact extraction procedure for this DataSource is.

If anyone has any clue on why the delta is not getting built in the source system, please let us know. We appreciate your attention to this message.

Regards,

Sonal

tdotov
Member

Hi,

I did all the steps mentioned above but was unable to initialize the source system.

I searched the web for any answer or proposal but without success.

Do you know where the problem could be?

Regards,

Tomislav

Accepted Solutions (1)

Former Member

Hi,

If you are doing the extraction for the first time:

1. Run full load, without any selection.

2. Do init load, with / without selection depending on your scenario.

3. Close reconciliation keys in source system (FPG4). The extractor only picks up the records for closed recon keys.

4. Run transaction EBW_DQ_SS to fill the delta queue.

5. Run the delta info-package.

Then you should be able to see some data in your PSA / DSO.

Once initialized, you only need to run steps 3 through 5 on an ongoing basis.
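As a rough illustration of steps 3 and 4 (this is a sketch, not SAP code; `ReconKey`, `BillingDoc` and `fill_delta_queue` are invented names), the delta queue only receives documents whose reconciliation key has been closed:

```python
from dataclasses import dataclass

@dataclass
class ReconKey:
    key: str
    closed: bool = False      # closed via FPG4 in the source system

@dataclass
class BillingDoc:
    doc_no: str
    recon_key: ReconKey

def fill_delta_queue(docs):
    """Mimics the effect of EBW_DQ_SS: only documents whose recon key
    is closed are placed into the delta queue."""
    return [d.doc_no for d in docs if d.recon_key.closed]

open_key = ReconKey("2009-01")
closed_key = ReconKey("2008-12", closed=True)
docs = [BillingDoc("900001", closed_key),
        BillingDoc("900002", open_key),
        BillingDoc("900003", closed_key)]
print(fill_delta_queue(docs))   # ['900001', '900003']
```

This is why an InfoPackage can legitimately return 0 records even though billing documents exist: until the recon keys are closed, nothing qualifies for the queue.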

Please let me know if this helps or if you have any other Qs.

The other response to your query is a link to the help for the datasource. I'm sure you have already read through this.

Cheers,

Ashu

Former Member

Thanks Ashutosh,

I performed all the steps you mentioned, but I am facing another problem. For sales statistics we have two DataSources, and we can use either of them, right?

0UC_SALES_STATS_01(3.5 DS from content)

0UC_SALES_STATS_02(BI7 DS from content)

When I use 0UC_SALES_STATS_02 in full mode, it does not extract any records and the status remains yellow (which is OK per the extraction process). Then I run init without data transfer, then delta (after closing the recon key, transferring to G/L, etc.). The delta load only extracts the current records for which I am closing the recon key, but not the historical records that were closed in the past.

Whereas when I use 0UC_SALES_STATS_01, the full, init and delta loads are all successful: full extracts all historical data and delta extracts the current data.

But DataSource 0UC_SALES_STATS_02 is normally recommended from a performance point of view, and my client also wants to use it. Could you please tell me why these two DataSources behave differently? Which one is the better option?

Regards,

Sonal

Former Member

Hi Sonal,

As far as I know, you need to have the amount statistics group or quantity statistics group set in the billing line items (historical and current) to extract them into BI.

We used SALES_STATS_02 in an ISU 4.72 / BI 3.2 environment that included historical data, so there should not be any version-specific limitations. However, with 7.0 you get the added write-optimized DSO and cube.
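The statistics-group requirement above can be sketched like this (an illustrative filter, assuming the behaviour described; the record layout and field selection are simplified, not actual SAP structures):

```python
# Billing line items without a statistics group (STAFO) are simply
# not picked up by the extractor, so historical documents migrated
# without STAFO never reach BI.

def extractable(line_items):
    """Keep only line items that carry a statistics group (STAFO)."""
    return [item for item in line_items if item.get("STAFO")]

items = [{"BELNR": "900001", "STAFO": "001"},
         {"BELNR": "900002", "STAFO": ""},    # migrated without STAFO
         {"BELNR": "900003", "STAFO": "002"}]
print([i["BELNR"] for i in extractable(items)])  # ['900001', '900003']
```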

My recommendation would be:

1. Check if the statistical group indicator is switched on in billing line items.

2. If it's not, consider creating a generic DataSource based on the ERDK table to get the invoice figures into a separate staging area. [To avoid the bespoke approach: would you be able to influence the data migration teams, who use EMIGALL to upload the historical documents, to update the statistics group (STAFO) as well?]

3. For newly created documents, i.e. post migration, sales_stats_02 should be used.

4. In the productive environment, please use only one of the two DataSources.

5. I have heard of many problems with the SALES_STATS_01 extractor in productive environments.

So, if it's an implementation project and you can influence data migration, I'd suggest fixing the update group and then using SALES_STATS_02.

Also, consider not splitting records down to monthly level for historical records. See if your client is happy with this.

Regards,

Ashu

Former Member

Hi Ashutosh,

Thanks a lot for the information. Could you please explain this in detailed steps, as I am new to this extraction process?

How can I check whether the statistical group indicator is switched on? What are the exact steps I need to perform?

Also, as there are no very complex transformations in my project, I feel the write-optimized DSO is not required, because normally we use a write-optimized DSO when there are complex transformations.

Do you still suggest using a WO DSO? Is there any other reason to use it?

Once again, thanks a lot for your help.

Thanks & Regards,

Sonal

Former Member

Hi Sonal,

STAFO is a field on the DBERCHZ series of tables, which store the underlying billing line items.

It's a field that should be set during data migration / configuration activities.

If you have some ABAP background or can get help from an ABAP resource, it would help to check the extractor logic in ISU_BW_SALES_STATISTICS.

As for the write-optimized DSO, I'd highly recommend creating and using it irrespective of the complexity of the assignment or requirement. In my experience, once users start using the queries they always ask for more, and it's always a pain to get data from the source system if you have to reconstruct.

Please use the BI environment with future scalability and ease of maintenance in mind.

Hope this helps.

Regards,

Ashu

Former Member

Hello Ashu,

I have a question related to this DataSource. We are extracting billing data using this extractor on a daily basis, and we get millions of records per day.

However, the following incident happened last week, and we are trying to figure out how to solve this issue without a complete reload.

1. Delta load is going on daily to IS-U Billing cube.

2. For some reason, we tried a full load into the PSA on 5/25/2009 with a single posting date of 7/30/2008 using an InfoPackage. No mass activity was run in R/3 for this; we just executed the InfoPackage. The purpose was to validate the cube data against the PSA.

3. No records came in this process.

4. But the next day (5/26/2009), when the delta data was extracted, the system picked up only the 7/30/2008 data instead of the regular delta data, so the entries for 7/30/2008 got duplicated in the BI cube. We thought the entire delta pointer got screwed up.

5. We then deleted that particular request from the cube, so the duplication was eliminated.

6. After one day (i.e. 5/27/2009), the regular delta load came into BI, and it happened today as well.

My question is: did I lose any data? Do we have to reload the complete data?

Hope you understood my question.

Thanks & Regards

Romy

Former Member

Hi Romy,

It has been a long time since you faced this problem. We are facing exactly the same duplication problem. What was your solution? It would greatly help me if you could provide some clue for its resolution.

Regards,

Former Member

Hi,

Actually, the IS-U billing extractor works differently from other standard extractors. The mass activity program executed in ECC is triggered based on the last InfoPackage executed. If the InfoPackage is executed in delta mode, the mass activity processes the delta data. But if the InfoPackage runs with a date or other selection criteria, the next mass activity works only with the parameters mentioned in that InfoPackage. In any case, you will not lose the delta data, because the following mass activity picks up the correct data.

So there should not be any chance of duplicate data unless you execute some InfoPackage manually. Hope this helps! Please reply if you have more questions.
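A minimal sketch of the behaviour described here, assuming the mass activity really does reuse the selection of the most recent InfoPackage run (the `Extractor` class and its method names are invented for illustration, not SAP objects):

```python
class Extractor:
    def __init__(self):
        self.last_selection = None        # None = regular delta mode

    def run_infopackage(self, mode, selection=None):
        # A delta run clears any one-off selection; a selective full
        # run leaves its parameters behind for the next mass activity.
        self.last_selection = selection if mode == "full" else None

    def next_mass_activity(self, delta_data):
        """Return what the next mass activity would process."""
        if self.last_selection is not None:
            return [r for r in delta_data if r[0] == self.last_selection]
        return delta_data

ex = Extractor()
ex.run_infopackage("full", selection="2008-07-30")
data = [("2008-07-30", "old doc"), ("2009-05-26", "new doc")]
print(ex.next_mass_activity(data))   # only the 2008-07-30 records
ex.run_infopackage("delta")
print(ex.next_mass_activity(data))   # regular delta resumes
```

This matches Romy's incident: the one-off selective full load redirected the next day's extraction, and the regular delta resumed the day after.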

Thanks

Romy

Answers (2)


Hey, guys!

I need to redo a restructuring that was started a few years ago using the extractor 0UC_SALES_STATS_02, because there were irregularities in some records (CCS to BW).

I need to know the correct procedure for carrying out a restructuring by initializing this extractor again, bringing in the data from seven years ago.

Many thanks for your help in advance;

Sbrissa.

Former Member

Hi Sonal

I haven't worked on this myself, but I was able to find some documentation which might be helpful.

Please go through the uploading process:

Prerequisites

For optimal use of the DataSource, we recommend you check the IMG for BW from Plug-In PI2001.1 and mySAP Utilities 4.63 upwards. Under the application-specific settings, you can find notes on the utilities industry and on the sales statistics. You can find further information for releases prior to IS-U 4.63 and Plug-In PI2001.1 in SAPNet under the alias Utilities.

Due to the large amounts of data involved, it is important that OLTP Customizing for the sales statistics is completed before the first productive loading process takes place and that the changes are directly implemented.

In addition, the DataSource 0UC_SALES_STATS_02 is available from mySAP Utilities IS-U 4.62 upwards. You can only use one of the two DataSources in productive operation. Please see the documentation for the DataSource 0UC_SALES_STATS_02. As a rule, the other DataSource should only be implemented if you experience problems with the load time. It is not as user friendly but produces the same results.

Use

You can select the individual InfoObjects to be used from the characteristics and key figures in the DataSource. As a result, the DataSource provides a pool from which you can compose the InfoSources. Therefore, the DataSource has a far more wide ranging content than the InfoSources. You can determine the level of detail of data in the InfoCube and control performance by defining an InfoSource.

This DataSource transfers information to the InfoSource 0UC_ISU_01 (sales statistics). This applies to regional information (for example, country, city), contract data (for example, division, business partner) and invoicing data entered in the document line items (for example, rate).

Address data in the extractor can only be transferred when it has been entered in the IS-U system together with the regional structure.

In order to avoid unnecessary data import and extraction, you can hide any fields in the DataSource that you do not require.

If the fields of the DataSource are not sufficient, you can use the customer append structure to insert additional fields in the structures of the installation, the installation contract, and the billing documents. This also applies to user-defined fields: when a field has the same name as an existing field, it is automatically assigned that field's value.

Additionally, you can access the exit BWESTA01, which enables you to carry out modifications while data records are being formatted. This is a higher-performance enhancement than the standard function module for DataSources.

Delta Update

After the initial run and initialization of the delta method, you can use this DataSource for the delta upload.

The full update is supported. However, you should only use the full update to extract documents that have one or more of the following characteristics:

• The documents were created prior to IS-U 4.61.

• The documents were created prior to connection to a BW system.

• The documents have already been uploaded into BW. The user is responsible for any inconsistencies.

When you extract the documents for the first time, select the complete posting period.

We recommend that you carry out initialization and full upload in a number of steps in order to create smaller packages. To improve the performance, select an update via the PSA. The data is extracted in accordance with the selection criteria. In this case, the posting date of the documents is used as the selection criteria.

Initialize all future posting dates, even if the enhanced selection area does not yet contain any documents. The new documents are automatically entered by delta processing.

Carry out delta processing at regular intervals. We recommend that large utility companies carry out delta processing on a daily basis. Only documents with closed reconciliation keys can be processed.

Uploading Processes in Detail

Full Upload:

All documents with a closed reconciliation key are extracted in accordance with the selection. Use the full upload to extract all existing documents after connecting the DataSource.

Documents with an open reconciliation key are saved in index table DBESTA_BWPROT for later processing.

Initial Upload:

All closed reconciliation keys in index table DBESTA_BWPROT are processed according to the initial selection, and the associated documents are extracted. You should perform the initialization after the full upload, using the same selection. The initial run only extracts data if associated reconciliation keys were closed in the meantime between the full and initial uploads, or if there are new documents.

Delta Upload:

All closed reconciliation keys are processed in accordance with the initial selections and the corresponding documents are extracted. This normally concerns new documents in productive operation.
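The upload modes above can be sketched as follows (an illustrative Python model; the dict `index_table` stands in for DBESTA_BWPROT, and all names are simplified, not actual SAP structures):

```python
def full_upload(docs, index_table):
    """Extract documents with closed recon keys per the selection;
    park documents with open keys in the index table for later."""
    extracted = []
    for doc_no, key_closed in docs:
        if key_closed:
            extracted.append(doc_no)
        else:
            index_table[doc_no] = "open"   # saved in DBESTA_BWPROT
    return extracted

def delta_upload(index_table, newly_closed_docs):
    """Init and delta runs behave the same way in this model: pick up
    parked documents whose recon keys were closed in the meantime."""
    extracted = [d for d in newly_closed_docs if d in index_table]
    for d in extracted:
        del index_table[d]
    return extracted

index = {}
print(full_upload([("A", True), ("B", False), ("C", False)], index))  # ['A']
print(delta_upload(index, ["B"]))  # ['B']
```

The key point the model captures: the full upload is a one-time sweep, while the index table carries forward the open-key documents so that later init and delta runs can extract them once their reconciliation keys close.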

Features of the Extractor

As of release IS-U/CCS 4.71, the DataSource can also be used for the ODS. However, because of the large volume of data, you should only create one ODS with definition INSERT (no update of existing records).

Regards

Jagadish