
Custom planning function to append only changed records

Hi all,

I am using BPC 10.1 optimized for S/4HANA. I created a custom planning function type and I use ABAP classes for all BPC calculations. I filter the data based on the planning filter, look at the data available on the aggregation level via the table C_TH_DATA, calculate what I need, and append the results to C_TH_DATA. Everything works fine.

However, if I do not append a record that was originally available in C_TH_DATA, it is deleted automatically; I see this in the log after running the function from RSPLAN ("xx number of records deleted"). This hurts performance, because I have to add the source data back to C_TH_DATA every time, and after the calculation it takes too long to write the full contents of C_TH_DATA back to the aDSO. If I appended fewer records, it would take less time.

This is not the case in classic BPC. If we do not append a record to CT_DATA in classic BPC, the system does not clear it automatically, even though it was within the scope of the code; only the appended data patterns were affected in CT_DATA.

What I want to know is whether this is standard behaviour of BW-IP (BPC Embedded), or whether there is a parameter like "process changed records" or something similar. You can see the interface I use and my custom planning function type below:

Planning function type (Properties tab); there is nothing in the Parameters tab:

My class implements the interface:

IF_RSPLFA_SRVTYPE_IMP_EXEC

All I do is write code in the EXECUTE method of that class and append records to C_TH_DATA.
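
For illustration, the method body looks roughly like this. A minimal sketch, assuming the standard CHANGING parameter C_TH_DATA (a generically typed hashed table); the derivation logic is a placeholder, not my actual calculation:

METHOD if_rsplfa_srvtype_imp_exec~execute.
  DATA: lr_new_tab TYPE REF TO data,
        lr_new_rec TYPE REF TO data.
  FIELD-SYMBOLS: <lt_new> TYPE HASHED TABLE,
                 <ls_rec> TYPE any,
                 <ls_new> TYPE any.

  " Work table and work area compatible with the generic C_TH_DATA.
  CREATE DATA lr_new_tab LIKE c_th_data.
  ASSIGN lr_new_tab->* TO <lt_new>.
  CREATE DATA lr_new_rec LIKE LINE OF c_th_data.
  ASSIGN lr_new_rec->* TO <ls_new>.

  " Collect calculated records separately; inserting into a hashed
  " table while looping over it is not allowed.
  LOOP AT c_th_data ASSIGNING <ls_rec>.
    <ls_new> = <ls_rec>.
    " ... adjust characteristics/key figures of <ls_new> here so each
    " new record gets a key combination of its own (placeholder) ...
    INSERT <ls_new> INTO TABLE <lt_new>.
  ENDLOOP.

  " Add only the new records; the existing rows stay untouched.
  INSERT LINES OF <lt_new> INTO TABLE c_th_data.
ENDMETHOD.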

Accepted Solutions (1)

Hi Gunes,

This is by design. C_TH_DATA is a 'changing' table, so I don't understand your statement:

However, if I do not append a record that was originally available in C_TH_DATA, it is deleted automatically.

This means that your code must already have deleted data from C_TH_DATA; why? If you don't touch records at all, nothing happens to them in the planning framework. That a record is considered deleted if it was contained in C_TH_DATA before the call but not after the call is a feature.
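
To spell out the contract with a sketch (illustrative only, not the framework code; <ls_rec> is just a generic field symbol):

FIELD-SYMBOLS <ls_rec> TYPE any.

LOOP AT c_th_data ASSIGNING <ls_rec>.
  " Rows left untouched here stay unchanged in the planning buffer.
  " Modifying <ls_rec> in place marks the record as changed.
ENDLOOP.

" Only an explicit DELETE from C_TH_DATA (or rebuilding the table
" without some of its original rows) marks records as deleted on save;
" INSERT adds new ones. Nothing has to be 're-appended' to survive.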

Concepts used in BPC Standard and Embedded are different, cf.

https://blogs.sap.com/2014/10/21/concepts-compared-bpc-standard-and-bpc-embedded/

for a comparison.

Regards,

Gregor

Answers (3)

Hi Gunes,

you don't have to read anything yourself; you already get the records you need in C_TH_DATA. You say you create 100K new records, so why do you say you have to append 300K records? Append only the 100K new records to the existing 200K records.

If you want to do mass data processing, consider writing an SQLScript type of planning function and/or using process chains with planning sequences; there you can parallelize the planning sequence.

CL_RSPLFR_CONTROLLER does the reading, writing, and checking for you; thus, depending on the customizing of planning constraints, this may take some time.

Regards,

Gregor

Hi Gregor,

so why do you say you have to append 300K records?

Yes, I append 100K records to C_TH_DATA; the remaining 200K were already there. That is not what I meant. What I mean by "append 300K records" is that my C_TH_DATA has to contain 300K records, and those 300K records are written back to the aDSO. It takes too much time for the standard SAP class to write 300K records back to the aDSO instead of 100K. Although the remaining 200K were already there, the system still checks for those 200K as well whether I appended the same data pattern or not. In the end it appends only the 100K new records, but checking the other 200K takes time.

In BPC Standard, the standard SAP classes for appending records checked only the 100K records.

I hope I am clear. If this is standard behaviour, then it's clear. Thanks.

Hi Gunes,

in fact the system 'checks' the 300K records, but only to find the changed/deleted/new records; this is just a hash lookup (some microseconds per record). The real check is done only for changed/new records, where data slices, characteristic relationships and possibly master data (for new records) are checked. The latter part might be time consuming, depending on the complexity of the modeled planning constraints.

The planning buffers are updated only via 'delta records', i.e. here too only new/changed/deleted records are considered. When data is saved, the planning buffer contains the delta records; it saves the delta records for cube-like aDSOs, and computes the after-images from the new/changed/deleted records to insert/update/delete records in direct-update aDSOs.
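
Conceptually, the delta detection is a hashed-table lookup per record, which is why it costs only microseconds. A simplified illustration with made-up field names (the real framework code is generic and more involved):

TYPES: BEGIN OF ty_rec,
         version TYPE c LENGTH 3,            " placeholder key
         amount  TYPE p LENGTH 8 DECIMALS 2, " placeholder key figure
       END OF ty_rec,
       ty_th_rec TYPE HASHED TABLE OF ty_rec WITH UNIQUE KEY version.

DATA: lt_before TYPE ty_th_rec,  " image of C_TH_DATA before the call
      lt_after  TYPE ty_th_rec.  " image of C_TH_DATA after the call

FIELD-SYMBOLS: <ls_after>  TYPE ty_rec,
               <ls_before> TYPE ty_rec.

LOOP AT lt_after ASSIGNING <ls_after>.
  READ TABLE lt_before ASSIGNING <ls_before>
       WITH TABLE KEY version = <ls_after>-version.
  IF sy-subrc <> 0.
    " New record: data slices, characteristic relationships and
    " master data are checked, and a delta record is written.
  ELSEIF <ls_before>-amount <> <ls_after>-amount.
    " Changed record: checked, and a delta record is written.
  ENDIF.
  " Unchanged records cause no delta and no further checks.
ENDLOOP.
" Records in lt_before that are missing from lt_after count as deleted.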

Regards,

Gregor

Hi Gregor,

Thanks for your reply. I am reading 200K records from a planning aDSO and calculating 100K records more. So in this case my C_TH_DATA has to contain 300K records instead of only 100K.

It takes time to append 300K records. In SM66 I see standard work processes running CL_RSPLFR_CONTROLLER, and I thought that if my C_TH_DATA contained more records, this step might get stuck depending on the number of records.