Changes in planning book

former_member182537
Active Participant
0 Kudos

Hi,

My APO team is adding some new key figures (KFs) to the APO planning book, so there will be changes in the planning area as well.

Once they add new KFs to the planning book, the KFs get automatically added to my data sources, which are based on standard aggregation levels like MALO.

The extraction structure gets changed automatically as well.

During the production move they are going to re-initialize the planning area, but my questions are:

What is the impact on the BW data sources based on MALO / MALA?

My requirement is to pull those KFs into BW as well, but what if we do not make the corresponding BW change at the same time as the APO move?

What is the impact on the BW MALO data sources? Is there a chance that extraction will fail after the APO transport moves to production?

Do I need to replicate and transport my data sources at the same time their transport goes?

Please advise what I should do in this case.

Thanks

Nilesh

Accepted Solutions (1)

rajkj
Active Contributor
0 Kudos

Hi Nilesh,

If there is a change in the planning area (addition of a new key figure, or assignment of a key figure to an aggregate), it will result in extraction structure changes and hence changes to your data source, even if the key figure is not required in BW.

You need to create a transport request to propagate the data source changes, and then activate the underlying objects after the PA is initialized: first move the PA transport request, initialize the planning area, repair and update the data source, and finally move your BW transport request. If this is not done, the data extraction queues will fail.

Thanks,

Rajesh

former_member182537
Active Participant
0 Kudos

Thanks, Rajesh. I tested in my dev system and I was able to extract data from APO SNP. Could that be because the APO changes have not yet moved to the quality system?

So you mean that once the PA is initialized, the SNP data source extraction will fail?

In that case, do I need to replicate the data source on the BW side, lock the dependent objects, and move the BW transport after the APO changes move, right?

Do I need to activate my data source in BW as well, other than locking the dependent objects?

What should be the sequence of the BW transports? Please explain.

Thanks

Nilesh

rajkj
Active Contributor
0 Kudos

Hi Nilesh,

Please check the following logical sequence:

  • Update the APO planning area (addition of a new key figure, or assignment of a key figure to an aggregate)
  • Perform a consistency check and then initialize the PA
  • Repair the data sources
  • Change the existing data source (select or hide fields)
  • Check and test the data source
  • Transport your PA changes
  • Replicate and transport the data source
  • In BW, update and activate your data source, InfoSource, and DTP structures (communication and transfer)
  • Test the data extraction process
  • Once you're satisfied, transport the active BW objects (DataSource, InfoPackage, and DTP structures)
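The key point in the list above is that the order is a hard dependency chain: each step assumes everything before it has already happened. A hypothetical sketch of that constraint (the step names are illustrative placeholders, not SAP object or transaction names):

```python
# Sketch: the APO -> BW move is a strict dependency chain.
# Step names below are invented for illustration, not SAP names.

STEPS = [
    "update_planning_area",            # add KF / assign to aggregate
    "consistency_check_and_init_pa",
    "repair_datasource",
    "adjust_datasource_fields",        # select/hide fields
    "test_datasource",
    "transport_pa_changes",
    "replicate_and_transport_datasource",
    "activate_bw_objects",             # DataSource, InfoSource, DTPs
    "test_extraction",
    "transport_bw_objects",
]

def validate_order(done):
    """True if the executed steps respect the prescribed sequence."""
    positions = [STEPS.index(s) for s in done]
    return positions == sorted(positions)
```

For example, moving the BW transport before the PA transport would fail this check, which mirrors the extraction-queue failures Rajesh warns about.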

Thanks,

Rajesh

former_member182537
Active Participant
0 Kudos

Hi Rajesh,

Could you please confirm what is meant by "repair data sources"?

I know that these new fields will be added to my MALO data source automatically, so what activity is needed for this data source on the APO side? If this data source is also used on the APO side for extraction, do you mean I should check that, and should the APO team lock it along with their transports?

Also, what checks do I need to do on the APO side for these data sources?

Thanks a lot

Nilesh

rajkj
Active Contributor
0 Kudos

Hi Nilesh,

As you are aware, t-code /SAPAPO/SDP_EXTR has all the utility functions, such as generate, change, check, repair, test, replicate, and delete data source. When you first generate the data source for your SNP planning area and aggregate, the system creates the underlying extraction structure and tables. You can check them by selecting the Display Data Source button.

Repair Data Source: When you change your planning area by adding a new key figure, or more specifically by assigning it to an aggregate such as 9AMALO, this change needs to be propagated to all downstream structures such as the data source, extraction structures, database tables, etc. The Repair Data Source function does this job and updates all the underlying structures so that the new key figure's data can be extracted from the planning area.
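The idea behind "repair" can be pictured as a schema sync: the planning area's key-figure list is the source of truth, and the extraction structure must be brought back in line with it. A hypothetical sketch, with all field names invented for illustration:

```python
# Sketch: after a key figure is added to the planning area, the
# extraction structure lags behind until it is "repaired".
# All field names below are invented for illustration.

def missing_fields(planning_area_kfs, extraction_structure_fields):
    """Key figures present in the planning area but absent from the
    extraction structure -- these are what a repair would add."""
    return sorted(set(planning_area_kfs) - set(extraction_structure_fields))

def repair(planning_area_kfs, extraction_structure_fields):
    """Return the extraction structure extended with the missing fields."""
    return list(extraction_structure_fields) + missing_fields(
        planning_area_kfs, extraction_structure_fields)
```

In this picture, extracting before the repair simply means the new key figure's column does not exist yet on the extraction side.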

Inconsistency Check: The Check Data Source function highlights inconsistencies, if present, in the data source and the underlying objects such as extraction structures, database tables, etc.

Data Extraction Check: The APO team uses the planning area data extraction tool set (t-code /SAPAPO/SDP_EXTR) to provide you a working data source in BI. As mentioned at the beginning, this tool set has functions to perform a consistency check and to test the data source. The testing functionality allows you to use filters (selection fields) and extract data. The execution indicates how many records are retrieved and lets you visualize these records without having to use BI.

Transport: The APO team needs to transport the updated PA to the upper landscape (t-code /SAPAPO/TSOBJ). Then the APO data source needs to be transported using the data extraction tool set (Transport Data Source button). These transactions bundle all the required objects into the transport requests, so the APO team does not need to update the requests manually.

Summary of data source checks on the APO side:

  1. Click the 'Check Data Source' button to perform a consistency check and ensure that there are no errors.
  2. Click the 'Test Data Source' button to extract a sample of data and ensure that the retrieved data matches the planning area, i.e. the data visible in the planning book's data view.

The process is very simple and user friendly.

Thanks,
Rajesh

former_member182537
Active Participant
0 Kudos
Thanks Rajesh. The APO team moved their changes to the test system and performed all the activities. They transported the SNP data sources as well, and on the BW side I just replicated these data sources and they worked fine for me. So I think I need not transport my data source then?

Also, one more thing about the new KF that has been added to the APO planning book: is there any setting that needs to be done in APO? My extractor is pulling a zero value for that new KF. Any idea?

Thanks
rajkj
Active Contributor
0 Kudos

Hi Nilesh,

Ensure that you have some known data for the new key figure in the planning area, using the planning book and data view. Then use the 'Test Data Source' option as mentioned in my previous post, perform a data extraction test, and validate the records retrieved. If this test is successful, your BI data pull should work without any issue. If the testing was successful on the APO side but the BI job did not extract the data as expected, then you need to look into the DTP process all the way down to the data source on the BI side.

Thanks,

Rajesh

former_member182537
Active Participant
0 Kudos

Hi Rajesh,

Everything went fine to production; there is just one extractor issue for SNP MALO.

Before the APO changes (the addition of the new KF to the planning book), I used to extract past data for the last 2 months, around 1.3 million (13 lakh) records. After all those activities (planning area re-initialization, addition of the new KF), all forward planning data extraction is OK, but the past data for the last 2 months is much smaller, only around 75,000 records compared to the earlier 1.3 million. I even repaired the data source and created a test data source in production as well.

Do you know if there is any setting in APO that can fix this?

Also, past-due data is extracted from 1 February onwards, based on the dates maintained in the time series. So why is the past data for some other products not extracted?

Example: based on today's date, I am putting in a selection based on plant and the last 2 months.

Please advise.

Thanks

Nilesh

rajkj
Active Contributor
0 Kudos

Hi Nilesh,

As we discussed earlier, the life span of planning area data is determined by the storage bucket profile. Please check the number of past periods covered by your profile, and the number of data records (different orders) for known location products in the planning book and data view. Past data not covered by the storage bucket profile is not persisted in liveCache. So you can expect liveCache to store data only on a rolling-period basis; non-referenced data is released from its memory.
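The rolling-window effect can be illustrated with simple date arithmetic: any part of a selection that falls outside the profile's past horizon simply no longer exists in liveCache, which is why a 2-month-back selection can suddenly return far fewer records. A hypothetical sketch (the horizon lengths are made up, not taken from any real profile):

```python
from datetime import date, timedelta

# Sketch: a storage bucket profile keeps only a rolling window of data.
# Horizon lengths below are invented for illustration.

def retained_window(today, past_days, future_days):
    """The date range that liveCache still holds under the profile."""
    return (today - timedelta(days=past_days),
            today + timedelta(days=future_days))

def extractable(requested_from, requested_to, today, past_days, future_days):
    """Intersection of the requested selection with the retained window;
    anything outside it was already released from liveCache."""
    lo, hi = retained_window(today, past_days, future_days)
    start, end = max(requested_from, lo), min(requested_to, hi)
    return (start, end) if start <= end else None
```

For example, with a profile keeping only 30 past days, a selection reaching 2 months back would be truncated to the last 30 days, and a selection entirely in the older past would return nothing at all.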

You can extract this data into a spreadsheet from the data view (interactive planning user interface, t-code /SAPAPO/SDP94). If you don't have a data view with past periods, you can define a suitable time bucket profile and associate it with the existing data view.


Then use t-code /SAPAPO/SDP_EXTR to extract the data through the data source for the same planning area, with a known set of location products and date range. Typically, it should match your spreadsheet. If there is a gap, your APO team needs to investigate.
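Once both sides are exported, the comparison can be done mechanically. A hypothetical sketch, assuming each side has been flattened into a mapping of (product, location, period) to a key-figure value (the keys and tolerance are illustrative, not an SAP format):

```python
# Sketch: reconcile the data-view spreadsheet export against the
# extractor output. Key shapes and values are invented for illustration.

def reconcile(spreadsheet, extracted, tolerance=1e-6):
    """Return keys missing from the extract, and keys whose values differ
    beyond the tolerance."""
    missing = sorted(k for k in spreadsheet if k not in extracted)
    mismatched = sorted(
        k for k in spreadsheet
        if k in extracted and abs(spreadsheet[k] - extracted[k]) > tolerance)
    return missing, mismatched
```

Records that show up under "missing" for past periods would point at the storage-bucket-profile horizon; value mismatches would point at the extraction or DTP layer instead.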

Thanks,
Rajesh

Answers (0)