
Optimised data loading in APO-BW

Hi everyone,

My requirement is to load data from Enterprise BW to APO BW for planning in APO.

My source is an InfoCube in Enterprise BW. I have to derive some characteristic InfoObjects in APO BW based on transaction data from EBW.

For this purpose I am using a write-optimised DSO as staging in APO BW, and in the transformations I am writing routines to load data into the derived InfoObjects in APO BW. Then I am loading data from the write-optimised DSO to a standard InfoCube. This involves a lot of ABAP coding.

So my data flow in APO BW will be

INFO CUBE ------> WDSO -------> INFO CUBE
(Standard BW)     (APO BW)      (APO BW)

I am doing weekly loads to the WDSO from the EBW cube, and monthly loads to the InfoCube from the WDSO with full update.

Is there any way I can optimise performance on the APO BW side according to best practices?

I very much appreciate your help.

Thank you.



2 Answers

  • Best Answer
    Posted on Nov 17, 2013 at 03:57 AM

    Hi Sri,

    (1) InfoObject Pruning: Using the APO Cube as your guide to "Which InfoObjects" do I really need in the solution, trim the number of InfoObjects contained in the WAPO down to only what APO needs ... as opposed to all the InfoObjects that the BW Cube can give you. The width of the records being processed does have a reasonable impact on general ETL loading.

    (2) A further addition to (1): see if the DataSource definition in APO can be trimmed down to request only the InfoObjects you want in APO. A cube will offer many characteristics and key figures for reporting in BW that APO just won't care about, so make sure your DataSource only takes what it needs.

    (3) While write-optimised DataStores are technically faster in terms of the time it takes to extract right now, they are a lot slower in the grand scheme of your whole data flow. Using a standard DataStore in APO introduces a slowdown for activating the requests, but the advantages are much greater: (a) the data flow from that point onwards through APO is a TRUE delta; (b) the activation process compresses away the granularity difference between the DataSource (BW cube) and the DataTarget (APO DataStore); (c) you can run the BW cube to APO DataStore extraction every night, effectively spreading the data volume over 7 nights instead of jamming it into 1 night once a week; (d) a standard DataStore in APO also offers better support options for the requests, because you can do repair full loads and have activation generate true deltas while confirming that the whole DataSet matches between DataSource and DataTarget.

    (4) Consider introducing an InfoSource in APO directly after the DataSource and before the APO DataStore. The intention is to use the "Key Field" ticks on the InfoSource structure definition to match the key of the DataTarget (APO DataStore). The reason to use the key-field feature of InfoSources is that BW 7 will do an in-memory compression of the DataPacket as it flows through. This can deliver a significant improvement (without ABAP) as early as possible in the ETL data flow, whenever the DataSet in the DataProvider is significantly more granular than the DataSet definition in the DataTarget where it is going to live.
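To make the idea in (4) concrete, here is a small sketch, in Python rather than ABAP, of what that in-memory compression amounts to: records in a DataPacket that share the same key-field values are aggregated into one record, so the packet shrinks before it ever reaches the DataTarget. The field names and the packet are invented for illustration.

```python
from collections import defaultdict

def compress_packet(packet, key_fields, key_figures):
    """Aggregate records by key_fields, summing the key_figures."""
    totals = defaultdict(lambda: defaultdict(float))
    for rec in packet:
        key = tuple(rec[f] for f in key_fields)
        for kf in key_figures:
            totals[key][kf] += rec[kf]
    # Rebuild one record per distinct key.
    return [{**dict(zip(key_fields, key)), **dict(kfs)}
            for key, kfs in totals.items()]

# A date-grain packet compressing to the target's monthly grain:
packet = [
    {"material": "M1", "month": "2013-11", "qty": 10.0},
    {"material": "M1", "month": "2013-11", "qty": 5.0},
    {"material": "M2", "month": "2013-11", "qty": 7.0},
]
compressed = compress_packet(packet, ["material", "month"], ["qty"])
# Three records collapse to two, already at the DataTarget's granularity.
```

The greater the granularity difference between DataProvider and DataTarget, the more the packet collapses, which is exactly why the key-field ticks matter.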

    Hope this helps.

    Kind regards,



    • Hi Sri,

      Using an InfoSource to optimise the size of a DataPacket as it flows through the system has several aspects that all contribute to the solution. [Grab a coffee]

      Start with the usual data flow, from DataProvider [A] to DataTarget [B] with a Transformation [C] in between and a matching DTP [D].

      At this point you know the exact record definition of [A] and [B]. You have assigned the appropriate InfoObjects (Characteristics and Key Figures) to the fields of the record. You also know what business rules you want to apply to the records as they flow from [A] to [B].

      An optional step you might have done is to consider the reporting usage of the records in the query, at which point you might have added:

      1. Additional time characteristics;
      2. Date key figures;
      3. Attributes from related master data;
      4. Counter key figures;
      5. New key figures to store before aggregation calculations;
      6. Forced upper case and ALPHA converted characteristic values;
      7. Meta data like a new characteristic for logical partitioning the SubSets;
      8. Filter to drop unwanted records, etc, etc, etc.
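As an illustration of item 6 above, here is a rough Python approximation (not the actual ABAP conversion exit) of what a forced upper case plus ALPHA conversion does to a characteristic value: purely numeric values are left-padded with zeros to the field length, everything else passes through upper-cased. Treat the exact behaviour as a simplification.

```python
def alpha_convert(value: str, length: int) -> str:
    """Rough ALPHA-style conversion: numeric values are right-justified
    and left-padded with zeros; other values pass through upper-cased."""
    value = value.strip().upper()
    if value.isdigit():
        return value.rjust(length, "0")
    return value

alpha_convert("4711", 10)    # -> "0000004711"
alpha_convert(" abc1 ", 10)  # -> "ABC1"
```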

      You will be able to work out what is required to make the above business rules and transformation activities work. The dependencies of each will be tied to specific InfoObjects on the record. Now you can determine whether each dependency relates more to the incoming DataProvider or more to the outgoing DataTarget.

      For example: a record counter key figure is related to the DataTarget more than the DataProvider, because why would you want to spend CPU cycles adding a '1' counter to a record that may yet be filtered out of the DataPacket? (You would not.)

      For example: calculated key figures based upon dates would be related to the DataProvider more than the DataTarget, because you plan on using month/period within APO. Given the DataProvider supplies dates, you can calculate date variances and store the result (a duration) without the actual dates themselves being stored in the DataTarget.

      Now you should have an appreciation of which side of the Transformation your business logic leans towards, the DataProvider or the DataTarget. With this in mind you can evaluate whether there is a significant enough difference in granularity between the DataProvider and the DataTarget, and confidently know on which side of the new InfoSource the business rules and their related InfoObjects should belong.

      You have already mentioned one significant difference: time will change from date to month/period granularity in the time dimension. What about the other dimensions in your record?

      • Are you allowing a customer dimension to summarise up to a customer attribute? Like customer group or customer level 3.
      • Are you allowing a material dimension to summarise up to a material attribute? Like plant, material type, brand, pack, or a combination of these.
      • Are you allowing a schedule line item record to summarise up to a document number or perhaps dropping the document dimension completely?

      When you have a significant enough change in granularity between the DataProvider and the DataTarget, you have an opportunity to have the DataPacket “Self Compress” on its journey between the two, while it exists in memory (RAM) on the application server.

      The main benefit is to reduce the height of the DataPacket as soon as possible in the data’s journey downstream through the DataModel.

      The second benefit is that you can use an InfoSource to also reduce the width of the record as soon as possible which in turn reduces the in-memory footprint of the DataPacket.

      Once you have decided that introducing an InfoSource is beneficial, you can build it in parallel to the existing Transformation [C] and DTP [D].

      Using the same data flow, from DataProvider [A] to DataTarget [B], you will now introduce an InfoSource [E] with two new Transformations [F & G] and a matching DTP [H].

      1. Create the new InfoSource [E].
      2. Create a Transformation [F] between the DataProvider [A] and the InfoSource [E].
      3. Create a Transformation [G] between the DataTarget [B] and the InfoSource [E].
      4. Create a matching DTP [H].
      5. Optimise (Optional).

      Step 1: Create the InfoSource

      Create the new InfoSource [E] under an appropriate Application Component.

      Add all the InfoObjects that are available from the DataProvider [A] and are wanted in the DataTarget [B].

      Add any missing InfoObjects that form the record key; the full unique key that identifies records as independent from each other.

      Tick each InfoObject that forms part of the record's key as a key field.

      Do you need to add the 0RECORDMODE characteristic to honour delta record integrity? Probably yes.
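To see why 0RECORDMODE matters, here is a deliberately simplified Python sketch (not ABAP, and not the full set of record modes) of how the record mode drives the merge of a delta record into the active data. Only the after-image ('') and delete ('D') modes are shown; the data structures are invented for illustration.

```python
def apply_delta(active, delta):
    """active: {key: key_figure}; delta: list of (key, mode, value).
    Mode ''  = after image, overwrites the active record.
    Mode 'D' = delete image, removes the active record."""
    for key, mode, value in delta:
        if mode == "D":
            active.pop(key, None)
        else:
            active[key] = value
    return active

table = {"DOC1": 10.0, "DOC2": 5.0}
result = apply_delta(table, [("DOC1", "", 12.0), ("DOC2", "D", 0.0)])
# -> {"DOC1": 12.0}
```

Without a record mode on the InfoSource, the downstream DataTarget has no way to distinguish an overwrite from a deletion, which is why the answer above is "probably yes".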

      Step 2: Create Transformation [F]

      Add a new transformation between DataProvider [A] and the InfoSource [E].

      Add the business rules that are more related to the DataProvider [A] into the Transformation [F]. If the result of the business rule outputs a value that needs to be made available to the Transformation [G] or stored in the DataTarget [B] then ensure an appropriate InfoObject is used in the InfoSource [E].

      Step 3: Create Transformation [G]

      Add a new transformation between DataTarget [B] and the InfoSource [E].

      Add the business rules that are more related to the DataTarget [B] into the Transformation [G].

      Step 4: Create DTP [H]

      Add a new DTP [H] to trigger data loads through the new data pathway that involves the InfoSource [E].

      Step 5: Optimise (Optional)

      Did you want additional InfoObjects for a better query reporting user experience? Now add the business rules for the additional time characteristics, date key figures, counters, etc.

      Too many records? Check the records arriving from the DataProvider [A] to see if there are sub-sets of records that are irrelevant to the reporting done on the DataTarget [B] by the end-user queries. You can add a filter to the DTP, and also to the Transformation [F], to drop the unwanted records as soon as possible.

      For Example: CO-PA data for sales volume and/or fiscal amounts could be delivered at least three times due to the source data being stored redundantly in different currency types. Chances are that the DataTarget [B] only cares about one currency type so filter out the unwanted records.
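A minimal sketch of that currency-type filter, again in Python with invented field names (the currency-type codes in the comments follow common SAP convention but should be treated as illustrative):

```python
def keep_currency_type(packet, wanted="10"):
    """Drop records whose currency type is not the one the
    DataTarget cares about; apply as early as possible."""
    return [rec for rec in packet if rec["CURTYPE"] == wanted]

packet = [
    {"CURTYPE": "10", "amount": 100.0},  # e.g. company code currency
    {"CURTYPE": "00", "amount": 90.0},   # e.g. transaction currency
    {"CURTYPE": "B0", "amount": 95.0},   # e.g. operating concern currency
]
filtered = keep_currency_type(packet)
# Only one of the three redundant records survives.
```

In the real data flow this filter would live in the DTP selection or in the start routine of Transformation [F], so the unwanted records never travel further downstream.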

      At this point there is a whole different topic around where to apply the business rules within the Transformations [F & G]: do you implement for performance optimisation, or for easier supportability by the BW administrator? That's a separate discussion, so for now do what you feel more comfortable with. Add the business rules to the start routine, field rules and/or end routine as desired.

      Follow Up: Did you remember to remove the old DTP [D] from active execution by process chains or a scheduled job?

      Hope this helps.

      Kind regards,

  • Posted on Nov 16, 2013 at 07:07 PM

    Your question is very generic. Since you say you have so much ABAP code in APO, you should apply all the ABAP best practices there. There are plenty of threads and documents on Google and SCN covering those standards.

