
AMDP planning function: read from one cube and write to a different cube

I want to use an AMDP planning function that reads from one cube of a MultiProvider and writes to another cube of the same MultiProvider. The aggregation level is therefore the same for both cubes.

How can I do this?

Method 1: I can use the generated analytical view to read from CUBE1 and then write to CUBE2. (I want to avoid this method.)

Method 2: Would the approach below work?

E_VIEW = SELECT DIM1, DIM2, 'CUBE1' AS INFOPROV, KF1
         FROM :I_VIEW
         WHERE INFOPROV = 'CUBE2';

INFOPROV has been defined as TYPE RSINFOPROV in the declaration.
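
For completeness, this is roughly how the statement would sit inside the AMDP method body (a sketch; the method name and signature are assumptions based on the pattern of the SQLScript planning function how-to guide):

METHOD execute BY DATABASE PROCEDURE FOR HDB
               LANGUAGE SQLSCRIPT
               OPTIONS READ-ONLY.
  -- I_VIEW holds the records selected by the filter on the aggregation
  -- level, including the INFOPROV column of the MultiProvider.
  -- Relabelling INFOPROV should redirect the records to the other cube
  -- on write-back.
  E_VIEW = SELECT DIM1, DIM2, 'CUBE1' AS INFOPROV, KF1
           FROM :I_VIEW
           WHERE INFOPROV = 'CUBE2';
ENDMETHOD.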



5 Answers

  • Best Answer
    Mar 08, 2017 at 04:34 PM

    Hi Rajarshi,

    Why don't you use the standard planning function type 'Copy'?

    Regards,

    Gregor


  • Mar 09, 2017 at 04:19 AM

    Hi Gregor

    I found that, apart from method 1, method 2 also works. The INFOPROV field dumps the data into whatever InfoProvider is mentioned.

    My actual requirement is more complicated, as it contains business logic; I gave a simple example just to highlight the crux of my problem.

    rishi


  • Mar 09, 2017 at 04:25 AM

    Gregor, thanks for your answers. Can you also comment on the points below?

    In AMDPs we pass parameters by value, but in conventional ABAP programming passing by reference is better for performance. Will AMDPs also support passing by reference in the future, and would that help planning function performance?

    Secondly: we have filters on the aggregation level. Are these filters pushed down to the database level, or are they applied afterwards (i.e. more data is pulled from the InfoProvider and then filtered in the ABAP application layer)?

    That is, would performance be better if we do the following:

    select * from INFOPROVIDER where dim1 = filter and dim2 = filter

    In this case we could leave the RSPLAN filter wide open, but we would be guaranteed that the least amount of data is picked from the database layer (InfoCube).
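
    In AMDP terms I mean something like this (a sketch; the view name is only a placeholder for the provider's generated SQL view):

    lt_scope = SELECT DIM1, DIM2, KF1
               FROM "PLACEHOLDER_PROVIDER_VIEW"  -- placeholder, not a real object
               WHERE DIM1 = 'FILTER_VALUE1'      -- filters applied directly in HANA
                 AND DIM2 = 'FILTER_VALUE2';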


  • Mar 09, 2017 at 08:56 AM

    Hi Rajarshi,

    when you get a dump in a standard planning function, you should open a problem ticket, since this (almost always) indicates a program error.

    You can also use FOX to implement a 'copy' function with some additional logic. FOX is also HANA optimized.

    Filters are important to describe the data region one wants to change; the filter is thus also used to set enqueues. And yes, the filter is pushed down to HANA as well.

    If you really need an AMDP, you can find a how-to paper about this topic; cf. the following link:

    https://blogs.sap.com/?p=144309

    Remark:

    Passing parameters by reference is usually only possible within one runtime, not across two: an AMDP gets its parameters from ABAP and calls a SQLScript procedure, so two runtimes are involved that cannot share references. But this is not an issue, since fine-granular calls of AMDPs (e.g. add 1 to a value, called 100,000 times) make no sense anyway; AMDPs are designed to push 'big procedures' down to HANA (bring the algorithm to the data).
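
    To illustrate the pass-by-value point: AMDP method parameters have to be declared with VALUE(...). A minimal sketch with made-up names:

    CLASS zcl_amdp_demo DEFINITION PUBLIC FINAL CREATE PUBLIC.
      PUBLIC SECTION.
        INTERFACES if_amdp_marker_hdb.  " marks the class as an AMDP class
        TYPES: BEGIN OF ty_rec,
                 dim1 TYPE c LENGTH 10,
                 kf1  TYPE p LENGTH 16 DECIMALS 2,
               END OF ty_rec,
               tt_rec TYPE STANDARD TABLE OF ty_rec WITH EMPTY KEY.
        METHODS process
          IMPORTING VALUE(it_data)   TYPE tt_rec   " pass by value is mandatory
          EXPORTING VALUE(et_result) TYPE tt_rec.  " values cross the ABAP/HANA boundary
    ENDCLASS.

    CLASS zcl_amdp_demo IMPLEMENTATION.
      METHOD process BY DATABASE PROCEDURE FOR HDB
                     LANGUAGE SQLSCRIPT
                     OPTIONS READ-ONLY.
        -- one 'big' set-based statement instead of many fine-granular calls
        et_result = SELECT dim1, SUM( kf1 ) AS kf1
                    FROM :it_data
                    GROUP BY dim1;
      ENDMETHOD.
    ENDCLASS.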

    Regards,

    Gregor


  • Mar 09, 2017 at 05:39 PM

    Thanks for clarifying on the filters. Many clients use exits in the filters. Do filters that contain user exits also get pushed down to the HANA DB level when the data is read?

    I am actually converting slow-running FOX (usually with nested loops) to AMDP and seeing varying levels of performance benefit.

    I did indeed start with the "How to" guides, which I believe were written by you.

    I also have a query: in your how-to guide, a few of the examples using cursors write to temporary tables, i.e. data has to be written to #tmp_tables.

    Normally, writing is an expensive operation in HANA, as data first hits the delta store; at certain intervals the data from the main store is combined with the delta store, re-ordered, and written back to the main store (the delta merge process).

    So are writes to #tmp_tables also first written to the delta store, or do they work differently?

    The reason I am asking is that, instead of writing to a #tmp_table, we can use table variables with a union:

    table_var = SELECT * FROM :table_var UNION SELECT * FROM something;
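
    In other words, something like the following sketch (made-up names; as I understand it, each assignment just re-binds the table variable to a new intermediate result, so no table is written):

    lt_result = SELECT dim1, kf1 FROM :it_data WHERE kf1 >= 0;
    lt_result = SELECT * FROM :lt_result
                UNION ALL
                SELECT dim1, kf1 FROM :it_data WHERE kf1 < 0;

    UNION ALL would presumably be the cheaper choice here, since plain UNION additionally eliminates duplicates.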
