Former Member

Why do we check transactions OM13 and OM17 in SAP APO? What happens if we run these transactions in production?

My first questions are about OM13:

1. Why do we check transaction OM13 in SAP APO? What is an LCA object, and what does it do in SAP APO?

2. When we run the critical logging event check in OM13, what does it do? What is a critical logging event, and what is the effect of not checking it?

3. The critical logging event check fails because of the job OM_DAILYREORG. What does that job do, and why does it fail?

Now my questions about OM17:

1. We check inconsistencies between liveCache and the APO database here, but we get the error "incoming order quantity can not be adjusted" while checking product allocations. What is the solution to this error?

2. Whenever we run the DP time series check in OM17, we get the error "Number of superfluous time buckets profiles:". What is the solution to this error?

Could anyone please answer the questions above? It would be a great help.

I have searched everywhere for these answers but could not find them. Kindly provide answers.

Thank You


1 Answer

  • Former Member
    Jan 18 at 02:11 AM

    1. "Why do we check transaction OM13 in SAP APO? What is an LCA object, and what does it do in SAP APO?"

    - Some SCM components, such as APO, need the liveCache database to run their applications.

    APO master data is saved both in the SCM database and in liveCache. APO transactional data (orders, time series data, and so on) is available only in liveCache on the SCM system. In liveCache, objects are stored in class containers and can be accessed and manipulated only via LCA routines; for speed, the liveCache data is loaded into memory for access. Registration of the LCA routines happens automatically when liveCache on MaxDB is started in transaction LC10; check the lcinit.log file.

    liveCache on HANA is also released to customers. If you installed the SCM system on HANA, install the compatible LCAPPS plugin, set up the LCA/LDA/LEA connections in transaction /nDBACOCKPIT, and initialize the liveCache.

    You could also migrate the SCM system to HANA with liveCache on HANA.

    The shared procedures in liveCache are written in C++ and shipped to customers as binary shared libraries (LCAPPS routines): together with liveCache on MaxDB in the case of liveCache on MaxDB, or with the LCAPPS plugin in the case of liveCache on HANA.

    The names of all procedures that have been called from ABAP can be found by executing transaction LC10, or /nDBACOCKPIT (for liveCache on HANA), on your SAP SCM system.

    If you need to see the interfaces of the procedures (the LCA routines called in the '/SAPAPO/OM' function modules from ABAP), you can use transaction SE80 and navigate to the development class '/SAPAPO/OM'.

    See more details at service.sap.com/scm -> SAP SCM in Detail -> Technology -> "liveCache Overview".

    The applications call the //OM* functions, which in turn call the LCA routines to access and manipulate the liveCache data as needed. Which calls occur depends on your application scenario and data.

    - Using transaction /n/SAPAPO/OM13 you can check and list, on the first tab, the versions of the software related to the liveCache applications; the second tab contains the basic checks, with an alert semaphore option, for the proper operation of liveCache on your system.
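    The idea behind that first tab — comparing installed component versions against the expected ones and flagging mismatches with traffic-light semaphores — can be sketched in plain Python. This is purely illustrative; none of these names are SAP APIs, and the version strings are made up:

```python
# Conceptual sketch of a version check with traffic-light ("semaphore")
# results, loosely mirroring what a check tab like OM13's displays.
GREEN, YELLOW, RED = "green", "yellow", "red"

def check_versions(installed, expected):
    """Compare installed component versions against expected ones.

    Returns a semaphore per component: GREEN on a match, RED on a
    mismatch, YELLOW when no expectation is known for the component.
    """
    results = {}
    for name, version in installed.items():
        want = expected.get(name)
        if want is None:
            results[name] = YELLOW   # unknown component: warn only
        elif version == want:
            results[name] = GREEN
        else:
            results[name] = RED      # version mismatch: critical
    return results
```

    A RED result here corresponds to the kind of finding that, in the real transaction, would warrant raising an SAP message.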

    2. "When we run the critical logging event check in OM13, what does it do? What is a critical logging event, and what is the effect of not checking it?"

    - The critical LCA logging events can be listed in //OM11. Check the red semaphores for the last 5 days, or follow the "i" box on the checks tab in //OM13.

    - If you cannot solve those issues, create an SAP message on component BC-DB-LCA to get SAP support.

    - Not solving critical LCA logging events in time can lead to performance problems in the liveCache applications, or to the applications failing to run at all.

    3. "The critical logging event check fails because of the job OM_DAILYREORG. What does that job do, and why does it fail?"

    - Review the SAP Notes:

    679118 - Which operations are performed by /SAPAPO/OM_REORG_DAILY?

    800927 - Standard jobs in the SCM/APO area

    - Check the system log on the SCM system and the job log to see the errors.

    - If you cannot solve the case after checking the errors, create an SAP message on component BC-DB-LCA to get SAP support.

    4. "We check inconsistencies between liveCache and the APO database here, but we get the error "incoming order quantity can not be adjusted" while checking product allocations. What is the solution to this error?"

    - In general, before making a correction with transaction /SAPAPO/OM17, you want to check the inconsistencies again. If parallel postings are carried out during a check, the system can, for example, report inconsistencies that are only temporary. If the same inconsistency is still listed when you run the check during a downtime, with no other application running in parallel, create an SAP message on component SCM-APO-ATP-BF-PAL (Check Against Product Allocations).
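    The double-check logic described above — re-running the comparison so that mismatches caused by in-flight parallel postings are filtered out, and only persistent inconsistencies are reported — can be illustrated with a small plain-Python sketch. This is not an SAP API; the functions and data are hypothetical:

```python
def find_mismatches(livecache, apo_db):
    """Return keys whose values differ between two data stores."""
    keys = set(livecache) | set(apo_db)
    return {k for k in keys if livecache.get(k) != apo_db.get(k)}

def persistent_inconsistencies(read_livecache, read_apo_db):
    """Run the comparison twice and keep only mismatches seen both times.

    A mismatch that appears in only one pass was probably a parallel
    posting caught mid-flight, i.e. a temporary inconsistency.
    """
    first = find_mismatches(read_livecache(), read_apo_db())
    second = find_mismatches(read_livecache(), read_apo_db())
    return first & second
```

    In the real transaction the safest equivalent of "read twice with nothing in between" is to run the check during a downtime, as the answer recommends.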

    5. "Whenever we run the DP time series check in OM17, we get the error "Number of superfluous time buckets profiles:". What is the solution to this error?"

    - Superfluous objects can be left in liveCache after a DP time series application aborts, for example when a work process is killed or an application server is shut down.

    You can mark those inconsistencies and run the corrections; the objects will then be deleted in liveCache.

    * Additionally, review SAP Note 1723242 - SCM/APO and ERP consistency after liveCache recovery.

    Best regards, Natalia Khlopina
