Former Member

HCI – Process and recommendation for scheduling integration jobs

Hi all,

Once you have created and tested all your tasks / data flows, what is the recommended approach to schedule them?

First, I wanted to group my data flows in the correct execution order within a process and then schedule only this process.

For this, I ran several tests:

- I created a process to load simple Master Data (Product, Location, Customer, etc.). It works fine

- Then, I created a process to load compound Master Data (ProductLocation, ProductCustomer, etc.). The process seems to work fine, but the post-processing in IBP never ends, and the data is never loaded. Note that all these data flows work fine individually.

- For some Master Data, I need to perform a multi-step integration:

1) Insert / Update IBP Master Data

2) Export this new IBP Master Data into a flat file

If I use a Process to group these 2 data flows, the post-processing of the first data flow is not finished when the second data flow begins, which means that I don't extract the correct data.

Based on these observations, I think the best option would be to:

1) Group Simple Master Data in a process and schedule it

2) For the other Master Data, extract-to-flat-file, and key figure loads, schedule individual tasks one by one

If I do so, how can I be sure that a task is over when the following one begins? Do I start a task every 10 minutes, hoping that the previous one has been completely processed?
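Rather than relying on fixed time gaps, one option is to poll each task's status and only start the next task once the previous one reports completion. Below is a minimal Python sketch of that pattern. Note that `start_task` and `status` are hypothetical stand-ins (HCI does not expose this exact interface); the `FakeScheduler` class only simulates a job API so the sequencing logic can be shown end to end.

```python
import time

# Hypothetical in-memory stand-in for a real job-status API.
# The method names are illustrative only, not an actual HCI interface.
class FakeScheduler:
    def __init__(self):
        self.finished = set()
        self.log = []

    def start_task(self, name):
        self.log.append(f"start {name}")
        # Simulate the task completing (a real task would finish later).
        self.finished.add(name)

    def status(self, name):
        return "FINISHED" if name in self.finished else "RUNNING"

def run_sequentially(scheduler, tasks, poll_interval=0.01):
    """Start each task only after the previous one reports FINISHED,
    instead of hoping a fixed 10-minute gap was long enough."""
    for name in tasks:
        scheduler.start_task(name)
        while scheduler.status(name) != "FINISHED":
            time.sleep(poll_interval)
        scheduler.log.append(f"done {name}")

sched = FakeScheduler()
run_sequentially(sched, ["ProductLocation", "ExportFlatFile"])
print(sched.log)
```

The key point is the polling loop between tasks: the second data flow (here, the flat-file export) cannot begin until the first one is fully processed, which addresses the post-processing overlap described above.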

Has anyone encountered the same issue with Process in HCI?

Thank you for your input,



1 Answer

  • Former Member
    Nov 26, 2015 at 12:33 PM

    Hi Pierre,

    I also once had a Process that kept running without ever finishing Post Processing. Have you tried running the process with a different check module setting?

    At that time, the problem was fixed by changing the Check Status Module to "Dataflow" in the Process Execution properties.




    • Former Member

      Hi Pierre,

      I would recommend the following grouping:

      1) Process1 -- as you mentioned (Product, Location, Customer, etc.).

      2) Process2 -- group only tasks that have no interdependency; e.g. do not group ProductLocation and ProductCustomer, as they share the Product column.

      3) Process3 -- individual tasks.

      4) KF tasks -- don't group any KF tasks; they will end in error.

      I think there is a table-locking issue in IBP, which is why you were unable to load ProductCustomer and ProductLocation as part of a single process.

      I would recommend the following test: run ProductCustomer individually and, immediately after, run the ProductLocation task. Once both tasks are in IBP, monitor whether they finish or not.

      It will take more time, since the stored procedure will be running for both tasks at the same time in IBP.

      Let me know results for this as well.

      Regarding the other strategy of a 10-minute gap: it is a good idea, but you should measure 5-7 runs of each task so you learn the average time per task, then keep a buffer of 5-10 minutes over and above that average. This way the processes/tasks will run in sequence and are unlikely to overlap.
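      The buffer calculation above can be sketched in a few lines of Python. The run times are made-up example values, not measurements from any real HCI task; the point is just averaging the observed durations and adding the suggested buffer.

```python
# Example durations (minutes) from 5 sample runs of one task.
# These numbers are illustrative, not real measurements.
run_times_min = [42, 38, 45, 40, 44]

avg_min = sum(run_times_min) / len(run_times_min)  # average run time
buffer_min = 10                                    # safety buffer (5-10 min suggested)
schedule_offset_min = avg_min + buffer_min         # gap to leave between task starts

print(f"average: {avg_min:.1f} min, schedule offset: {schedule_offset_min:.1f} min")
```

      With these sample values the average is 41.8 minutes, so the next task would be scheduled roughly 52 minutes after the previous one starts. A status-polling approach is still safer when a task occasionally runs far longer than its average.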