We have a job with multiple workflows, each containing multiple dataflows. When the job ran, we received an error on one of the dataflows.
However, the job itself only shows a warning status. The HDBODBC warning is a known one that we ignore, so our Ops team did not pick up that this job had actually failed.
The steps after the failed dataflow then did not run, and the monitor log shows the job died in the middle of the failed step.
This is how the workflow is set up; it was the first dataflow that failed.
Long story short: is this a known bug that is fixed in subsequent versions (we are on 4.0)? And is there a way to work around it so that the rest of the job can continue?
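One workaround we are considering, sketched below as an assumption (the dataflow names are made up), is wrapping each dataflow in its own Try/Catch block in Designer, so an error in one dataflow is trapped and logged while the rest of the job continues:

```
WF_Main
  Try
    DF_Load_Sales            # hypothetical dataflow name
  Catch (All Exceptions)
    # catch script: log the error so Ops sees a real failure, not just a warning
    print('DF_Load_Sales failed: ' || error_message());
  Try
    DF_Load_Inventory        # hypothetical dataflow name
  Catch (All Exceptions)
    print('DF_Load_Inventory failed: ' || error_message());
```

If the catch script should instead fail the whole job loudly, raise_exception() could be called there rather than print(). We have not tested whether this changes the warning-only status we saw on 4.0, so confirmation would be appreciated.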