
Developing B1if Java Atoms to run in parallel

Hi All,

I've developed a Java atom using the documentation available in the bizstore to make a call to a third-party application over TCP. The Java atom works functionally, but I'm having issues running parallel instances of it, which I need to be able to do.

The inbound for my scenario is a synchronous web service call. To test parallel execution of the java atom, I've stripped out the TCP call detail. The main call just waits 10 seconds.

I've tried setting isThreadSafe() to both true and false, but that seems to make no difference. The synchronous IPO steps always process the calls to my Java atom one after the other, never at the same time.

The documentation says that the Java class needs to have a parameterless constructor; I'm not sure whether I've implemented this correctly.

I'm running this test in B1if version 1.22.0

import java.util.Properties;
import java.util.concurrent.TimeUnit;

// BizProcMessage and Connection are provided by the B1i SDK jars (their imports are omitted here).
public class TestJavaCall {

	public TestJavaCall() {
		// parameterless constructor, as required by the documentation
	}

	public boolean isThreadSafe() { return false; } // also tried true; made no difference

	public void mustDestroy() {}

	public boolean usesJDBC() { return false; }

	public boolean call(Properties inProps, BizProcMessage inMsg, Properties outProps,
			BizProcMessage outMsg, Connection iConn, String jobID) throws Exception {
		TimeUnit.SECONDS.sleep(10); // stand-in for the stripped-out TCP call
		return false;
	}
}
Hopefully someone out there has a better idea of what to do; I'm stuck. If I can't figure it out soon, I may need to rethink the entire solution and move it outside of B1if.





1 Answer

  • Best Answer
    Jun 07, 2017 at 12:14 PM

    Dear Cameron,

    For the record, and to help others, here is the detailed answer from our B1i platform architect again:

    The reason for your issue is very likely located in an unexpected place:

    First, the interface-implementing class is flagged properly to be able to execute in parallel. Assuming the implementing code is also prepared for parallelism (e.g. caution with static member variables, and critical sections that may need to be synchronized), so far so good.
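    To illustrate the caution about static member variables: a minimal sketch (the class and method names are illustrative, not part of the B1i SDK) of shared state that stays correct if two step instances ever do call the atom at the same time. A plain static `int` counter would lose updates under concurrent increments; an `AtomicInteger` (or a synchronized block) makes the increment atomic.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical atom-like class: even with isThreadSafe() returning true,
// any state shared across calls must be guarded against concurrent access.
class CounterAtom {
    // Safe: AtomicInteger makes the read-modify-write a single atomic step.
    // A plain "static int calls; calls++;" could lose increments under parallel calls.
    private static final AtomicInteger calls = new AtomicInteger();

    public int call() {
        return calls.incrementAndGet(); // returns the number of calls so far
    }
}
```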

    But the issue is: to run in parallel, the class must also be *called* in parallel. Now coming to B1i: an instance of the BizProcessor (the entity that runs the code executed in one IPO-Step, where one or many IPO-Steps in turn compose a B1if scenario step) internally runs single-threaded, even if there is logical parallelism (e.g. using for-each / branch). That parallelism is simply executed in an interleaved, sequential way: the for-each is to be understood as a kind of iterator, not as a classical programming-language loop that executes in a pre-defined order. In fact, atoms in a BizFlow execute as they become ready, without regard to the others. That works because BizFlows have no internal side effects (e.g. no global variables that can be updated from anywhere). Knowing that, it is predictable how external side effects behave (e.g. a call to an external system from inside a for-each): no assumption must be made about call order in this case; if a fixed order is required, an explicit sequential call pattern serves better.
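    The distinction can be sketched in plain Java (an illustrative stand-in, not B1i code): a callee may be perfectly thread-safe, yet if the caller invokes it in a loop, the 10-second waits add up; only a genuinely parallel caller overlaps them. This is why toggling isThreadSafe() alone changes nothing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: mimics the observed behaviour with a short sleep
// standing in for the 10-second TCP call.
class CallerDemo {
    static void atomCall() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(100); // stand-in for the long-running atom
    }

    // Sequential caller: total time is roughly n * sleep, like a for-each
    // executed by one single-threaded BizProcessor instance.
    static long sequentialMillis(int n) throws Exception {
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) atomCall(); // one after the other
        return (System.nanoTime() - t0) / 1_000_000;
    }

    // Parallel caller: the waits overlap, total time is roughly one sleep.
    static long parallelMillis(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        long t0 = System.nanoTime();
        Future<?>[] fs = new Future<?>[n];
        for (int i = 0; i < n; i++)
            fs[i] = pool.submit(() -> { atomCall(); return null; });
        for (Future<?> f : fs) f.get(); // wait for all calls to finish
        pool.shutdown();
        return (System.nanoTime() - t0) / 1_000_000;
    }
}
```

    Running both with the same thread-safe callee shows the difference is entirely on the caller's side.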

    But coming back to the issue:

    No matter how many calls occur in one instance of an IPO-Step, they all happen in some kind of sequence (the processing is thus similar to a J2EE Enterprise JavaBean, which likewise has no multi-threading).

    The next consideration is what happens amongst such executing IPO-Step instances: in principle, they can execute in parallel.

    But there is another semantic constraint that applies to much integration content, and is applied to B1if as well: the need, in many cases, to logically process a stream of messages in a sequential, FIFO way. In B1if (as in other integration approaches), it often makes sense to execute like that.

    That comprises e.g. all steps triggered by queues, as a queue most likely stands for FIFO needs. Since business data most often has a CRUD lifecycle, it would be problematic if an update arrived first and the creation only afterwards.

    For steps triggered by external entities (e.g. incoming data from a web service call, or IDocs coming from R/3), it depends on how these entities behave. When a backend is the source, the data stream is most often sequential as well.

    So, with an IPO-Step and a message stream processed in a FIFO manner, no parallelism will be observed in the execution.

    Things change if a B1if scenario step consists of multiple, sequentially executed technical IPO-Steps (say, 2-3): in that case, one message can still be in step #2 while step #1 already processes the next message. In effect, both run in parallel.
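    This pipeline effect can be modelled (purely illustratively; the names are not B1i APIs) as two threads linked by a hand-off queue: step #1 can accept message n+1 as soon as step #2 has taken message n, yet the FIFO order of the messages is preserved end to end.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of two sequential IPO-Steps behaving as a pipeline.
class PipelineSketch {
    static List<Integer> run(int n) throws InterruptedException {
        BlockingQueue<Integer> handoff = new ArrayBlockingQueue<>(1);
        List<Integer> processed = Collections.synchronizedList(new ArrayList<>());

        Thread step1 = new Thread(() -> {           // "step #1": produces messages
            for (int msg = 1; msg <= n; msg++) {
                try { handoff.put(msg); } catch (InterruptedException e) { return; }
            }
        });
        Thread step2 = new Thread(() -> {           // "step #2": consumes messages
            for (int i = 0; i < n; i++) {
                try { processed.add(handoff.take()); } catch (InterruptedException e) { return; }
            }
        });
        step1.start(); step2.start();
        step1.join(); step2.join();
        return processed; // FIFO order survives even though the steps overlap
    }
}
```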

    Having said that, these two steps typically contain distinct code: step #2 calls the Java class in question, but step #1 doesn't. So, again, there is no effective parallel call of the class.

    But at the end of the day, such parallelism might still apply, e.g. if the class is executed from multiple distinct steps, or if a particular step is in charge of distinct, independent data: for example, a material-master upload might execute independently from an order upload. If B1if in fact uses physically the same steps for both (which depends on the technical details of the scenario), parallel execution would indeed happen.

    Another good source of parallelism (sometimes more than desired) is a solution where e.g. mobile devices make requests that need to be served immediately.

    So, as a tiny summary: parallelism, and awareness of it, is good, but the prerequisite is that the task itself lends itself to it, or can be designed to benefit from it.

    As a side note: blocking / long-lasting activities in IPO-Steps are fine for testing purposes, but should be avoided in real life, as long-lasting transactions create trouble for the whole environment. In your case, the 10-second pause is just fine to detect and visualize the sequential execution depicted above.
