Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

ABAP Push Channel under huge load. Any best practices?

dmitry_sharshatkin
Active Participant
0 Kudos

Hello,

I have a question regarding ABAP Push Channel, particularly about the mixed APC/AMC scenario: http://scn.sap.com/community/abap/connectivity/blog/2014/04/14/abap-channels-part-3-collaboration-sc... 

What if I attach this functionality to some system event that suddenly starts occurring very often?

For instance, I would like to inform technicians about errors (e.g. in IDoc postings), but then, incidentally, all IDocs start to fail, so thousands of messages (or many more) must be sent. By the way, IDocs are normally posted in parallel, so this must also be considered.

So my questions are:

How can I ensure that APC/AMC works under huge load?

Are there any best practices or references?

What counts as huge load in this case?

The question is mainly to gurus – Olga and Masoud, but any hint would be appreciated.

I am posting this as a separate question, as the topic is quite important in my opinion.

Thanks, Dima

1 ACCEPTED SOLUTION

masoud_aghadavoodijolfaei
Active Participant
0 Kudos

Hi Dima,

There is a limit on the number of AMC channels bound to WebSocket connections, which can be adjusted via a profile parameter, but for pushing messages to the UI (the WebSocket client) there is (more or less) no limit on the number of messages transferred to the client. Our tests, partly based on LoadRunner, show that it is very hard to reach those (theoretical) limits. Just try to simulate the "heavy load" situation in your development system, and if you see any issues, e.g. dropped messages, let us know.

We have a large number of unit tests which check the consistency of the ABAP Channels infrastructure. As a developer, you also have to test your software as thoroughly as possible, from different perspectives, including load tests. By increasing the number of AMC/APC producers you can generate more messages per second on the connection, but you have to ensure that the consumer(s), e.g. the browser, can handle/consume those messages as well. My observation was that if you send more than 100 messages per millisecond to browsers, you may observe issues in handling those messages. Of course, this depends on the JavaScript code, the underlying library, and the garbage collector.
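For illustration, such a backend producer can be sketched roughly as below (a minimal sketch only: the application ID ZAMC_IDOC_ERRORS and channel /errors are assumed names for an AMC application maintained in transaction SAMC, not from this thread):

```abap
" Sketch: 'ZAMC_IDOC_ERRORS' and '/errors' are assumed names.
DATA(lv_idoc_number) = '0000000815'.  " example payload
TRY.
    DATA(lo_producer) = CAST if_amc_message_producer_text(
      cl_amc_channel_manager=>create_message_producer(
        i_application_id = 'ZAMC_IDOC_ERRORS'
        i_channel_id     = '/errors' ) ).
    " Push one text message to all bound consumers/WebSocket clients
    lo_producer->send( i_message = |IDoc { lv_idoc_number } failed| ).
  CATCH cx_amc_error INTO DATA(lx_amc).
    " e.g. channel not configured or channel limit reached
    MESSAGE lx_amc->get_text( ) TYPE 'I'.
ENDTRY.
```

Each parallel work process running such a producer multiplies the message rate on the channel, which is what makes the consumer side the bottleneck.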

Cheers,

Masoud

5 REPLIES 5


0 Kudos

Hello Masoud,

Real-life example: we have 6,000,000 stock updates in MSEG per day. We have 300 plants, and we are thinking about having 300 channels (not easy to handle manually), as the data must be sent per plant. This works out to 20,000 updates per plant per day, or 14 updates per minute on average. But, in fact, we so far have no clue how this is distributed over time. (We will definitely check it.)

Therefore, it is very likely that we will have more (maybe much more) than 1 update per second.

Unfortunately, our front-end Fiori application is not capable of handling updates that fast (e.g. 1 Hz at most).

So the question would be: how would you develop the back-end logic in this case?

The idea is to:

1. Delay processing of the messages;

2. Queue the messages;

3. Collect the messages;

4. Send a collective message update.

Background job is not an option.

Do you have an idea how all of the above can be achieved?

Could we perhaps use qRFC with the plant ID as the queue name?
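The qRFC idea could be sketched like this (the function module Z_SEND_PLANT_UPDATE is a hypothetical RFC-enabled module; registering the plant in the queue name serializes all updates of one plant):

```abap
" Sketch: Z_SEND_PLANT_UPDATE is a hypothetical RFC-enabled function module.
DATA(lv_werks) = '1000'.               " example plant
DATA(lv_qname) = |PLANT_{ lv_werks }|. " one queue per plant

" Register the queue name for the subsequent qRFC call
CALL FUNCTION 'TRFC_SET_QUEUE_NAME'
  EXPORTING
    qname = lv_qname.

" Recorded as an outbound queued RFC unit, executed in order per queue
CALL FUNCTION 'Z_SEND_PLANT_UPDATE' IN BACKGROUND TASK
  AS SEPARATE UNIT
  EXPORTING
    iv_werks = lv_werks.

COMMIT WORK.  " releases the LUW, i.e. the queue entry
```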

Thanks and Regards,

Dima

0 Kudos

Hi Dima,

The scenario would very likely create an issue on the FE side (Fiori), which has to be tested. In that case, I would recommend establishing a "daemon", e.g. an asynchronous RFC session or a stateful APC session in the backend system (ABAP engine), which acts as a consumer of the messages and performs the proper aggregation of the data to be sent to the WebSocket clients. Additionally, you should plan a batch job as a watchdog to check the availability/health of the "daemon" session.

Batch jobs, qRFC and bgRFC as alternatives are - from my point of view - not options, as they do not provide any guarantee regarding the execution time of the tasks.
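Such a daemon session could look roughly like the sketch below (the application ID ZAMC_STOCK_RAW is an assumed name, and go_receiver is an instance of a hypothetical class implementing IF_AMC_MESSAGE_RECEIVER_TEXT that buffers incoming messages; its has_messages/flush_aggregate methods are stand-ins):

```abap
" Sketch: the session subscribes to the raw update channel and pushes
" one aggregated message per interval instead of forwarding every update.
DATA(lo_consumer) = cl_amc_channel_manager=>create_message_consumer(
  i_application_id = 'ZAMC_STOCK_RAW'
  i_channel_id     = '/updates' ).
lo_consumer->start_message_delivery( i_receiver = go_receiver ).

WHILE gv_daemon_active = abap_true.
  " Suspend the session until messages arrive, at most 5 seconds
  WAIT FOR MESSAGING CHANNELS
    UNTIL go_receiver->has_messages( ) = abap_true UP TO 5 SECONDS.
  " Aggregate everything buffered so far and push one update per plant
  go_receiver->flush_aggregate( ).
ENDWHILE.
```

The WAIT FOR MESSAGING CHANNELS statement is what lets the session sleep instead of polling, so the aggregation interval (here 5 seconds) caps the push rate towards the Fiori clients.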

Cheers,

Masoud

0 Kudos

Hello Masoud,

Thanks for the proposal.

We will investigate our case further and will try different options.

I will let you know.

Thanks and Regards,

Dima

0 Kudos

Hello Masoud,

I have tested the following scenario using qRFC.

1. I check how many entries there are in TRFCQOUT for my channel.

- If more than 2, I do nothing.

- If not, I trigger an outbound qRFC, i.e. generate one entry.

2. Inside this qRFC, I first wait some (5) seconds.

3. Then I read the last update timestamp (field update_start) from my own Z-table.

4. Then I update field update_start with the new timestamp and commit.

5. Next, I extract the delta for the data (now minus the last update from step 3).

6. Then I prepare the data and send it to the front-end.

7. Finally, I update the finish timestamp (field update_finish).

(One can see that this field is probably not used yet, but it will be useful when you need to identify failed channel updates. In that case: update_finish < update_start.)
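The steps above could be sketched as a qRFC function module roughly as follows (the Z-table ZAPC_LOG with fields channel/update_start/update_finish is my stand-in for the Z-table described in the post; delta extraction and the actual push are application-specific and omitted):

```abap
FUNCTION z_apc_collective_update.
  " Step 2: wait so that further updates can accumulate
  WAIT UP TO 5 SECONDS.

  " Step 3: read the last update timestamp from the Z-table
  SELECT SINGLE update_start FROM zapc_log
    WHERE channel = 'PLANT_1000'
    INTO @DATA(lv_last_start).

  " Step 4: claim this run with a new timestamp and commit
  GET TIME STAMP FIELD DATA(lv_start).
  UPDATE zapc_log SET update_start = @lv_start
    WHERE channel = 'PLANT_1000'.
  COMMIT WORK.

  " Steps 5-6: extract the delta since lv_last_start, prepare the data
  " and send it to the front-end (omitted here)

  " Step 7: record completion; update_finish < update_start afterwards
  " indicates a failed channel update
  GET TIME STAMP FIELD DATA(lv_finish).
  UPDATE zapc_log SET update_finish = @lv_finish
    WHERE channel = 'PLANT_1000'.
  COMMIT WORK.
ENDFUNCTION.
```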

It seems to work OK. Why do you think that qRFC does not provide a guarantee regarding the execution time of the tasks? In my case, the update will be delivered in 10 seconds at most, even if there were 1000 updates in between...

And another question: I used my own Z-table to store the information about the APC requests; do we perhaps have something similar in the standard?

Thanks, Dima