on 08-22-2012 3:31 AM
Hello-
Our portal box crashed with the errors below:
[Thr 5659] Mon Aug 20 14:23:09 2012
[Thr 5659] *** WARNING => IcmReadFromConn(id=32/208743): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 1543] Mon Aug 20 14:23:12 2012
[Thr 1543] *** ERROR => HttpJ2EETriggerServer: alloc failed: out of MPI blocks [http_j2ee2_m 1646]
[Thr 3342] Mon Aug 20 14:23:18 2012
[Thr 3342] *** WARNING => IcmReadFromConn(id=44/208745): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 4372] *** WARNING => IcmReadFromConn(id=20/208746): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 5145] Mon Aug 20 14:23:19 2012
[Thr 5145] *** WARNING => IcmReadFromConn(id=11/208747): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 3085] Mon Aug 20 14:23:20 2012
[Thr 3085] *** WARNING => IcmReadFromConn(id=31/208748): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 5402] Mon Aug 20 14:23:22 2012
[Thr 5402] *** WARNING => IcmReadFromConn(id=25/208749): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 1800] Mon Aug 20 14:23:23 2012
[Thr 1800] *** ERROR => HttpJ2EETriggerServer: alloc failed: out of MPI blocks [http_j2ee2_m 1646]
[Thr 2828] Mon Aug 20 14:23:28 2012
[Thr 2828] *** WARNING => IcmReadFromConn(id=26/208751): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 4886] Mon Aug 20 14:23:32 2012
[Thr 4886] *** WARNING => IcmReadFromConn(id=16/208752): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 2571] Mon Aug 20 14:23:34 2012
[Thr 2571] *** ERROR => HttpJ2EETriggerServer: alloc failed: out of MPI blocks [http_j2ee2_m 1646]
[Thr 2828] Mon Aug 20 14:23:35 2012
[Thr 2828] *** WARNING => IcmReadFromConn(id=26/208751): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 3342] *** WARNING => IcmReadFromConn(id=25/208749): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 1286] *** WARNING => IcmReadFromConn(id=31/208748): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 3085] *** WARNING => IcmReadFromConn(id=11/208747): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 4372] *** WARNING => IcmReadFromConn(id=20/208746): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 5145] *** WARNING => IcmReadFromConn(id=44/208745): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
[Thr 5402] *** WARNING => IcmReadFromConn(id=32/208743): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2658]
I found OSS Notes 737625 and 715400, which are relevant, but I am not quite confident about which parameters to change and by how much their values should be increased. I understand that mpi/total_size_MB needs to be increased.
Also, for the AIX sidadm user, ulimit currently shows "nofiles(descriptors) 2000" — to what value should this be raised?
Please suggest all the parameters/sizes that need to be changed. We have around 6,000 users who use the system heavily.
Our default values:
rdisp/elem_per_queue = 4000
mpi/total_size_MB = 500
mpi/buffer_size = 65536
mpi/max_pipes = 16000
Thank you!
Hi!
I had this problem on an SAP Portal and, in my case, it was because we had exceeded the maximum number of processes in the database. We increased that limit and restarted both the portal and the database.
Regards!
Perla
Hello Siri,
thanks a lot for updating the thread.
We are also thinking about setting mpi/total_size_MB to a higher value.
I will update you about the success 🙂
Best regards,
Birgit
Hello Siri,
we are currently facing the same issue in our NW 7.30 SP9 Portal.
Did you find a solution?
You would help us a lot!
Thanks and best regards,
Birgit
Hello Siri,
we have set the values as recommended in SAP note 737625 to the following values:
icm/max_conn = 20000
icm/req_queue_len = 6000
icm/min_threads = 100
icm/max_threads = 500
mpi/total_size_MB = 500
mpi/max_pipes = 45000
icm/max_sockets = 22500
icm[J2EE]/enable_icmadm = true
icm/keep_alive_timeout = 80
Did a parameter change in these (or other) settings solve your problem? Could you identify the root cause?
Best regards,
Birgit
Birgit-
The mpi/total_size_MB and Connection Keep Alive timeout values are dependent on the concurrent users and other values. Here's how I went about fixing this issue:
______________________________________________________________________
The two parameters that most examples/best practices change are mpi/total_size_MB and the connection keep-alive timeout.
We first need to calculate concurrent_conn in order to derive mpi/total_size_MB. These are the two formulas used (values like req_per_dialog_step and thinktime_per_diastep_sec are similar across best practices and customer examples):
concurrent_conn = (users * req_per_dialog_step * conn_keepalive_sec) / thinktime_per_diastep_sec
mpi/total_size_MB = (concurrent_conn * mpi_buffer_size) / (1024 * 1024)
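To make the two formulas concrete, here is a minimal Python sketch. Only the mpi/buffer_size of 65536 and the ~6,000-user figure come from this thread; req_per_dialog_step and thinktime_per_diastep_sec are assumed example values, not measured ones.

```python
# Sketch of the sizing formulas quoted above. mpi_buffer_size = 65536 and
# users = 6000 come from the thread; req_per_dialog_step and
# thinktime_per_diastep_sec are assumed example values.

def concurrent_conn(users, req_per_dialog_step, conn_keepalive_sec,
                    thinktime_per_diastep_sec):
    """Estimated number of simultaneously open HTTP connections."""
    return users * req_per_dialog_step * conn_keepalive_sec / thinktime_per_diastep_sec

def mpi_total_size_mb(conns, mpi_buffer_size=65536):
    """MPI area needed if each concurrent connection holds one buffer."""
    return conns * mpi_buffer_size / (1024 * 1024)

conns = concurrent_conn(users=6000,                    # from the thread
                        req_per_dialog_step=2,         # assumed
                        conn_keepalive_sec=60,         # keep-alive timeout
                        thinktime_per_diastep_sec=30)  # assumed
print(conns)                     # 24000.0
print(mpi_total_size_mb(conns))  # 1500.0 (MB)
```

With these assumed inputs the formula suggests roughly 1.5 GB of MPI space, which is why the real think-time and requests-per-dialog-step figures for your workload matter so much before picking a value.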
The number of concurrent users is an estimate, so I am evaluating a few options based on that.
We are probably maxing out the MPI buffer area at around 150 concurrent users with the current setup.
So based on the derived values here is my analysis:
____________________________________________________________________________
After calculating these variables/values, I increased mpi/total_size_MB from 80 MB to 600 MB.
If the issue had occurred again, the plan was to increase this value further and to consider lowering icm/keep_alive_timeout from 60 sec to 40 or 30 sec.
But the issue did not recur.
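One way to see why lowering icm/keep_alive_timeout also helps: in the concurrent_conn formula quoted earlier in the thread, concurrent connections (and therefore MPI buffer demand) scale linearly with the keep-alive time. A small sketch, where all inputs other than the 60-second timeout are assumed example values:

```python
# Illustrative only: per the concurrent_conn formula earlier in the thread,
# MPI buffer demand scales linearly with the keep-alive timeout. Inputs
# other than the 60 s timeout are assumed example values.

def concurrent_conn(users, req_per_step, keepalive_sec, thinktime_sec):
    return users * req_per_step * keepalive_sec / thinktime_sec

base  = concurrent_conn(6000, 2, 60, 30)  # current icm/keep_alive_timeout
tuned = concurrent_conn(6000, 2, 30, 30)  # proposed lower timeout
print(tuned / base)  # 0.5 -> halves the expected concurrent connections
```

So halving the keep-alive timeout halves the estimated MPI buffer requirement for the same user load, at the cost of reopening connections more often.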
Hope this helps. I have a spreadsheet based on these formulas; let me know if you need it and I can email it to you directly, since I don't see an option to attach a spreadsheet here.
Thanks,
Siri
| User | Count |
|---|---|
| | 84 |
| | 10 |
| | 9 |
| | 8 |
| | 6 |
| | 6 |
| | 6 |
| | 5 |
| | 3 |
| | 3 |