Former Member
Oct 28, 2012 at 11:09 AM

Performance problems during parallel processing


Hello,

Since the migration of our financial services system (FS-CD) from AIX/Oracle to RHEL 6/DB2/vSphere 4.1 two weeks ago, we have been facing massive performance problems during parallel processing in the system.

We can now reproduce this behavior by running an SGEN in other systems in our new DB2 LUW landscape:

The higher the degree of parallelization, the lower the CPU usage!

An SGEN of 44,380 objects with 17 parallel jobs in a sandbox system with 8 vCPUs reaches a maximum CPU utilization of just 42%.

An SGEN in the same system with 6 parallel jobs leads to a CPU utilization of about 70% (further analysis over longer periods still needs to be performed).

The same behavior occurs in another quality assurance system (BW, 4 vCPUs):
SGEN 13x parallel: max. 50% CPU usage.
SGEN 5x parallel: max. 80% CPU usage.
Throughput in objects per second is nearly the same whether we run 5 or 13 parallel jobs.
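To make the plateau on the BW system concrete, here is a small back-of-the-envelope calculation (just an illustrative sketch; `effective_cores` is a hypothetical helper, not an SAP or DB2 function): effective CPU capacity consumed is utilization times vCPU count, so when utilization drops as jobs are added, each job is doing less and less work.

```python
def effective_cores(vcpus, cpu_util):
    """CPU capacity actually consumed, in core-equivalents."""
    return vcpus * cpu_util

# Observed figures from the BW quality assurance system (4 vCPUs):
# parallel jobs -> observed max CPU utilization
runs = {5: 0.80, 13: 0.50}

for jobs, util in runs.items():
    cores = effective_cores(4, util)
    print(f"{jobs:2d} jobs: {cores:.1f} cores busy, "
          f"{cores / jobs:.2f} core-equivalents per job")
# ->  5 jobs: 3.2 cores busy, 0.64 core-equivalents per job
# -> 13 jobs: 2.0 cores busy, 0.15 core-equivalents per job
```

With identical overall throughput, going from 5 to 13 jobs cuts the work done per job to roughly a quarter, which points to the jobs serializing on some shared resource rather than running out of CPU.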

Important:
In last year's proof of concept for the platform change, this behavior didn't occur!
Back then, a 15x parallel SGEN led to a nearly constant CPU utilization of 70-80%.

Differences between last year's PoC and the current production (apart from altered DB2 parameter recommendations by SAP):
PoC: DB2 9.7 FP3 / FP4, RHEL 5.7
Current production: DB2 9.7 FP5, RHEL 6.1

So far we have found no explanation on the VMware, DB2, or Linux side that leads to a significant improvement in processing times. Can anybody confirm, or even reproduce, this behavior during parallel processing in their own system landscape?
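For anyone who wants to compare numbers, a minimal sketch of how we sample overall CPU utilization during an SGEN run (assumes Linux and the standard `/proc/stat` layout; any sampler such as `sar` or `vmstat` would do equally well):

```python
import time

def cpu_times():
    """Read aggregate CPU counters (in jiffies) from /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    # Field layout: user nice system idle iowait irq softirq ...
    idle = int(fields[3]) + int(fields[4])  # idle + iowait
    total = sum(int(x) for x in fields)
    return idle, total

def cpu_utilization(interval=1.0):
    """Fraction of total CPU capacity busy over the sampling interval."""
    idle1, total1 = cpu_times()
    time.sleep(interval)
    idle2, total2 = cpu_times()
    delta_total = total2 - total1
    if delta_total == 0:
        return 0.0
    busy = delta_total - (idle2 - idle1)
    return busy / delta_total

if __name__ == "__main__":
    # Sample once per second while the SGEN jobs run.
    while True:
        print(f"CPU busy: {cpu_utilization():.0%}")
```

Comparing these samples for low vs. high parallelization should show the same drop we see (e.g. ~80% at 5 jobs vs. ~50% at 13 jobs).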


Kind regards,
André Richter