
Increase CPU usage during Upgrade

Former Member
0 Kudos

Hello All,

We are running an upgrade from 4.7 to ECC 6. We have a very good hardware configuration with 8 cores (each core has 8 threads, so virtually 64 CPUs), but the CPU usage is still only between 2% and 5%. Are there any parameters that could increase the CPU usage?

We are running on Solaris 10 and Oracle 10g.

Kindly help

Regards,

Anand

Accepted Solutions (0)

Answers (4)

Former Member
0 Kudos

Thanks all for the responses

markus_doehr2
Active Contributor
0 Kudos

> We are running an upgrade from 4.7 to ECC 6. We have a very good hardware configuration with 8 cores (each core has 8 threads, so virtually 64 CPUs), but the CPU usage is still only between 2% and 5%. Are there any parameters that could increase the CPU usage?

What machine is that? If it's a T-series server, then that behaviour is expected.

See the thread

Markus

Former Member
0 Kudos

Hello All,

Thanks for your reply

Markus,

Yes, we are running a T-series server, and I am worried because we need to meet the downtime window. The total database size is 1.2 TB, and we need to complete the upgrade and Unicode conversion within 3 days; the downtime phase alone is taking nearly 24 hours. We are using the migration monitor for the system copy, and we have followed SAP Note 936441 for the Oracle parameters and Note 724713 for the Solaris-specific settings. We set 64 parallel jobs, since our server has 8 cores (8 threads per core). Everything went fine as soon as we started the export monitor: about 12 GB of data was exported per hour. But later it dropped drastically, and after 10 hours it exports only 1 GB every 3 hours. Are we missing any settings or configuration for minimizing downtime during the downtime phase and export/import?
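
For reference, the relevant part of our export_monitor_cmd.properties looks roughly like this (parameter names as I remember them from the Migration Monitor guide, values illustrative):

    # export_monitor_cmd.properties (extract, illustrative values)
    exportDirs=/export/dump      # directory for the R3load dump files
    installDir=/export/migmon    # Migration Monitor working directory
    ddlFile=DDLORA.TPL           # DDL template for Oracle
    jobNum=64                    # number of parallel R3load export jobs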

Regards,

Anand

markus_doehr2
Active Contributor
0 Kudos

> Yes, we are running a T-series server, and I am worried because we need to meet the downtime window.

T-series servers are not suited for single-threaded (and therefore ABAP) applications. Effectively you will get the speed of 8 single CPUs (if those CPUs are on 8 dies); if they are dual/quad-core CPUs you will get even worse performance. Read the link I posted before; the symptoms are the same there.
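
You can see this directly on the box with standard Solaris tools (nothing SAP-specific, output omitted here):

    psrinfo -pv    # how many physical processors / virtual CPUs there really are
    prstat -mL     # per-thread view: a single R3load or R3trans process can be
                   # saturating one hardware thread while the box still reports
                   # only a few percent overall CPU usage

That is also why no profile parameter will push the overall percentage up much: each individual process already runs as fast as one slow hardware thread allows.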

> Everything went fine as soon as we started the export monitor: about 12 GB of data was exported per hour. But later it dropped drastically, and after 10 hours it exports only 1 GB every 3 hours. Are we missing any settings or configuration for minimizing downtime during the downtime phase and export/import?

Did you estimate the export time beforehand, and did you split up the biggest tables themselves so that you get a parallel unload of the same table?

Markus

Former Member
0 Kudos

Hello Markus,

Yes, we have split the 30 largest tables and they were split successfully. But later the export slows down, and for the last six tables it is taking nearly 6 hours. The tables are as follows:

CDCLS

FMIFIIT

COSB

BSAS

MLKEPH

COSD

For future runs we have decided to include these tables in the table splitting as well. But these tables are not in the list of the 50 largest tables, and they still take more time. We are wondering how this is happening.

Also, for server performance we are checking with the hardware vendor for options, if any. Please advise.

Regards,

Anand

markus_doehr2
Active Contributor
0 Kudos

>

> Yes, we have split the 30 largest tables and they were split successfully. But later the export slows down, and for the last six tables it is taking nearly 6 hours. The tables are as follows:

>

> CDCLS

> FMIFIIT

> COSB

> BSAS

> MLKEPH

> COSD

What I was referring to is not splitting them into separate packages, but splitting up the export of the table itself using R3ta. Did you do that?
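
Roughly (syntax from memory, so check the system copy guide): you give R3ta a list with the table name and the number of pieces, it generates the *.WHR files with the WHERE conditions, and several R3load jobs can then unload the same table in parallel. The input file looks like this (piece counts illustrative):

    # one line per table: <table name>%<number of pieces>
    CDCLS%10
    BSAS%10
    FMIFIIT%8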

> For future runs we have decided to include these tables in the table splitting as well. But these tables are not in the list of the 50 largest tables, and they still take more time. We are wondering how this is happening.

Which 50 tables? The 50 biggest?

> Also, for server performance we are checking with the hardware vendor for options, if any. Please advise.

I'd change the hardware to a non-CoolThreads server (so no T-series).

Markus

sunny_pahuja2
Active Contributor
0 Kudos

Hi,

It depends on which strategy you have chosen in the configuration phase of your upgrade. If you have chosen the standard resource use strategy, then this is normal.

If you have sufficient resources, you can use the high resource use strategy.

But it is better to use manual selection of parameters, where you can define the different processes (such as the number of R3trans processes) depending on how many resources you have, such as RAM and CPU.
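
To decide how many processes the machine can afford, first check what it really has; on Solaris 10, for example:

    psrinfo -p               # number of physical processors
    psrinfo | wc -l          # number of virtual CPUs (hardware threads)
    prtconf | grep Memory    # installed RAM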

Thanks

Sunny

Former Member
0 Kudos

Hi,

How many parallel processes did you specify during PREPARE and at the beginning of the downtime?

If you left it at 3 (the default), then this is normal. It also depends on which phase your upgrade is in at the time you measure the CPU consumption, because many of the phases are bound by physical disk I/O, and the CPU won't do much during those phases.
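
A quick way to check this while a slow phase is running (standard Solaris tools, nothing upgrade-specific):

    mpstat 10        # per-CPU usage: is any CPU actually busy?
    iostat -xn 10    # per-disk I/O: high %b and long service times mean the
                     # disks, not the CPUs, are the bottleneck

If the disks are saturated while the CPUs sit idle, raising the number of parallel processes will not shorten that phase.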