
Oracle 11.2.0.3 : ORA-00445: background process "W000" did not start after 120 seconds


I regularly have the following error with a recently installed Oracle database:


Process W000 died, see its trace file

Sun Apr 07 00:48:23 2013


Fatal NI connect error 12537, connecting to:



        TNS for Solaris: Version - Production

        Oracle Bequeath NT Protocol Adapter for Solaris: Version - Production


        TCP/IP NT Protocol Adapter for Solaris: Version - Production

  Time: 07-APR-2013 00:48:23

  Tracing not turned on.

  Tns error struct:

    ns main err code: 12537


TNS-12537: TNS:connection closed

    ns secondary err code: 12560

    nt main err code: 0

    nt secondary err code: 0

    nt OS err code: 0

Sun Apr 07 00:49:29 2013

opiodr aborting process unknown ospid (17179) as a result of ORA-609

Sun Apr 07 01:01:14 2013

Errors in file /oracle/xxx/saptrace/diag/rdbms/xxx/xxx/trace/xxx_smco_7937.trc  (incident=85849):

ORA-00445: background process "W000" did not start after 120 seconds

Sun Apr 07 01:03:30 2013

Incident details in: /oracle/xxx/saptrace/diag/rdbms/xxx/xxx/incident/incdir_85849/xxx_smco_7937_i85849.trc

Sun Apr 07 01:17:37 2013

Process 0x480433ce8 appears to be hung while dumping

Current time = 938816049, process death time = 938749879 interval = 60000

Attempting to kill process 0x480433ce8 with OS pid = 7937

OSD kill succeeded for process 480433ce8

Sun Apr 07 01:18:22 2013

Restarting dead background process SMCO

Sun Apr 07 01:18:38 2013

SMCO started with pid=30, OS id=26990


Would you know how to solve this problem?

Thanks in advance for your answer.


7 Answers

  • Best Answer
    Aug 23, 2013 at 08:45 AM


    The problem was related to blocking write issues at the Solaris Level.

    After opening an Oracle Solaris SR, we changed the following kernel parameters:

    • zfs:zfs_arc_max must be lower than the free memory of the machine
      This avoids stalls caused by over-use of memory.
    • zfs:zfs_vdev_max_pending is set to 128
      This increases the IO throughput.
    • zfs:zfs_write_limit_min is set to 2516582400
      This prevents IO writes from being starved, because IO reads have a higher priority.
    • primarycache is set to metadata for the /oracle/SID/sapdata* and /oracle/SID/oraarch filesystems
      This avoids wasting and over-using the ZFS cache.
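These settings would typically be applied through /etc/system (the first three) and `zfs set` (the last one). A minimal sketch, assuming hypothetical dataset names and an example ARC cap; size zfs_arc_max to your own machine:

```shell
# Append to /etc/system (takes effect after a reboot).
# The ARC cap below (4 GB) is an example only and must stay
# below the machine's free memory.
cat >> /etc/system <<'EOF'
set zfs:zfs_arc_max = 0x100000000
set zfs:zfs_vdev_max_pending = 128
set zfs:zfs_write_limit_min = 2516582400
EOF

# Cache only metadata for the Oracle data and archive-log filesystems.
# Dataset names are hypothetical; check "zfs list" for yours.
zfs set primarycache=metadata oraclepool/sapdata1
zfs set primarycache=metadata oraclepool/oraarch
```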



  • Apr 16, 2013 at 02:03 PM


    The error occurs during online db verify.

    Would you have any suggestion on how I could avoid this problem?

    Thanks in advance for your answers.


    • Hello,

      Stefan Koehler wrote:

      The call stack (function order) looks nearly the same, but the two processes are missing in your system state dump. I can imagine two possible reasons for that:

      • Both processes are not attached to the SGA at all
      • Both processes had already died when you performed the system state dump (the runtime of the system state dump was quite large on my system)

      I am surprised by your answer.

      From what I see, the process did not die and seems to be attached to all shared memory segments that belong to the Oracle database:

      # ipcs -m | gawk '/^m / {gsub("^m +[0-9]+",sprintf("m %12#x",$2))} {print}' | grep -i orasid

      m   0x520000a1   0x6ab0fb88 --rw-r-----   orasid      dba

      m   0x520000a0   0x0        --rw-r-----   orasid      dba

      m   0x5200009f   0x0        --rw-r-----   orasid      dba

      # pmap 21524 | grep -i shm

      0000000380000000      32768K rwxsR    [ ism shmid=0x5200009f ]

      00000003C0000000    3178496K rwxsR    [ ism shmid=0x520000a0 ]

      00000004C0000000         16K rwxsR    [ ism shmid=0x520000a1 ]


  • Former Member
    May 25, 2013 at 07:04 AM

    Hi Benoit,

    Please check how much memory is being used by the ZFS filesystem cache. Please run this command:

    mdb -k

    Your memory could be occupied by the ZFS filesystem cache.
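A quick way to inspect this, assuming a Solaris 10/11 system (output categories can vary slightly by release):

```shell
# Kernel memory breakdown; the "ZFS File Data" line shows
# how much RAM the ZFS ARC is holding.
echo "::memstat" | mdb -k

# Current ARC size and its configured ceiling, in bytes.
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
```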




    • Former Member Benoît Schmid

      Hi Benoit,

      In my previous environment, we installed a fresh ECC on AIX with Oracle 10g (NAS storage). From the beginning we noticed slowness during the R3load phase of the installation and when accessing the system. I checked the CPU and memory; both were fine. iostat was normal; there was nothing special except that it was slow.

      But when we ran dbv, the CPU immediately hit 100% with high IO wait, and after a while (around 15-20 minutes) the database crashed. So I complained to the vendor.

      I asked them to check the disk subsystem. They applied an additional OS patch; dbv was still slow but no longer crashed. So they investigated further and found that one of the built-in caches in the NAS storage had not been installed. After installing the cache, dbv took only 30 minutes to finish.

      So, in my case dbv made the issue very obvious, but the symptoms were already there (slowness, IO wait, high CPU).

      Best Regards,


  • Apr 08, 2013 at 06:45 AM

    Hi Benoit,

    A similar issue is addressed here.

    Additionally please check the following configuration

    If Async I/O is used, Oracle may hang with "startup nomount". The following error message is displayed after two minutes:

    ORA-00445: background process "PMON" did not start after 120 seconds

    This problem is due to a missing MLOCK privilege for the dba group and the oper group. First use "getprivgrp dba" and "getprivgrp oper" to check whether this privilege exists. If it does not exist, you can assign it as root either temporarily by using "setprivgrp dba MLOCK"/"setprivgrp oper MLOCK" or permanently by creating the /etc/privgroup file with the following contents:

      dba MLOCK
      oper MLOCK
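The steps from the quoted note can be consolidated into one short root session (HP-UX commands, as in the note itself):

```shell
# Check whether the dba and oper groups already hold the MLOCK privilege
getprivgrp dba
getprivgrp oper

# Grant it for the running system (lost at reboot)...
setprivgrp dba MLOCK
setprivgrp oper MLOCK

# ...or persist it across reboots (creates /etc/privgroup if needed)
printf 'dba MLOCK\noper MLOCK\n' >> /etc/privgroup
```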

    If the above solution is not suitable, please share the following log: /oracle/xxx/saptrace/diag/rdbms/xxx/xxx/trace/xxx_smco_7937.trc


    Deepak Kori


    • Hello Deepak,

      Deepak Kori wrote:

      Hi Benoit,

      Check whether the issue happens at peak time.

      Execute select * from v$resource_limit; to see if any limit is reached.


      Deepak Kori

      Unfortunately, the error occurred during the weekend, after the offline backup.
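For reference, that v$resource_limit check can be scripted; a minimal sketch, assuming a local SYSDBA login on the database host:

```shell
# Dump all resource limits with their current and high-water usage;
# compare max_utilization against limit_value to spot exhausted limits.
sqlplus -s '/ as sysdba' <<'EOF'
SET LINESIZE 120 PAGESIZE 100
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit;
EOF
```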


  • May 24, 2013 at 08:32 AM

    Hi Benoit

    Have you tried the following solutions?

    I have also attached the Oracle note suggested in the link provided by Deepak.

    Best Regards



  • Former Member
    May 24, 2013 at 11:00 AM

    How much CPU, RAM, and swap have you allocated?


  • May 24, 2013 at 11:57 PM

    Hello Benoit,

    What value do you have in the environment variables and listener.ora for ORACLE_HOME?


    Eduardo Rezende


    • We observed some issues with customers that upgraded from Oracle to and had different values in the environment and in listener.ora, even though the alias pointed to the correct location.


      /oracle/MYSID/112_64 pointing to /oracle/MYSID/11203

      env: /oracle/MYSID/11203

      listener.ora: /oracle/MYSID/112_64
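One way to catch such a mismatch is to compare the environment against listener.ora directly. A minimal sketch; the listener.ora location and the awk extraction of its first ORACLE_HOME entry are assumptions about your layout:

```shell
#!/bin/sh
# Compare $ORACLE_HOME from the environment with the first ORACLE_HOME
# entry found in listener.ora (file location is an assumption).
LSNR_ORA="${ORACLE_HOME}/network/admin/listener.ora"
home_in_listener=$(awk -F= '/ORACLE_HOME/ {gsub(/[ ()]/, "", $2); print $2; exit}' "$LSNR_ORA")
if [ "$home_in_listener" != "$ORACLE_HOME" ]; then
    echo "MISMATCH: env=$ORACLE_HOME listener.ora=$home_in_listener"
else
    echo "OK: $ORACLE_HOME"
fi
```

Note that symlinked homes (such as /oracle/MYSID/112_64 pointing at /oracle/MYSID/11203) compare as different strings here even when they resolve to the same directory, which is exactly the inconsistency described above.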