
BODI 11.7.3.8 Failure Running Table Compare as Separate Process

Greetings,

I have created a data flow that performs a table compare, and when I select to run it as a separate process the job fails. Running DI 11.7.3.8 on Solaris:

$ uname -a

SunOS dvdiapp01 5.10 Generic_141414-02 sun4u sparc SUNW,Sun-Fire-V245

$ prtconf | more

System Configuration: Sun Microsystems sun4u

Memory size: 16384 Megabytes

System Peripherals (Software Nodes):

SUNW,Sun-Fire-V245

scsi_vhci, instance #0

packages (driver not attached)

SUNW,builtin-drivers (driver not attached)

deblocker (driver not attached)

disk-label (driver not attached)

terminal-emulator (driver not attached)

dropins (driver not attached)

kbd-translator (driver not attached)

obp-tftp (driver not attached)

SUNW,i2c-ram-device (driver not attached)

SUNW,fru-device (driver not attached)

SUNW,asr (driver not attached)

ufs-file-system (driver not attached)

hsfs-file-system (driver not attached)

chosen (driver not attached)

openprom (driver not attached)

client-services (driver not attached)

options, instance #0

aliases (driver not attached)

memory (driver not attached)

virtual-memory (driver not attached)

SUNW,UltraSPARC-IIIi, instance #0

memory-controller, instance #0

SUNW,UltraSPARC-IIIi, instance #1

memory-controller, instance #1

$ ulimit -a

time(seconds) unlimited

file(blocks) unlimited

data(kbytes) unlimited

stack(kbytes) 8192

coredump(blocks) unlimited

nofiles(descriptors) 1024

vmemory(kbytes) unlimited

(11.7) 07-17-09 09:41:22 (24563:0001) DFCOMM: Starting sub data flow <df_prc_pc_mis_customer_address_dim_new_1_1> on job server host <dvdiapp01>, port <3500>. Distribution level <Job>.

(11.7) 07-17-09 09:41:22 (24563:0001) DFCOMM: Starting sub data flow <df_prc_pc_mis_customer_address_dim_new_1_2> on job server host <dvdiapp01>, port <3500>. Distribution level <Job>.

(11.7) 07-17-09 09:41:22 (24563:0001) DFCOMM: Starting sub data flow <df_prc_pc_mis_customer_address_dim_new_1_3> on job server host <dvdiapp01>, port <3500>. Distribution level <Job>.

(11.7) 07-17-09 09:42:13 (23783:0001) WORKFLOW: Work flow <wf_prc_pc_mis_customer_address_dim_pc> is terminated due to an error <210115>.

(11.7) 07-17-09 09:42:13 (23783:0001) WORKFLOW: Work flow <wf_prc_pc_mis_customer_address_dim> is terminated due to an error <210115>.

From $LINK_DIR/log/errorlog.txt I have the following:

(11.7) 07-17-09 09:17:32 (E) (2204:000) FIL-080101: Cannot open file /opt/bobj/DI/log/Server1/dbometa__msykora/error_07_17_2009_09_14_56_23__d16337cb_3c6d_4bac_9ae0_f92ee944a47f.txt.idx in 'rb' mode. OS error message is:No such file or directory OS error number is:2 al_engine reached 'fileopen_retry_time' limit so exiting

Note that the file above is readable and writeable by the bobj user (a quick check is shown below).
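
A minimal way to verify this, assuming bobj is the account that runs al_engine (the path is copied from the error message above):

$ ls -l /opt/bobj/DI/log/Server1/dbometa__msykora/
$ su - bobj -c "cat /opt/bobj/DI/log/Server1/dbometa__msykora/error_07_17_2009_09_14_56_23__d16337cb_3c6d_4bac_9ae0_f92ee944a47f.txt.idx > /dev/null"

The second command reads the file as bobj and discards the output; if it reports a permission or "No such file" error, the engine would hit the same problem.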

Any help would be greatly appreciated,

M

Edited by: Martin Sykora on Jul 17, 2009 3:53 PM


3 Answers

  • Former Member
    Posted on Jul 17, 2009 at 03:57 PM

    Check whether the file /opt/bobj/DI/log/Server1/dbometa__msykora/error_07_17_2009_09_14_56_23__d16337cb_3c6d_4bac_9ae0_f92ee944a47f.txt.idx exists or not.

    Have you tried increasing the number of file descriptors? (See the sketch below.)
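
    A rough sketch of what that looks like on Solaris, assuming the Job Server is started by the bobj user (the values are examples only; the Job Server must be restarted from a shell with the new limit for it to take effect):

    # check the current soft limit on open file descriptors
    $ ulimit -n
    1024

    # raise the soft limit for this shell, e.g. in the profile that starts the Job Server
    $ ulimit -n 4096

    To raise the limits persistently on Solaris 10, the system-wide defaults can be set in /etc/system (reboot required):

    * example file descriptor limits in /etc/system
    set rlim_fd_cur=4096
    set rlim_fd_max=8192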

    There was an issue related to the same error fixed in 11.7.3.7.

    You can try the following: add the line below to $LINK_DIR/bin/DSConfig.txt in the [AL_Engine] section on the Job Server machine.

    fileopen_retry_time=90

    First try with 90, then with 120. (A sketch of the resulting section is shown below.)
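
    For clarity, the relevant part of DSConfig.txt would then look something like this (a sketch; any existing entries in the section stay as they are):

    [AL_Engine]
    fileopen_retry_time=90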

    Also, are you able to reproduce this error every time, or is it random?

    Are you running data flows in parallel? If so, how many DFs are running in parallel?


  • Former Member
    Posted on Jul 20, 2009 at 05:36 PM

    Check whether the file /opt/bobj/DI/log/Server1/dbometa__msykora/error_07_17_2009_09_14_56_23__d16337cb_3c6d_4bac_9ae0_f92ee944a47f.txt.idx exists or not. => it does exist

    Have you tried increasing the number of file descriptors? => capped at 1024

    There was an issue related to the same error fixed in 11.7.3.7. You can try the following: add the line below to $LINK_DIR/bin/DSConfig.txt in the [AL_Engine] section on the Job Server machine.

    fileopen_retry_time=90

    First try with 90, then with 120.

    => added to DSConfig.txt, but no effect

    Also, are you able to reproduce this error every time, or is it random? => all the time

    Are you running data flows in parallel? => yes

    If so, how many DFs are running in parallel? => 3


    • Former Member

      Setting this parameter to 0 will reset the value to the default, which is 60, so try with 1:

      fileopen_retry_time=1

      I am trying to reproduce the issue, but it is not reproducible in my environment. Are you getting any conversion warnings that are being written to the error file?

  • Former Member
    Posted on Jul 21, 2009 at 12:40 PM

    Tried with =1: same result, and no conversion warnings.

