07-23-2008 2:11 PM
Hi all,
I have a problem with a program that uses the Shared Objects technology.
We have a job, scheduled with 18 parallel processes, and each one writes into the shared memory area controlled by an area class (SHMA).
When the jobs end, a program reads the content of the area and sends an automatic e-mail with the results.
Everything works well if the writer program is executed online.
In background, however, it seems that nothing is stored in the shared memory area.
Here's the code executed by the writer program:
FORM shared_memory_access TABLES it_fehler STRUCTURE rpfausg.
  DATA: errors_reference TYPE REF TO data.
  DATA: lx_pterl00       TYPE REF TO zcx_pterl00_collector.

  TRY.
*     --> Get SHM access
      CALL METHOD zcl_pterl00_collector_root=>build
        EXPORTING
          invocation_mode = cl_shm_area=>invocation_mode_explicit.
*     --> Is it OK?
      IF zcl_pterl00_collector_root=>area_exists EQ 'X'.
*       --> Fill data:
        GET REFERENCE OF it_fehler[] INTO errors_reference.
        CALL METHOD zcl_pterl00_collector_root=>fill_area_with_data
          EXPORTING
            error_messages_dref = errors_reference.
      ENDIF.
    CATCH zcx_pterl00_collector INTO lx_pterl00.
      MESSAGE lx_pterl00 TYPE 'S' DISPLAY LIKE 'E'. "Non-blocking -> jobs
  ENDTRY.
ENDFORM.                    " SHARED_MEMORY_ACCESS
Here is the part of the class that handles attaching to the shared memory area (SHMA):
METHOD if_shm_build_instance~build.
  DATA: lx_collector TYPE REF TO zcx_pterl00_collector.

* --> Automatic building of the instance:
  TRY.
      CALL METHOD get_handle_for_update( inst_name ).

    CATCH zcx_pterl00_collector INTO lx_collector.
      MESSAGE lx_collector TYPE 'X'.

    CATCH cx_shm_no_active_version.
      TRY.
          CALL METHOD get_handle_for_write( inst_name ).
        CATCH zcx_pterl00_collector INTO lx_collector.
          MESSAGE lx_collector TYPE 'X'.
      ENDTRY.

    CATCH cx_shm_inconsistent.
      zcl_pterl00_collector=>free_area( ).
      TRY.
          CALL METHOD get_handle_for_write( inst_name ).
        CATCH zcx_pterl00_collector INTO lx_collector.
          MESSAGE lx_collector TYPE 'X'.
      ENDTRY.
  ENDTRY.
ENDMETHOD.
I cannot explain why multiple jobs do not populate the area...
07-29-2008 1:12 PM
Hi Rob,
it is a bit hard to be sure, but I suspect the problem is the way you are connecting to the shared memory area.
A shared memory instance can have only one writer at a time. This immediately suggests that running 18 parallel processes that all need to write to the same shared memory instance will not work.
Your build method tries to attach to the shared object for update. If that fails, it attaches for write. So if the shared object is already attached for update by one process, a second process will try to attach for write. If you have versioning turned on, this will create a new instance of the shared object; if not, the attach for write will fail.
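In rough terms, and assuming your generated area class ZCL_PTERL00_COLLECTOR exposes the standard ATTACH_FOR_UPDATE / ATTACH_FOR_WRITE methods, the situation for a second writer looks like this (a sketch only, not your exact code):

```abap
* Sketch - method names assume the standard generated SHMA pattern.
DATA: lo_handle TYPE REF TO zcl_pterl00_collector.

TRY.
    lo_handle = zcl_pterl00_collector=>attach_for_update( ).
  CATCH cx_shm_change_lock_active.
*   Another job already holds the change lock.
    TRY.
        lo_handle = zcl_pterl00_collector=>attach_for_write( ).
*       With versioning ON this builds a NEW version of the area,
*       separate from the one the first writer is still committing.
      CATCH cx_shm_attach_error.
*       With versioning OFF the attach for write simply fails here.
    ENDTRY.
ENDTRY.
```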
Cheers
Graham Robbo
07-30-2008 10:50 AM
Hi Graham,
thank you for your answer!
Now that area versioning is active, I've added a little piece of code to the get_handle_for_update method:
* Catch exceptions:
  CATCH cx_shm_change_lock_active INTO lcx_shm_change_lock_active.
*   --> Active change lock: wait until the writer has detached
*       (busy loop - spins until the lock is released)
    DO.
      IF area_handle->get_lock_kind( ) EQ cl_shm_area=>lock_kind_detached.
        EXIT.
      ENDIF.
    ENDDO.
    CALL METHOD if_shm_build_instance~build( inst_name = instance_name ).

* --> Other exceptions:
  CATCH cx_shm_version_limit_exceeded
        cx_shm_exclusive_lock_active
        cx_shm_parameter_error.
*   Trigger one generic exception:
    RAISE EXCEPTION TYPE zcx_pterl00_collector
      EXPORTING
        textid = zcx_pterl00_collector=>error_in_generation.
So the calling process will wait for the one that is writing (that is, for the detach_commit( ) call and the consolidation of the new area version).
What do you think about this? I know it is a "big hammer" approach, but maybe it is useful...
Thank you again,
Rob.
07-30-2008 11:21 PM
Hi Rob,
if your requirement is to have many (18) active processes all updating the shared object, and very few simply reading the shared object, then versioning is probably not what you require.
Versioning allows readers to continue to attach to and read the active shared object instance while the updater gets their own instance of the shared object. When the updater does a detach_commit, the old instance becomes obsolete and all new attach requests are directed to the new instance. The old instance is cleaned up by garbage collection once all of its readers have detached.
If your programs primarily attach for update then you will decrease performance with versioning because a new instance needs to be created at every attach for update.
Perhaps you should just retry the attach for update after a small period of time has passed?
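That retry could look like the sketch below - again assuming the standard generated ATTACH_FOR_UPDATE method and the handle's DETACH_COMMIT:

```abap
DATA: lo_handle TYPE REF TO zcl_pterl00_collector.

* Try for up to ~20 seconds, yielding the work process between attempts
DO 20 TIMES.
  TRY.
      lo_handle = zcl_pterl00_collector=>attach_for_update( ).
      EXIT.                        "change lock obtained
    CATCH cx_shm_change_lock_active.
      WAIT UP TO 1 SECONDS.        "another writer is active
  ENDTRY.
ENDDO.

IF lo_handle IS BOUND.
* ... update the root data object here ...
  lo_handle->detach_commit( ).
ENDIF.
```

Note that WAIT UP TO triggers an implicit database commit, so keep it outside any open update sequence.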
If, on the other hand, you do have lots of other readers of the shared object, you may well still find it more efficient not to use versioning. I built a web shop catalogue using shared objects and found that versioning severely hampered performance. This was because, once the catalogue was initialised, updaters were pretty rare but readers were constant.
BTW, make sure you keep the locks on the object as short as possible. Do all your preparation work first, then attach for update, update, and detach as quickly as possible.
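In other words, the pattern per job would be something like this (SET_MESSAGES is a hypothetical method on your root class, shown only to illustrate the short lock window):

```abap
DATA: lo_handle   TYPE REF TO zcl_pterl00_collector,
      lt_prepared TYPE STANDARD TABLE OF rpfausg.

* 1. Do all the expensive preparation with no lock held
* ... fill lt_prepared ...

* 2. Then lock, copy, and commit in one short burst
lo_handle = zcl_pterl00_collector=>attach_for_update( ).
lo_handle->root->set_messages( lt_prepared ).  "hypothetical root method
lo_handle->detach_commit( ).
```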
Cheers
Graham Robbo
09-22-2008 2:44 AM
Hello guys,
how do we manage Enterprise shared objects in SAP BI when multiple projects with multiple technologies (mainframes, ETL, SAP BI) are involved?
Thanks
GR