11-23-2007 10:02 AM
Hi Folks,
I have one report which processes some data (let's call it Rep1). Now I have created another report (Rep2). Rep2 reads data from the database, then divides it into 10 chunks of the same size. One by one, it exports each chunk to a cluster table (using EXPORT ... TO DATABASE, with a different object name for each data block) and creates a background job (of Rep1) using JOB_SUBMIT. Inside Rep1 I read from the cluster using IMPORT. The problem is that out of 10 jobs, only 2 or 3 are able to read their data. The others are just failing (saying 'No data found').
What could be the cause? Any tips?
Regards,
Munish
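For reference, the pattern described above might look roughly like the sketch below. This is only an illustration of the described setup, not Munish's actual code; the report name ZREP1, the INDX area ZZ, the DATA01..DATA10 key naming, and the MARA line type are all assumptions.

```abap
* Sketch of the Rep2 side: split the data into 10 chunks, export each
* chunk to the INDX cluster table, and submit one background job per chunk.
DATA: lt_chunk TYPE STANDARD TABLE OF mara,
      lv_num   TYPE n LENGTH 2,
      lv_id    TYPE indx-srtfd,
      lv_job   TYPE tbtcjob-jobname,
      lv_count TYPE tbtcjob-jobcount.

DO 10 TIMES.
  "... fill lt_chunk with the sy-index-th slice of the data ...
  lv_num = sy-index.
  CONCATENATE 'DATA' lv_num INTO lv_id.          " e.g. DATA01 .. DATA10
  EXPORT lt_chunk TO DATABASE indx(zz) ID lv_id. " write chunk to cluster

  CONCATENATE 'REP1_' lv_num INTO lv_job.
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_job
    IMPORTING
      jobcount = lv_count.
  CALL FUNCTION 'JOB_SUBMIT'
    EXPORTING
      authcknam = sy-uname
      jobcount  = lv_count
      jobname   = lv_job
      report    = 'ZREP1'.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_count
      jobname   = lv_job
      strtimmed = 'X'.    " start immediately
ENDDO.
```

In this shape, each job would then need to know which ID key to IMPORT from, for example via a variant or a parameter.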
11-29-2007 12:12 PM
Check your data in debug mode before the EXPORT statement.
See if chunks 4 - 10 contain any data.
11-29-2007 12:16 PM
Hi Kris,
The EXPORT works perfectly; sy-subrc is zero afterwards.
And across different runs, a different number of jobs fail. It's not always the last ones: sometimes the first few, sometimes random ones.
Regards,
Munish
11-29-2007 12:20 PM
It may be exporting an empty table, and that could cause the other programs to fail (depending on the logic there).
Also check whether a DELETE statement in one of the programs is clearing all the data.
11-29-2007 12:29 PM
Yes, that's the exact problem: the second report is unable to read the data from memory (exported by the first report). The imported table is blank.
There is no DELETE statement for the database memory.
11-29-2007 12:33 PM
There is a limit on the amount of data you can store in ABAP memory and SAP memory; check those per-user limits.
Also check the number of external sessions granted by your Basis team.
It may be that only 3-4 sessions are allowed, because each background process creates an external session.
11-29-2007 1:37 PM
Hi,
I suppose that memory cannot be read if the writing job is running on application server A and the reading job on server B.
Regards, Marina
11-30-2007 9:48 AM
Hi Folks,
The problem is solved. The entries in the cluster table were being overwritten. I simply used separate keys for the different sets (even though the sets already had different object names).
Thanks.
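For anyone hitting the same issue, the fix Munish describes might look roughly like this: give each chunk its own value in the cluster ID key field, so that a later EXPORT cannot overwrite an earlier chunk's row, and have each job IMPORT with the matching key. The area ZZ, the CHUNK01..CHUNK10 naming, and the MARA line type are assumptions for illustration.

```abap
DATA: lt_chunk TYPE STANDARD TABLE OF mara,
      lv_num   TYPE n LENGTH 2,
      lv_id    TYPE indx-srtfd.

* Writing side (Rep2): a distinct cluster key per chunk.
lv_num = sy-index.
CONCATENATE 'CHUNK' lv_num INTO lv_id.         " CHUNK01 .. CHUNK10
EXPORT lt_chunk TO DATABASE indx(zz) ID lv_id.

* Reading side (Rep1): import with the same key and check sy-subrc.
IMPORT lt_chunk FROM DATABASE indx(zz) ID lv_id.
IF sy-subrc <> 0.
  MESSAGE 'No data found for this chunk key' TYPE 'E'.
ENDIF.
```

If every job writes under the same key (or the keys collide), each EXPORT replaces the previous row, which matches the symptom in this thread: only the jobs that happened to read before their data was overwritten succeeded.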