
Open Cursor with hold issue


Hello Experts,

I have a very strange issue. I have one master table into which data comes in 24*7.

The issue is that the number of records in this table is nearly 10 million, and a simple SELECT does not work because an internal table cannot hold such a huge amount of data.

So I wrote OPEN CURSOR dbcur FOR SELECT * FROM the table, and I fetch the results in packages using FETCH NEXT with PACKAGE SIZE.

Once I get one package, I submit it for background processing using JOB_OPEN / JOB_CLOSE.

The issue: suppose there are 1,000 entries in the table and the package size is 100, then ideally it should schedule 10 background jobs (i.e. do 10 fetch iterations).

But strangely it does not do 10 iterations; sometimes it does 8, sometimes 7.

NB: I am using DB_COMMIT for the commit operation in the program.
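(For clarity, that commit is just the plain database commit via the standard function module:)

CALL FUNCTION 'DB_COMMIT'.   " database commit only, without COMMIT WORK handling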

Meanwhile, while the cursor is open on the table, there is a fair chance that new entries are being inserted into the same table.

Has anyone experienced such a situation?

The simple flow is like this:

1. Open a cursor on table F.

2. Fetch the data using PACKAGE SIZE.

3. Schedule a background job for processing the data, using SUBMIT.

4. In the scheduled program, use DB_COMMIT to commit table R (not the same table on which the cursor is open).

5. Go for the next fetch again (as it all runs in a DO ... ENDDO loop); see the sketch below.
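Roughly, my coding looks like this (a simplified sketch only; the table name F, the package size 100, the report name ZPROCESS_PACKAGE and the job name are placeholders, and the hand-over of the package data to the submitted job is left out). I open the cursor WITH HOLD, as the cursor must survive the database commits:

DATA: lv_cursor   TYPE cursor,
      lt_package  TYPE STANDARD TABLE OF f,   " F = the master table (placeholder)
      lv_jobname  TYPE tbtcjob-jobname VALUE 'ZPROCESS_F',
      lv_jobcount TYPE tbtcjob-jobcount.

* Step 1: open the cursor WITH HOLD so it is not closed by database commits
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM f.

DO.
* Step 2: fetch one package
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_package
    PACKAGE SIZE 100.
  IF sy-subrc <> 0.
    EXIT.                              " no more data -> leave the loop
  ENDIF.

* Step 3: schedule one background job per package
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  SUBMIT zprocess_package              " placeholder report; it updates table R
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.                        " step 4: the report itself calls DB_COMMIT

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.                 " start immediately

* Step 5: loop back for the next fetch
ENDDO.

CLOSE CURSOR lv_cursor.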

So my question is: the number of fetch iterations should be the total number of entries divided by the package size, which does not hold true in my case.

Best Regards
