Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Short Dump During List processing

Former Member

Hi,

Problem description:

A program, when run in production, has a huge amount of data to display in its list. During the WRITE statements, the internal table that the system assigns for list processing (%_LIST) exceeds the memory limit and the program terminates with a short dump.

Please suggest a better solution along the following lines; I am on release 4.7 only.

1. Once the program reaches the maximum memory limit, I have to clear the list from memory:

- How do I send the already processed list to another persistent area, such as the spool?

- How do I clear the memory allocated to the internal list (is any C function or class available for this)?

- How do I determine programmatically how much memory the list processing uses (for the internal table %_LIST)?

Thanks in advance

Best regards,

Raj

3 REPLIES

Former Member

Hi,

Could you try to process the data in batches instead of doing it all in one go? Also, CLEAR/REFRESH/FREE internal tables once you have finished with them.
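A minimal sketch of the batching idea, assuming a hypothetical database table ZTAB and a hypothetical subroutine PROCESS_PACKAGE (neither is from the original post):

```abap
* Sketch: read and process the data in packages instead of
* loading everything into one huge internal table.
DATA: lt_data TYPE STANDARD TABLE OF ztab.

SELECT * FROM ztab
         INTO TABLE lt_data
         PACKAGE SIZE 10000.

  PERFORM process_package TABLES lt_data.

* CLEAR   - initializes the table body
* REFRESH - deletes all rows but keeps the allocated memory
* FREE    - deletes all rows AND releases the memory back
  FREE lt_data.

ENDSELECT.
```

FREE (rather than CLEAR or REFRESH) is the relevant statement here, since the goal is to give the memory back between packages.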

Former Member

Hi Raj,

Why don't you go for prevention rather than cure?

When the user selects the data ranges for the report, you could do a calculation to determine the size of the report and then ask them to restrict the criteria. They could then run several reports rather than trying to do it all in one.
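A sketch of that prevention check, assuming a hypothetical table ZTAB, a date column ERDAT, a select-option S_DATE, and an arbitrary threshold (all placeholders, not from the original post):

```abap
* Sketch: estimate the result size up front and reject
* selections that would produce an unmanageable list.
DATA: lv_count TYPE i.

SELECT COUNT(*) FROM ztab
       INTO lv_count
       WHERE erdat IN s_date.

IF lv_count > 100000.    "threshold is an assumption - tune it
  MESSAGE 'Too many records - please restrict the selection'
          TYPE 'E'.
ENDIF.
```

Issuing the message with TYPE 'E' on the selection screen keeps the user there until the criteria are narrowed.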

I would guess that if you are running out of memory that the report must be HUGE. Who is going to look at something that large?

You could also increase the memory allocation to each work process (although this is a bit of a sledgehammer to crack a nut).

As for your questions:

1. I'm not sure you can persist the spool yourself, and certainly not from within your program. You could, however, spin off jobs from the main program which each had a portion of the overall data to process and thus each would generate a smaller spool (although collectively they would be the same size).
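A sketch of spinning off such jobs with the standard JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE pattern; the report name ZREPORT and the date range LR_DATE_PART are placeholders for one portion of the overall selection:

```abap
* Sketch: run one portion of the data as a background job,
* so each job produces its own (smaller) spool.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZREPORT_PART',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname          = lv_jobname
  IMPORTING
    jobcount         = lv_jobcount
  EXCEPTIONS
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    OTHERS           = 4.

IF sy-subrc = 0.
  SUBMIT zreport WITH s_date IN lr_date_part  "this portion only
         VIA JOB lv_jobname NUMBER lv_jobcount
         AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'.      "start immediately
ENDIF.
```

Repeating this in a loop, with a different slice of the selection per job, splits one oversized spool into several smaller ones.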

2. As I don't think you can achieve 1, I don't think there is any point to 2.

3. There was a thread discussing internal table memory consumption just today, with approaches for 6.20+ and 4.6 and below. Here's the link:

Cheers,

Brad


Hi Brad,

Thanks for your detailed reply. I'll investigate this issue further and try to provide a cure... otherwise, as you advised, I'll just prevent the issue!

Once again, thanks a lot.

Cheers,

Raj