04-17-2013 12:23 PM
Hello All,
We have a list report which displays contents of an *.xml file.
It uses WRITE statements with the COLOR option, looping over an internal table, to display the data of the file.
The internal table has a single column of type CHAR with length 3600.
This internal table has 0.5 million records.
Currently, on one of these WRITE statements, the program dumps with the error TSV_TNEW_PAGE_ALLOC_FAILED.
It is happening only for one such case.
The memory is probably insufficient.
Is there any specific SAP documentation to explain the memory usage of write statement in such case?
Thanks,
Prashant
04-17-2013 2:28 PM
How are you using the write statement? It's highly unlikely that the issue is directly related to WRITE, so you're not going to get any documentation about memory usage of "WRITE".
04-18-2013 6:23 AM
Hi Matthew,
The following code is executed within a loop over an internal table of 0.5 million records:
* (the enclosing LOOP AT and the opening ON CHANGE OF statement were not included in the post)
    color = color + 1.
    color = color MOD 2.
  ENDON.
  FORMAT RESET.
  IF color = 1.
    WRITE: /2(79) space COLOR 2.
    IF text1 = ' 0'.
      text = sy-tabix.
      WRITE: 3 text COLOR 2, 11 text2 COLOR 2.
    ELSE.
      WRITE: 2 '+' COLOR 2, 3 text1 COLOR 2, 11 text2 COLOR 2.
    ENDIF.
  ELSE.
    WRITE: /2(79) space COLOR 4.
    IF text1 = ' 0'.
      text = sy-tabix.
      WRITE: 3 text COLOR 4, 11 text2 COLOR 4.
    ELSE.
      WRITE: 2 '+' COLOR 4, 3 text1 COLOR 4, 11 text2 COLOR 4.
    ENDIF.
  ENDIF.
Thanks,
Prashant
04-18-2013 6:58 AM
I assume then that your logic is:
LOOP AT data.
  WRITE data.
ENDLOOP.
My guess is that you're already on the cusp of running out of memory. Any action - writing to a list or whatever - increases memory usage to some extent. The WRITE just pushes it over the edge.
You can check by running in debug with the memory monitor. How much does memory usage increase with each iteration of your loop?
I do wonder, however, what the purpose is of outputting half a million records? Who is ever going to read it?!
04-18-2013 7:06 AM
Hi Matthew,
It is an old report used to view the data of files stored on a UNIX server.
We are thinking of a workaround to download the file separately and then viewing it.
It will spare us the short dump.
Thanks,
Prashant
04-17-2013 2:58 PM
You could try to debug your report and, before the dump occurs (not always easy), run a memory use analysis?
(ref: The Memory Analysis tool)
Regards,
Raymond
04-18-2013 6:49 AM
Hi Raymond,
The memory analysis showed that the internal table was consuming the bulk of the memory.
However, the dump occurs only after the data has been read completely from the dataset, while looping over this internal table.
Thanks,
Prashant
04-17-2013 3:13 PM
Yes, the memory is insufficient. I think the data volume is too large.
Check with the Basis team; they can increase the memory.
04-18-2013 6:26 AM
Hi Sai,
It is happening with this one program only and that too for just one file.
So increasing the memory is not a good idea, as per the basis team.
Thanks,
Prashant
04-17-2013 7:04 PM
Hi Prashant,
The dump means that there is not enough runtime memory to hold the data required to complete the process. The cause may not be the WRITE statement itself. The best thing to do is to check the memory parameters or to reduce the data volume being processed.
Could you please provide the complete long text of the dump? It would give us more insight into what actually occurred.
Best regards,
Praveenkumar T
04-18-2013 6:41 AM
Hi Praveen,
Thanks a lot, but due to our client's privacy policies we cannot divulge such information on public portals.
Thanks,
Prashant
04-18-2013 7:11 AM
What use will you have for the spool thus generated? Could you consider "breaking" it into smaller spools, with a database commit between every spool open/close to free some memory (NEW-PAGE PRINT ON/OFF)?
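A rough sketch of that idea, assuming the lines are already in an internal table (all names here are illustrative; ls_params would be a PRI_PARAMS structure filled beforehand, e.g. via the function module GET_PRINT_PARAMETERS):

```abap
* Sketch only: split the output into one spool request per 50 000 lines,
* closing each spool (NEW-PAGE PRINT OFF) and committing before opening
* the next one, so memory and spool resources are released in between.
DATA: lv_count TYPE i,
      lv_line  TYPE char3600.

LOOP AT itab INTO lv_line.
  IF lv_count MOD 50000 = 0.
    IF lv_count > 0.
      NEW-PAGE PRINT OFF.      " close the current spool request
      COMMIT WORK.             " free resources between spools
    ENDIF.
    NEW-PAGE PRINT ON NO DIALOG PARAMETERS ls_params.
  ENDIF.
  WRITE: / lv_line.
  lv_count = lv_count + 1.
ENDLOOP.
NEW-PAGE PRINT OFF.            " close the last spool request
```

The chunk size of 50 000 is arbitrary; it would need tuning against the actual memory profile.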
Regards,
Raymond
04-18-2013 7:17 AM
Hi Raymond,
The report displays the data of a file from a UNIX server.
We are currently working on a workaround because this report is used by a lot of users and the impact is very high.
Also, we generally do not have such bulky files, so we were looking for a small break-fix to get this program up and working.
Thanks,
Prashant
04-18-2013 8:48 AM
As the file is read from a UNIX server, I guess you use READ DATASET to retrieve the contents?
If that is the case, you could break up the reading and writing of the file in several parts. E.g. after reading 100 lines, write 100 lines and then proceed to read the next 100 lines.
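A rough sketch of that chunked approach (lv_filename is assumed to hold the UNIX file path; the 3600-character line type matches the table described earlier in the thread):

```abap
* Sketch only: read and write the file in chunks of 100 lines, so the
* internal table never holds more than one chunk at a time.
DATA: lt_chunk    TYPE STANDARD TABLE OF char3600,
      lv_line     TYPE char3600,
      lv_filename TYPE string,          " assumed to hold the UNIX file path
      lv_eof      TYPE abap_bool.

OPEN DATASET lv_filename FOR INPUT IN TEXT MODE ENCODING DEFAULT.

WHILE lv_eof = abap_false.
  CLEAR lt_chunk.
  DO 100 TIMES.                         " fill one chunk
    READ DATASET lv_filename INTO lv_line.
    IF sy-subrc <> 0.
      lv_eof = abap_true.               " end of file reached
      EXIT.
    ENDIF.
    APPEND lv_line TO lt_chunk.
  ENDDO.
  LOOP AT lt_chunk INTO lv_line.        " write out the chunk
    WRITE: / lv_line.
  ENDLOOP.
ENDWHILE.

CLOSE DATASET lv_filename.
```

Note that the list itself still grows with every WRITE, so this mainly helps when the internal table, rather than the list buffer, is the dominant consumer.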
04-18-2013 7:16 AM
Hi,
I'm sorry, but the design of this program sounds loco.
Loading 500K records into an internal table, and then trying to write it all out to the spool?
That's too much data for anyone to make sense of (even with color coding). Is it meant to be stored away, for auditing purposes?
It might be better to create an ALV report where the users can select which parts of the file they want to view. This way they can use sorting, view it on the screen, hide columns, etc.
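A minimal sketch of that ALV idea (assuming the file lines are already in an internal table lt_data; cl_salv_table is the standard simple-ALV API):

```abap
* Sketch only: display the internal table in an ALV grid instead of a
* classic WRITE list, so users can sort, filter and scroll the data.
DATA lo_alv TYPE REF TO cl_salv_table.

TRY.
    cl_salv_table=>factory(
      IMPORTING r_salv_table = lo_alv
      CHANGING  t_table      = lt_data ).
    lo_alv->get_functions( )->set_all( abap_true ).  " enable toolbar functions
    lo_alv->display( ).
  CATCH cx_salv_msg.
    MESSAGE 'Could not create ALV display' TYPE 'E'.
ENDTRY.
```

Combined with a selection screen that restricts which part of the file is loaded, this would also keep the memory footprint down.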
Or perhaps just use an XML editor to view the file?
cheers
Paul
04-18-2013 7:21 AM
Hi Paul,
It is a very old report, and generally we do not encounter any such dumps.
It is only for this particular file that we received this dump.
So we were thinking of a quick break-fix rather than changing the process.
This report is generally used to read files from UNIX on the application layer.
We do not have such bulky files in general; it is just a one-off case.
Thanks,
Prashant