Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

OPEN DATASET FOR OUTPUT functions like append rather than overwrite

0 Kudos

Hi All,

overview:

We have an issue with one of the Z reports that has been in production for more than 5 years. The job executes successfully 99.5% of the time but fails once or twice a year.

Details of the report: The report creates two files. The first file gets data from the DB and is written as an output file. There is a SUBMIT statement within this report which opens this written file (why SUBMIT? - the submitted report is reused; it performs multiple tasks!) and generates another file. Very rarely, OPEN DATASET FOR OUTPUT behaves like append rather than overwriting the file contents.

0****** ***** 1*****1***310000000*****00000000****0000*******0

************** 01092*****2**000000***005***00201***** R******** **C##

************** 01092*****2**000000***005***00201***** R******** **C##

************** 01092*****2**000000***005***00201***** R******** **C##

************** 01092*****2**000000***005***00201***** R******** **C##

************** 01092*****2**000000***005***00201***** R******** **C##

***********ES##

**Further lines present***

The last line ***********ES## is left over from the previous contents of the file, which causes the job to fail.

13 REPLIES

Sandra_Rossi
Active Contributor
0 Kudos

As clearly explained in the documentation, for FOR OUTPUT: "If the specified file already exists, its content is deleted." There is no exception to that, so you should search for patches on the SAP support website, or open a ticket.
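For reference, this is the documented pattern (just a minimal sketch; the file path and the message text are placeholders):

DATA lv_file TYPE string VALUE '/tmp/ztest_outfile.txt'.  "placeholder path

* FOR OUTPUT creates the file if it does not exist and deletes any existing content
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
  MESSAGE 'Could not open file for output' TYPE 'E'.
ENDIF.

TRANSFER 'first line of the new run' TO lv_file.
CLOSE DATASET lv_file.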

FredericGirod
Active Contributor
0 Kudos

Maybe it is just a memory problem inside the report. If the OPEN DATASET statement is performed via the SUBMIT, the issue could lie somewhere before this call.

0 Kudos

Hi Girod,

Thanks for the input. But if it were a memory issue, wouldn't we be facing this every day, or at least frequently? Also, if we re-run the job, it completes successfully without any issues.

FredericGirod
Active Contributor
0 Kudos

Maybe not. When the job has a problem, does it run late? Does the writing program run twice?

Another question: is the filesystem a real (local) filesystem or a network filesystem, like NFS?

0 Kudos

The runtime depends on the data selected from the DB. The last time the job failed, it did so within 3 seconds. The write completes successfully; the issue is when we read that data into the work area and try to manipulate it.

For some reason, FOR OUTPUT behaves like append: it overwrites the previous data from the start of the file up to the last offset of the current data, and pushes the remaining old contents to the next available offset.

We are not sure if it's a hard error, since this job has been running for more than 15 years and we have faced this issue maybe 5-10 times in that period. Re-running it resolves the problem.

And yes, it's a real filesystem.

Sandra_Rossi
Active Contributor
0 Kudos

Can you explain more precisely how the two programs work with the two files, and which of the two files is affected by the issue? I ask because file writes are sometimes not flushed completely by the operating system, so you may need to add a wait in ABAP and compare the number of bytes written with the number of bytes read, to make sure you've got the complete file...
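Something along these lines, for example (just a rough sketch; I compare the number of lines rather than bytes to keep it simple, and the internal table it_rec1, the memory ID and the retry limit are arbitrary):

* In ZTEST: count the lines transferred and pass the count to ZTEST1, e.g. via ABAP memory
DATA lv_lines_written TYPE i.
LOOP AT it_rec1 INTO rec1.
  TRANSFER rec1 TO infile.
  lv_lines_written = lv_lines_written + 1.
ENDLOOP.
EXPORT lv_lines_written TO MEMORY ID 'ZTEST_LINES'.

* In ZTEST1: re-count the lines and wait/retry until the file is complete
DATA: lv_lines_expected TYPE i,
      lv_lines_read     TYPE i.
IMPORT lv_lines_written TO lv_lines_expected FROM MEMORY ID 'ZTEST_LINES'.

DO 10 TIMES.                        "arbitrary retry limit
  lv_lines_read = 0.
  OPEN DATASET infile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
  DO.
    READ DATASET infile INTO rec1.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
    lv_lines_read = lv_lines_read + 1.
  ENDDO.
  CLOSE DATASET infile.
  IF lv_lines_read >= lv_lines_expected.
    EXIT.                           "file looks complete
  ENDIF.
  WAIT UP TO 1 SECONDS.             "give the OS time to flush
ENDDO.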

mohit_dev
Contributor
0 Kudos

Have you tried to use the OPEN DATASET statement with SMART LINEFEED:

OPEN DATASET <.....> FOR OUTPUT .......... WITH SMART LINEFEED.

0 Kudos

Hi Sandra,

To your question,

Let's assume we have two files, infile and outfile, used in a report ZTEST.

Note: rec1 has the same structure in both reports.


REPORT ZTEST.

OPEN DATASET infile FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

TRANSFER rec1 TO infile.

CLOSE DATASET infile.

SUBMIT ZTEST1 WITH *selection_params*.

REPORT ZTEST1.

OPEN DATASET infile FOR INPUT IN TEXT MODE ENCODING DEFAULT.

OPEN DATASET outfile FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

* Pick records from infile
DO.
  READ DATASET infile INTO rec1. "Work area of type 1
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
* This is where the issue occurs: the data is not completely erased when
* OPEN DATASET FOR OUTPUT is used; instead, only the first few lines are
* replaced. We push the data from the work area in TEXT MODE. The previous
* day, 100 records were written to the file; the next day, only 5. The first
* 5 lines of the file were replaced with the new ones, and the remaining 95
* records were pushed down, so their offsets changed.
* Do manipulation/calculation on the data and copy the contents into the
* REC2 internal table.
ENDDO.

LOOP AT rec2 INTO wa_rec2.
  TRANSFER wa_rec2 TO outfile.
ENDLOOP.

CLOSE DATASET infile.

CLOSE DATASET outfile.

Sandra_Rossi
Active Contributor
0 Kudos

Thank you. The process seems very straightforward. I think it's possible that ZTEST1 reads infile before it's completely flushed by ZTEST, but that doesn't explain the symptom.

Is it possible that you have several parallel processes running your program? And do you have several application servers? The directories of each application server should be logically mapped to the same physical location; maybe one is not mapped, or something like that.

If you can't find what the issue is, maybe you should avoid using an intermediate file (infile) and use memory instead (if the data is not too big). Otherwise, you could log some information to help troubleshoot the issue when it occurs next time (number of characters written, size of the file after it's written, number of characters read, size of the file when it's about to be read, etc.).
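For the logging part, something like this could be added just before the file is read (a rough sketch; reading the whole file into an XSTRING assumes it fits into memory):

DATA: lv_raw  TYPE xstring,
      lv_size TYPE i.

* Determine the size of the server file in bytes just before it is read
OPEN DATASET infile FOR INPUT IN BINARY MODE.
READ DATASET infile INTO lv_raw.
CLOSE DATASET infile.

lv_size = xstrlen( lv_raw ).
WRITE: / 'Size of infile before read (bytes):', lv_size.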

DoanManhQuynh
Active Contributor
0 Kudos

I think "the data look like append instead of rewrite" is the result of error not the cause, i mean if open file process interrupted then file might not be completed. it would be hard to find out why but as it already happened 5,10 times i think you could look in file data if is there any pattern make the read file process dump, or check sap system status at that time...

Thanks Sandra and Quynh.

sandra.rossi It's just a normal job that picks data from the DB and writes it to a file; there are no parallel processes or mass activities involved. We did come up with multiple solutions, but we have a strict rule of not moving new changes to production unless the root cause of the issue can be determined 😞

I've tried replicating this issue in all possible scenarios, like:

1. Opening a file in write mode and in read mode in multiple sessions and performing some processing on the file.

2. Manually creating runtime exceptions while still in write mode.

3. Opening multiple files in write mode, etc.

It's like one of those bugs game developers face, where an issue occurs on one in a million systems playing the same game 🙂

quynh.doanmanh: Unfortunately, when the job fails, the state of the file has already changed and there is no way to retrieve it, since it's overwritten (or half appended, in this case). We do not have a backup of these files, and neither does the integration team (TIBCO). Once they transfer a file, it's gone from their folder as well. We have been taking a backup of the file since the last time this issue occurred and have not faced any issues since then.

Sandra_Rossi
Active Contributor
0 Kudos

Thanks for the feedback.

So, if the rule is strictly applied, all issues that cannot be reproduced have to be recovered manually. What if an issue happens 100 times a day but cannot be reproduced... 🙂

Well, it's very rare, so a manual recovery is a reasonable choice.

I just had a very interesting conversation with a .NET developer, and it turns out he has faced exactly the same issue as the one described in this question. They had a setup where the same file was used by multiple threads (all manually written code, a kind of distributed system), and they assumed their code was thread-safe by default (this was almost 10 years ago). Unfortunately, one thread was reading content from the file while another thread was writing data to the same file. The writer was supposed to overwrite the complete file, but residual content from the previous day was still present along with the new content. They implemented a singleton to overcome this issue, along with a few other minor changes.

It all started with the developer assuming that threads and concurrency were handled automatically, which they were not.

This doesn't solve our issue, but I just wanted to share. Maybe one of the processes failed during execution? We even tried getting the logs from the UNIX team, but we found nothing 🙂
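Just for completeness, I guess the ABAP equivalent of that single-writer guarantee would be a generic enqueue lock around the file handling, something like the sketch below (the lock arguments are made up, and ENQUEUE_E_TABLE is only used here as a generic lock; it is not something our job currently does):

* Request a lock so that only one process writes the file at a time
CALL FUNCTION 'ENQUEUE_E_TABLE'
  EXPORTING
    tabname        = 'ZTEST_FILE_LOCK'   "dummy lock argument, not a real table
    varkey         = 'INFILE'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  MESSAGE 'Another process is currently writing the file' TYPE 'E'.
ENDIF.

OPEN DATASET infile FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
TRANSFER rec1 TO infile.
CLOSE DATASET infile.

* Release the lock once the file has been written completely
CALL FUNCTION 'DEQUEUE_E_TABLE'
  EXPORTING
    tabname = 'ZTEST_FILE_LOCK'
    varkey  = 'INFILE'.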