on 10-20-2011 4:14 AM
I have a question about how the delete is handled after the write of the archive files has taken place.
I am archiving data from a very large DSO. (160Gig).
I am archiving one year at a time. 2010 and 2009 worked fine.
When I archived 2008, the system handled the delete in a very different way. Rather than using a selective delete, the job tried to copy the entire table, excluding the records matching the archive criteria. I guess after that it would drop the table and reload it from the copy. But the table is 160GB, so the transaction log filled up and the system crashed.
Is there a threshold? I.e. if the number of records to be archived is below a certain value, the delete is handled as a normal selective delete, but if it is above the threshold, the system copies the table excluding the records to be deleted, drops the table, and reloads it from the copy?
If this is the case, can it be turned off? I.e. can the system be forced to always use the selective delete?
SAP has explained this situation:
The parameter is called BW_SELDEL_PERC_THRES and specifies the
percentage (an integer) from which the deletion is performed using
COPY/RENAME instead of DELETE. The ratio [number of records to be
deleted] / [number of all records in the table] is compared against
BW_SELDEL_PERC_THRES/100 (so the figures are in the range 0 - 1). If
the parameter is not set, the default value of 10 applies. Insert the
parameter into the RSADMIN table with program SAP_RSADMIN_MAINTAIN
(the same program can also be used to change or delete it).
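Reading SAP's description literally, the decision can be sketched as follows. This is a hypothetical illustration in Python, not actual SAP code; the function name and the exact comparison at the boundary (>= vs >) are assumptions:

```python
# Sketch of the deletion-strategy decision as described by SAP.
# All names here are illustrative; only the parameter name
# BW_SELDEL_PERC_THRES and its default of 10 come from SAP's reply.

DEFAULT_THRESHOLD = 10  # percent; used when BW_SELDEL_PERC_THRES is not set in RSADMIN

def choose_delete_strategy(records_to_delete, total_records,
                           threshold_percent=DEFAULT_THRESHOLD):
    """Pick the deletion method implied by SAP's formula."""
    ratio = records_to_delete / total_records  # figure in the range 0 - 1
    if ratio >= threshold_percent / 100:
        # Copy the remaining rows to a new table, drop the original,
        # rename the copy -- fast, but generates heavy log activity.
        return "COPY/RENAME"
    # Ordinary DELETE with a WHERE clause on the archive selection.
    return "SELECTIVE DELETE"

# One year out of a 160M-row table, evenly distributed over ~8 years:
print(choose_delete_strategy(20_000_000, 160_000_000))  # 12.5% >= 10% -> COPY/RENAME
print(choose_delete_strategy(5_000_000, 160_000_000))   # ~3.1% < 10% -> SELECTIVE DELETE
```

If this sketch matches the real behaviour, setting BW_SELDEL_PERC_THRES to a very high value (e.g. 100) via SAP_RSADMIN_MAINTAIN should make the COPY/RENAME path practically unreachable, effectively forcing the selective delete. That is an inference from the formula above, worth confirming with SAP before relying on it.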
Hi,
I am archiving data from a very large DSO. (160Gig).
I am archiving one year at a time. 2010 and 2009 worked fine.
When 2009 and 2010 worked fine, your DSO size would obviously have been reduced from 160GB to something smaller, correct?
When I archived 2008, the system handled the delete in a very different way. Rather than using a selective delete, the job tried to copy the entire table, excluding the records matching the archive criteria. I guess after that it would drop the table and reload it from the copy. But the table is 160GB, so the transaction log filled up and the system crashed.
I feel you may have given the selection wrongly for 2008. The best archiving sequence is 2008 --> 2009 --> 2010 ......
You have done it in reverse. Archiving never deletes the data outright; it actually moves data selectively from our InfoProviders to NLS, or to data files on any of the application servers or third-party systems.
Regards,
Suman
Hi Suman,
Thank you for your comments.
Since I posted the question, I have done further archiving and testing.
There are 2 distinct steps in the archiving process. The first step writes the archive files to the archive directory and then the second step deletes the data from DSO/InfoCube.
I have run 8 archive sessions and I see clearly that the delete step works differently depending on the amount of data in the archive selection. Sometimes it does a normal selective delete from the DSO/InfoCube, and other times it starts to copy the entire table excluding the archive selection. Even with some of the deletions already done, this table is still very large. So I am trying to establish if there is a setting somewhere in config which determines this threshold. I would like to force the selective delete method, since that process does not fill up the database transaction log.
Regards
Rob