
system copy with sapinst and R3load - SRM and table CFF_CONT

We are busy exporting an SRM system with sapinst/R3load.

We do not use database-specific tools because this is an ABAP+JAVA dual-stack system.

At first we tried to start the export with only the "Split STR" option, which was fine except for one huge table. There we discovered that R3load has a limitation on the number of exported data files: the maximum per export group is 256. In our case the group SAPAPPL1 reached file SAPAPPL1.256 and ended in error when it tried to write the next file (257).

In the log we had:

(WTF) ERROR: too many data file entries for table CFF_CONT

(TFH) ERROR: Unable to close /sapmnt/PSR/export/ABAP/DATA/SAPAPPL1.257

(TFH) ERROR: OS error message: Bad file number

(DB) INFO: disconnected from DB

We saw here that the total size of this SRM database is 340 GB, and the cluster table CFF_CONT alone is about 270 GB; thus without that table the system is only about 70 GB.

The table itself (CFF_CONT) looks like just over 1 GB, but it uses a LOB segment of about 270 GB.
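
For reference, the split between the table segment and its LOB segment can be checked in the Oracle data dictionary; a minimal sketch, assuming the schema owner is SAPSR3 (adjust to your actual owner):

    -- Size of the table segment and of the LOB segment(s) belonging to CFF_CONT, in GB;
    -- the owner SAPSR3 is an assumption
    SELECT s.segment_name, s.segment_type, ROUND(s.bytes/1024/1024/1024, 1) AS gb
      FROM dba_segments s
     WHERE s.owner = 'SAPSR3'
       AND (s.segment_name = 'CFF_CONT'
            OR s.segment_name IN (SELECT l.segment_name
                                    FROM dba_lobs l
                                   WHERE l.owner = 'SAPSR3'
                                     AND l.table_name = 'CFF_CONT'));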

This huge table required us to restart the export with different parameters and to use the Table Splitting Preparation,

with a file like SplitTable.TXT containing 1 line: CFF_CONT%4 and the option ROWID.

This splits the table into 4 (4 WHR files) and avoids reaching the 256-file limitation of R3load.

Our question: is that the best recommended option for that table?

It would also be nice to know what that table is for, and whether it could be archived or cleaned up by the business.

Regards.


11 Answers

  • Best Answer
    Posted on Dec 12, 2011 at 10:18 AM

    Hi,

    You should go with the table splitting option,

    with a file like SplitTable.TXT containing 1 line: CFF_CONT%10 and the option ROWID.

    This splits into 10 (10 WHR files); see the example line below.

    By splitting into 10 or more files, such a big table can be exported and imported much faster, saving a lot of time.
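
    A sketch of the corresponding split file line, using the same TABLE%n;ROWID syntax quoted elsewhere in this thread:

        CFF_CONT%10;ROWID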

    Hope this helps.

    Regards,

    Deepak Kori


  • Posted on Dec 12, 2011 at 10:20 AM

    Hm, my SRM (5.5) does not even have a CFF_CONT table...

    First of all I would ask the business whether data can be deleted prior to the system copy. 300 GB sounds like a lot of data; most probably you can get rid of older data, or you might need to establish data archiving.

    Second, I would split the table into more than 4 packages; I would rather choose something like 24 packages given the size of the table (for performance reasons). I would also try a few dry runs to figure out the best strategy. But your problem only seems to be the file limitation per package, so with 4 packages you should only get around 70-80 files per package (see the rough calculation below).
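
    As a rough check, based on the numbers quoted above: 270 GB split into 4 packages is about 67.5 GB per package, and at the default size of roughly 1 GB per data file that comes to about 68 files per package, comfortably below the 256-file limit.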

    But first of all, double-check whether you really need to keep all that data.

    Cheers Michael


  • Posted on Dec 12, 2011 at 11:06 AM

    Hello Raoul,

    Of course the first approach should be the one mentioned by Michael: migrate only the data that is needed and get rid of the old stuff by deleting it first.

    You only have an issue with one table, and I guess your bottleneck is not the application server part (where R3load runs), so I would go for the PL/SQL package approach with sap_r3load (SAP Note 960280; unfortunately this note is under review right now).

    Afaik the table CFF_CONT (I don't have a system with that table right here) does not need to be exported sorted, so unload it unsorted and with parallel query using the sap_r3load package. The degree of parallelism depends on your database server hardware of course, but you can rock it hard with that approach 😊
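
    Purely as an illustration of the parallel query side (this is not the sap_r3load package itself, and the schema owner SAPSR3 is an assumption):

        -- Check the current parallel degree, raise it for the export, and reset it afterwards;
        -- owner SAPSR3 is an assumption, and the degree should match your CPU/IO capacity
        SELECT degree FROM dba_tables WHERE owner = 'SAPSR3' AND table_name = 'CFF_CONT';
        ALTER TABLE SAPSR3.CFF_CONT PARALLEL 8;
        -- ... run the unsorted export of that package ...
        ALTER TABLE SAPSR3.CFF_CONT NOPARALLEL;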

    Regards

    Stefan


  • Posted on Dec 12, 2011 at 10:40 PM

    Hi,

    Maybe you can increase the size of the export dump files as well.

    I think the default might be 650 MB (to be burnable to a CD), but you can increase this by tweaking the cmd file, if I remember correctly (see the example below).
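
    For instance, the fs= value on the dat: line of the package .cmd file sets the maximum size of each data file; the modified line from later in this thread looks like this:

        dat: "/sapmnt/PSR/export/ABAP/DATA/" bs=1k fs=4000M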

    If the target DB is Oracle as well, you might try to use Data Pump for this single table.
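
    A minimal Data Pump sketch for just that table (the connection, the directory object EXP_DIR, the schema owner SAPSR3, and the file sizes are placeholders, not values from this thread):

        expdp system@PSR tables=SAPSR3.CFF_CONT directory=EXP_DIR \
              dumpfile=cff_cont_%U.dmp filesize=4G parallel=4 logfile=cff_cont_exp.log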

    Volker


  • Posted on Dec 12, 2011 at 12:24 PM

    Thanks all already for the answers...

    It looks like that table has only a few fields (columns), and the Table Splitting Preparation tool did not let me split it into more than 4; everything above that ended with a warning about too few fields to split it that much.

    Now it is interesting that this table is not found in SRM 5.5 for certain people.

    Maybe I will open an OSS message to SAP to learn a bit more about this table.

    The business has already been asked to look into it, as archiving, if possible, sounds really good in that context.


  • Posted on Dec 12, 2011 at 04:27 PM

    FYI

    CFF_CONT is a special cluster table owned by the cFolders component, holding documents in different formats.

    That table does use LOB segments.


    • Hello Raoul,

      I have checked the table in a cFolders system (450_700 SP06) just now, and I did remember right.

      By the way, this table is not a cluster table (like RFBLG or CDCLS), so if you still need all the data you can go for the unsorted parallel query approach.

      Regards

      Stefan

  • Posted on Dec 13, 2011 at 05:50 AM

    The best way to prepare for the table split on Oracle is to use the PL/SQL splitter, whose use is described in SAP Note 1043380. Since the table has a LOB field, the best way to figure out the number of splits is to look at the number of rows in the table rather than its size, so please check the row count of the table.

    Also, ROWID-based splitting is not supported for cluster tables. Hence, look at the primary key of the table, figure out the column with the highest cardinality, and split the table based on that field using the PL/SQL splitter. The splitter is also optimized to calculate the ranges for cluster and pool tables, so just start with a value of 30 splits and you will find the splitter doing its best.
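
    A sketch of the row count check mentioned above (the schema owner SAPSR3 is an assumption):

        -- Total number of rows, as a basis for deciding how many splits make sense
        SELECT COUNT(*) FROM SAPSR3.CFF_CONT;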


  • Posted on Dec 13, 2011 at 03:03 PM

    The "Table Splitting Preparation" has been used (sapinst from NW 7.0 SR3).

    In the split text file the option was: CFF_CONT%4;ROWID

    (This probably needs to be changed for better performance, maybe by using a key column/field instead of ROWID, and by splitting into more than only 4; at least 20 splits would probably be a lot better.)

    The above worked as expected, splitting into 4 export groups.

    We also changed the command file CFF_CONT.cmd in

    /tmp/sapinst_instdir/SRM05/LM/COPY/ORA/EXP/CENTRAL/AS/EXP

    tsk: "/tmp/sapinst_instdir/SRM05/LM/COPY/ORA/EXP/CENTRAL/AS/EXP/CFF_CONT.TSK"

    icf: "/sapmnt/PSR/export/ABAP/DATA/CFF_CONT.STR"

    dcf: "/sapmnt/PSR/export/ABAP/DB/DDLORA_LRG.TPL"

    dat: "/sapmnt/PSR/export/ABAP/DATA/" bs=1k fs=4000M

    dir: "/sapmnt/PSR/export/ABAP/DATA/CFF_CONT.TOC"

    - Line 4 (the dat: line) => fs changed from 1000M to 4000M

    To sum all this up, our next attempt to optimize will be to try other options in the split text file.

    This is what the table looks like in the DDIC:

    Field     Data element  Type  Length

    MANDT     MANDT         CLNT  3
    RELID     INDX_RELID    CHAR  2
    PHIO_ID   SDOK_PHID     CHAR  32
    SRTF2     INDX_SRTF2    INT4  10
    PH_CLASS  SDOK_PHCL     CHAR  10
    CLUSTR    INDX_CLSTR    INT2  5
    CLUSTD    SDOK_DATAX    LRAW  32000

    The first four fields are part of the primary key.

    (The cluster data part, CLUSTD, is in the LOB segment.)
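
    A quick way to confirm which column the LOB segment belongs to (sketch; the schema owner SAPSR3 is an assumption):

        -- Map the LOB segment back to its table column
        SELECT column_name, segment_name
          FROM dba_lobs
         WHERE owner = 'SAPSR3'
           AND table_name = 'CFF_CONT';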


  • Posted on Dec 13, 2011 at 03:12 PM

    What would be the content of the split.txt file if we want to use more than one FIELD (sapinst, Table Splitting Preparation)?

    replacing

    CFF_CONT%4;ROWID

    by

    CFF_CONT%20; MANDT RELID PHIO_ID SRTF2

    Would this be OK (a space as the field separator), or should we use a semicolon separator,

    like in

    CFF_CONT%20; MANDT; RELID; PHIO_ID; SRTF2


    • Do not specify any column name; R3ta will figure out the field to be used for splitting.

      Otherwise, find the field as follows.

      Run the query

      select count(distinct(<field_name>)) from <sapsid>.<tabname>;

      against each column of the primary key.

      Select the column with the highest number of distinct values as the split column.
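
      For CFF_CONT, with the key fields listed above, that would look something like this (the schema owner SAPSR3 is an assumption; use your actual schema owner):

          -- One count per primary key column; pick the column with the highest result
          SELECT COUNT(DISTINCT mandt)   FROM SAPSR3.CFF_CONT;
          SELECT COUNT(DISTINCT relid)   FROM SAPSR3.CFF_CONT;
          SELECT COUNT(DISTINCT phio_id) FROM SAPSR3.CFF_CONT;
          SELECT COUNT(DISTINCT srtf2)   FROM SAPSR3.CFF_CONT;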

  • Posted on Dec 13, 2011 at 03:55 PM

    Or maybe the sequence of key fields should only go into the R3ta_hints.txt file located in the kernel exe directory,

    which would then contain the following:

    CFF_CONT MANDT RELID PHIO_ID SRTF2

    Right?

