
VBFA / KOCLU / CDCLS migration

Hello Experts!

I'm preparing my system (ERP 6.0) for migration.

What troubles me is that three tables, KOCLU / CDCLS / VBFA, are enormous (over 1 TB of data).

Is there any effective way to export/import them?
The export mechanism of SWPM doesn't allow exporting such big tables without splitting them first, but after splitting they cannot be imported in parallel (DB6 won't allow that).

I'm looking for an effective way to archive/delete data from those tables.

I know that those table clusters consist of the following tables:
KOCLU: KONV

CDCLS: CDPOS, PCDPOS

VBFA: is a transparent table.

But their sizes don't add up when I compare the results of RSATABLESIZE and DB02.

for example :
CDCLS is 1.084.491.616 KB big (+ an additional 47.896.736 KB for the index).

DB02 doesn't recognize tables CDPOS/PCDPOS and tells me that they don't exist.
At the same time, RSATABLESIZE shows their size as 3.218.210.788,75 KB (each of them!),
which is almost 3 TB + 3 TB, i.e. almost six times DB02's value for CDCLS if you take the sum into account.
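Just to spell out the arithmetic behind that factor (a quick check using only the numbers above):

```python
# Size figures as reported above (all in KB).
cdcls_db02_kb = 1_084_491_616        # CDCLS data size from DB02
cdpos_kb = 3_218_210_788.75          # CDPOS size from RSATABLESIZE
pcdpos_kb = 3_218_210_788.75         # PCDPOS size from RSATABLESIZE

total_kb = cdpos_kb + pcdpos_kb
print(f"CDPOS + PCDPOS = {total_kb / 1024**3:.2f} TB")        # ~5.99 TB
print(f"vs. DB02's CDCLS: {total_kb / cdcls_db02_kb:.1f}x")   # ~5.9x
```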

Shouldn't the sizes of CDCLS and CDPOS/PCDPOS match exactly?

Will archiving tables CDPOS/PCDPOS result in a smaller CDCLS? Or should I archive both of them?

The same applies to the KOCLU cluster:

DB02 informs me that KOCLU is 762.360.224 KB (+ 8 GB index),

while RSATABLESIZE for KONV shows 1.134.156.899,04 KB. Why are those values not the same? Even worse, why is KONV, a table that is part of KOCLU, bigger?

The main question is:

Should I tell the business to delete/archive data from KONV, CDPOS, and PCDPOS, or from the clusters CDCLS and KOCLU, or both? Any SAP Notes would be really helpful (there was one, 12904, but it was removed by SAP...).

Best regards,
Peter



3 Answers

  • Jan 11 at 02:51 PM

    Hi Peter,

    The tables you are interested in are table clusters. Since your system is still on ERP 6.0, you do not yet have the option to decluster those tables (Note 2227432). For systems based on SAP_BASIS 7.40 I always recommend declustering tables, but this is a different topic.

    If tables CDPOS and PCDPOS are still clustered, they do not exist at the database level. The database only knows the corresponding table cluster (e.g. CDCLS); the data of CDPOS and PCDPOS is stored in an SAP-compressed format inside CDCLS. Transactions like DB02, ST04, DBACOCKPIT, etc. extract the table sizes at the database level using database functions, and therefore tables CDPOS, PCDPOS, ... do not show up there.
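    As a toy illustration (this is not SAP's real cluster storage format, just the idea), here is why a catalog-level size query cannot see CDPOS or PCDPOS:

```python
import json, zlib

# Toy model of a table cluster: rows of several logical tables are
# serialized and compressed together into one physical table (CDCLS).
# This only illustrates the idea; the real on-disk format differs.
logical_tables = {
    "CDPOS":  [["0000000001", "MATERIAL", "U"]],
    "PCDPOS": [["0000000001", "PLKO", "D"]],
}

# One compressed blob per logical table, stored in the cluster.
physical_cdcls = {name: zlib.compress(json.dumps(rows).encode())
                  for name, rows in logical_tables.items()}

# The database catalog only knows the cluster table itself ...
db_catalog = {"CDCLS"}
assert "CDPOS" not in db_catalog       # hence DB02: "table does not exist"

# ... while the application layer can unpack the logical rows again.
restored = {name: json.loads(zlib.decompress(blob))
            for name, blob in physical_cdcls.items()}
assert restored == logical_tables
```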

    I am not very familiar with report RSATABLESIZE, but I assume that it just counts the rows of the clustered tables (CDPOS, PCDPOS, ...) by inspecting the data in the table cluster and multiplies the count by the table width defined in the SAP Data Dictionary. This is very crude and does not take any compression into account (neither the SAP compression of the data in the table cluster nor DB2 compression).
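    A sketch of that kind of estimate, with invented numbers (row count, record width, and compression factor are all assumptions here, not values from this system), shows why it can be far larger than what the database reports:

```python
# Hypothetical illustration of a "rows x dictionary width" size estimate.
# All numbers below are made up for the example.
row_count = 3_000_000            # rows found inside the cluster
ddic_width_bytes = 1_000         # logical record width from the SAP DDIC
compression_factor = 0.15        # combined SAP cluster + DB2 compression

estimated_kb = row_count * ddic_width_bytes / 1024    # RSATABLESIZE-style
on_disk_kb = estimated_kb * compression_factor        # what DB02 would see

print(f"estimated: {estimated_kb:,.0f} KB, on disk: {on_disk_kb:,.0f} KB")
```

    With compression in play, the dictionary-based estimate and the physical size can differ by a large factor in either direction, which matches the discrepancies you observed.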

    >Is there any effective way to export / import them ?
    > Export mechanism of swpm doesn't allow exporting such big tables without splitting them first, but after splitting they cannot be imported in parallel (db6 won't allow that)

    Db2 does not allow starting multiple concurrent LOAD operations into one database table. It is possible to split the table export and use multiple R3load processes that do concurrent INSERTs into the table. However, this may cause lock escalations or log-full problems if you are not careful with the commit count, or if other R3load processes start long-running database transactions in parallel while your parallel INSERTs use up the log space.
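    The commit-count point can be sketched generically; in this sketch sqlite3 stands in for Db2, and the table/column names are invented for the example (R3load configures its commit handling through its own parameters):

```python
import sqlite3

def insert_with_commit_count(conn, rows, commit_count):
    """Insert rows, committing every commit_count rows so that a single
    transaction never holds locks and log space for the whole load."""
    cur = conn.cursor()
    pending = 0
    for row in rows:
        cur.execute("INSERT INTO vbfa_part VALUES (?, ?)", row)
        pending += 1
        if pending == commit_count:
            conn.commit()           # frees log space, limits lock buildup
            pending = 0
    conn.commit()                   # flush the final partial chunk

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vbfa_part (vbeln TEXT, posnr INTEGER)")
insert_with_commit_count(conn, ((f"{i:010d}", i) for i in range(25_000)),
                         commit_count=5_000)
print(conn.execute("SELECT COUNT(*) FROM vbfa_part").fetchone()[0])  # 25000
```

    A small commit count bounds the log space any one loader can pin, at the price of more commit overhead; with several loaders inserting into the same table in parallel, that bound is what keeps the combined load from exhausting the log.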

    It is possible to split the table export and use the R3load option SPLITTED_LOAD to import the data. However, this requires a 3-step approach which may not be possible using SWPM:

    (1) One R3load process creates the table only, without indexes.

    (2) Multiple R3load processes with option SPLITTED_LOAD load data into temporary tables in parallel.

    (3) One R3load process with option SPLITTED_LOAD creates the indexes on the target table. The first index creation triggers a LOAD FROM CURSOR from the temporary tables into the target table.

    SPLITTED_LOAD may be faster than running concurrent INSERTs. However, since it needs this 3-step approach, it requires some expertise and possibly starting the R3load processes manually with the correct command-line parameters.

    I hope the DB6 migration experts can provide their opinion too.

    Regards

    Frank


  • Jan 14 at 12:49 PM

    Hi Peter,

    Frank has compiled a good summary of the options, and you may want to have a look at the old, but to a large extent still valid, IBM Redbook here:

    DB2 Optimization Techniques for SAP Database Migration and Unicode Conversion

    It contains a lot of small bits and pieces for optimizing large tables. Unfortunately, the SPLITTED_LOAD procedure is not mentioned there, as it was introduced after the book was published. But you will find some more details, like unsorted export, socket transfer, and so forth. For more recent information about export/import, you can check the following link. It describes optimization techniques in the context of a Unicode conversion, but most of them are valid for a heterogeneous system copy as well:

    General approach for downtime optimization

    A different approach would be to use the Database Migration Option (DMO) of the Software Update Manager (SUM):

    Database Migration Option of Software Update Manager

    Regards

    Thomas


  • Jan 15 at 11:49 AM

    Dear Thomas & Frank,
    Thank you very much for your time and answers!
    I am able to export and import the database with the splitting option, but as Frank mentioned, it takes an enormous amount of time (which is unacceptable to the business). This is a DB migration only from one OS to another, so SUM's DMO is unfortunately not an option.
    We are thinking about a migration with the socket/pipe option, but the danger of one simple network error (and thus having to start everything over again) is too great for us for now. Because of that, we are looking for efficient ways to archive data from those table clusters.

    Still, thank you both for the links, for the clarification of how RSATABLESIZE might work, and for the SPLITTED_LOAD option. I did ask SAP about that mechanism (which is not described at all in the guides), but they didn't recommend it.

    Best regards,

    Piotr
