Former Member

Database migration to MAXDB and Performance problem during R3load import

Hi All Experts,

We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We exported the 1.2 TB database in 16 hours using the package and table-level splitting method.

Now I am importing into MaxDB, but the import is running very slowly (more than 72 hours).

Details of the import process are below.

We are using the Distribution Monitor to import into the target system, which runs MaxDB release 7.7, with R3load processes distributed across three parallel application servers, each with 8 CPUs.

The database server is configured with 8 CPUs (single core) and 32 GB of physical RAM; 24 GB is allocated to the MaxDB cache for the DB instance. As per SAP's recommendation, we are running 16 parallel R3load processes. The import is still too slow at more than 72 hours, which is not acceptable.

We split 12 big tables into smaller units using table splitting, and we also split the packages so they run in parallel. We maintained the load order in descending order of table and package size, but still cannot improve the import performance.

The MaxDB parameters are set as follows:

CACHE_SIZE 3407872

MAXUSERTASKS 60

MAXCPU 8

MAXLOCKS 300000

CAT_CACHE_SUPPLY 262144

MaxTempFilesPerIndexCreation 131072
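A quick sanity check on these values (MaxDB pages are 8 KB, so CACHE_SIZE is given in pages; the numbers below are taken from the list above):

```shell
# Sanity-check the parameter values above. MaxDB uses 8 KB pages,
# so CACHE_SIZE is expressed in pages, not bytes.
CACHE_SIZE_PAGES=3407872
echo "cache size: $((CACHE_SIZE_PAGES * 8 / 1024 / 1024)) GB"   # 26 GB

# MaxTempFilesPerIndexCreation should stay below 1/3 of the cache (in pages):
MAX_TEMP_FILES=131072
echo "1/3 of cache: $((CACHE_SIZE_PAGES / 3)) pages (configured: $MAX_TEMP_FILES)"
```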

We are using recent releases of all required SAP kernel utilities (R3load, etc.) for this process.

I would like to request input from SAP and MaxDB experts on all possible ways to improve R3load import performance on MaxDB.

Every input will be highly appreciated.

Please let me know if I need to provide more details about import.

Regards

Santosh



6 Answers

  • Former Member
    Nov 15, 2011 at 11:25 AM

    I would like to add more details here.

    R3load options

    -nolog -c 10000 -loadprocedure FAST -force_repeat -para_cnt 16
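For context, with the Distribution Monitor these options are usually passed via the `loadArgs` entry of the monitor's properties file. A minimal sketch (the directory paths are placeholders; property names per the Migration Monitor documentation):

```properties
# import_monitor_cmd.properties (sketch; directory paths are placeholders)
importDirs=/export/ABAP
installDir=/import/work
loadArgs=-nolog -c 10000 -loadprocedure FAST -force_repeat -para_cnt 16
jobNum=16
```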


  • Former Member
    Nov 24, 2011 at 12:25 PM

    Hi,

Did you check this note?

    Note 1327874 - FAQ: SAP MaxDB Read Ahead/Prefetch

    consider also parameter:

    MaxTempFilesPerIndexCreation -> 131072 ???

    (Possible values include 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072 up to 1/3 of cache memory size).

    regards

    Stanislav


    • Former Member

      Hi,

Thanks, Stanislav, for your reply and the helpful inputs.

We have assigned 26 GB (3,407,872 pages) as the cache memory size, and MaxTempFilesPerIndexCreation = 131072.

I am not sure about the MaxTempFilesPerIndexCreation value. Per SAP, the possible values are 1024, 2048, 4096, 8192, 16384, 32768, 65536, and 131072, up to 1/3 of the cache memory size, so I set the maximum, 131072.

Can we increase it beyond the current value of 131072, and if so, what should the value be given a 26 GB cache?

Please advise.

      Regards

      Santosh

  • Former Member
    Nov 24, 2011 at 01:41 PM

    Hello,

    description of parameter:

    MaxTempFilesPerIndexCreation(from version 7.7.0.3)

    Number of temporary result files in the case of parallel indexing

    The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.

As for the maximum value, I wouldn't exceed it; for a 26 GB cache, 131072 should be sufficient. I used the same value with a 36 GB CACHE_SIZE.

On the other hand, do you know which task is time-consuming? Is it the table import? Index creation?

Maybe you can run migtime on the import directory to find out.
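For example (a sketch: the time-analysis scripts come from SAP's MIGTIME system copy tools, and the exact script names can vary by release):

```shell
# Sketch: package/table time analysis with SAP's MIGTIME tools.
# Run inside the import dump directory; script names may differ by release.
if [ -x ./import_time.sh ]; then
    ./import_time.sh            # writes import_time.txt / import_time.html
else
    echo "MIGTIME scripts not found in $(pwd)"
fi
# The reports list packages sorted by runtime and separate table load
# from index creation, so the real bottleneck becomes visible.
```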

    Stanislav


  • Former Member
    Nov 24, 2011 at 02:02 PM

    Hello again,

By the way, table splitting on MaxDB isn't recommended at all:

    [Note 1385089 - R3ta for table splitting with MaxDB|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1385089]

    ....At the moment, the tool does not consider the properties of the particular RDBMSs. For MaxDB, this can result in SQL statements that do not split the table in primary key order, which might lead to very long export and import times....

    Regards Stanislav


    • Former Member

      Hi stanislav,

In one of your replies you mentioned that "table splitting on MaxDB isn't recommended at all".

But in our landscape the source database is Oracle and the target is MaxDB. In this case, is it still advisable to split very big tables in the source database and import them into MaxDB?

We split a few big tables in the source DB during the export.

Could you please let me know if splitting big tables during the export is the wrong approach?

      Regards

      Santosh nikam

  • Nov 25, 2011 at 03:33 PM

I would also recommend using the parameter -force_repeat. This will significantly reduce the import time, because the database will not perform a savepoint after each table import (including empty tables) finishes.

    During the import you can check the database activity using

    x_cons <SID> sh ac 1

This command shows the current database activity.
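A few more x_cons views can help narrow down whether the bottleneck is CPU or I/O (a sketch; the SID is a placeholder, and the commands are run on the database host):

```shell
# Sketch: sample MaxDB activity while the import is running.
# Replace MAX with your real database instance name (SID).
SID=${SID:-MAX}
if command -v x_cons >/dev/null 2>&1; then
    x_cons "$SID" show active     # tasks currently doing work
    x_cons "$SID" show io         # read/write counts per volume
    x_cons "$SID" show runnable   # tasks waiting for a free CPU
else
    echo "x_cons not on PATH - run this on the database host"
fi
```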

    Markus


  • Former Member
    Dec 05, 2011 at 09:24 AM

Just to add to what Markus has said:

    Note 1016732 will help.
