
Export speed

Former Member

Hello,

We are trying to export a Content Server database, following SAP Note 962019. As a first step we are testing with a QA system. There is not much load on it, and the DB size is about 200 GB.

What worries me is the speed: about 2 GB/h, or roughly 155 GB since Friday. Is this expected behaviour, or should it be much faster?

The DB is on local disks, version 7.5.0, on 32-bit Windows.

loadercli is running on a remote system, version 7.6.03, on 64-bit Windows.

The command x_cons <SID> show active shows one task, in status 'IO Wait (R)'.

regards

Accepted Solutions (1)

lbreddemann
Active Contributor

Hi Jon,

How many data volumes are configured for the source database?

Please activate DB Analyzer (set the snapshot interval to 120 seconds) and enable time measurement (x_cons <SID> time enable).

What I/O timings do you get from DB Analyzer?

Please also let us know the parameters of the source DB instance.

regards,

Lars

Former Member

Hello Lars,

thank you very much for your help!

Currently there are 11 data volumes, 20GB each.
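(As a quick sanity check, an editorial sketch, assuming MaxDB's 8 KB page size and the DATA_VOLUME_SIZE value of 2,621,440 pages that appears later in the parameter dump, this layout works out to exactly 20 GB per volume and 220 GB of configured data area:)

```python
# Back-of-the-envelope: configured data area from the volume layout.
# Assumes MaxDB's 8 KB page size (PAGESIZE = 8192 in the parameter list).
PAGE_SIZE = 8192                 # bytes per page
pages_per_volume = 2_621_440     # DATA_VOLUME_SIZE_nnnn from the parameter dump
volumes = 11

gb_per_volume = pages_per_volume * PAGE_SIZE / 1024**3
total_gb = gb_per_volume * volumes
print(f"{gb_per_volume:.0f} GB per volume, {total_gb:.0f} GB in total")
# prints: 20 GB per volume, 220 GB in total
```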

I activated DB Analyzer, but I am not sure where to get the I/O timings from. (Sorry, I have never worked with DB Analyzer before!)

A file DBAN_TASK_IO.csv is created, with a column 'AvgRTime_UserPThread - Time (ms) user reads via dev threads'. The entries are between 10 and 14. Is this what you were looking for?

And here is the list of 'General' parameters:

CACHE_SIZE 19252

INSTANCE_TYPE OLTP

KERNELVERSION KERNEL    7.5.0    BUILD 032-123-111-699

LOG_MIRRORED NO

LOG_SEGMENT_SIZE 218453

MAXBACKUPDEVS 1

MAXCPU 2

MAXDATAVOLUMES 12

MAXLOCKS 50000

MAXLOGVOLUMES 2

MAXUSERTASKS 50

MCOD NO

RESTART_SHUTDOWN AUTO

RUNDIRECTORY c:\sapdb\data\wrk\SDB

Will 'Extended' and 'Support' parameters also be needed?

kind regards

lbreddemann
Active Contributor

Hi Joe,

Yes, please also provide the rest of the parameters.

If possible, please post a link where we can download a ZIP-file of the dbanalyzer files.

regards,

Lars

Former Member

Hello Lars,

thanks again for your help.

In the meantime the export has ended: 203 GB took 88 hours.
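(For reference, an editorial back-of-the-envelope using only the figures quoted above: that corresponds to an effective rate of about 2.3 GB/h, in line with the ~2 GB/h estimated at the start of the thread:)

```python
# Effective export throughput from the reported runtime.
db_gb = 203      # exported data volume in GB
hours = 88       # total export runtime

rate = db_gb / hours
print(f"{rate:.2f} GB/h")   # prints: 2.31 GB/h
```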

For a test I started a local export on the source machine, with loadercli 7.5.0; there does not seem to be much difference in speed.

I am not sure if I may post a link, but the ZIP files aren't that big. I am going to send them to the email address from your business card, with the subject line 'SDN, MaxDB: Export speed question'. Is this okay?

I am also struggling with the parameter list: yesterday I managed to copy & paste the general parameters from the Database Manager window, but today I cannot. I will keep trying, but could you perhaps give me a hint about what I am doing wrong? (Sorry for this dumb question!)

kind regards

Former Member

Hello,

in the meantime I tried with dbmcli:

dbmcli on SDB>

param_extgetall

OK

KERNELVERSION string KERNEL 7.5.0 BUILD 032-123-111-699

INSTANCE_TYPE string OLTP

MCOD string NO

ALIGN int 16

USE_FIBERS string NO

RESTART_SHUTDOWN string AUTO

SERVERDBFOR_SAP string YES

_UNICODE string YES

DEFAULT_CODE string ASCII

DATE_TIME_FORMAT string INTERNAL

CONTROLUSERID string DBM

CONTROLPASSWORD crypt

MAXLOGVOLUMES int 2

MAXDATAVOLUMES int 12

LOG_VOLUME_NAME_? string

LOG_VOLUME_NAME_001 string D:\sapdb\SDB\saplog\DISKL001

LOG_VOLUME_TYPE_? string

LOG_VOLUME_TYPE_001 string F

LOG_VOLUME_SIZE_? int

LOG_VOLUME_SIZE_001 int 655360

M_LOG_VOLUME_NAME_? string

M_LOG_VOLUME_TYPE_? string

DATA_VOLUME_NAME_? string

DATA_VOLUME_NAME_0011 string D:\sapdb\SDB\sapdata\DISKD0011

DATA_VOLUME_NAME_0010 string F:\sapdb\SDB\sapdata\DISKD0010

DATA_VOLUME_NAME_0009 string F:\sapdb\SDB\sapdata\DISKD0009

DATA_VOLUME_NAME_0008 string F:\sapdb\SDB\sapdata\DISKD0008

DATA_VOLUME_NAME_0007 string F:\sapdb\SDB\sapdata\DISKD0007

DATA_VOLUME_NAME_0006 string F:\sapdb\SDB\sapdata\DISKD0006

DATA_VOLUME_NAME_0005 string F:\sapdb\SDB\sapdata\DISKD0005

DATA_VOLUME_NAME_0004 string F:\sapdb\SDB\sapdata\DISKD0004

DATA_VOLUME_NAME_0003 string F:\sapdb\SDB\sapdata\DISKD0003

DATA_VOLUME_NAME_0002 string F:\sapdb\SDB\sapdata\DISKD0002

DATA_VOLUME_NAME_0001 string F:\sapdb\SDB\sapdata\DISKD0001

DATA_VOLUME_TYPE_? string

DATA_VOLUME_TYPE_0011 string F

DATA_VOLUME_TYPE_0010 string F

DATA_VOLUME_TYPE_0009 string F

DATA_VOLUME_TYPE_0008 string F

DATA_VOLUME_TYPE_0007 string F

DATA_VOLUME_TYPE_0006 string F

DATA_VOLUME_TYPE_0005 string F

DATA_VOLUME_TYPE_0004 string F

DATA_VOLUME_TYPE_0003 string F

DATA_VOLUME_TYPE_0002 string F

DATA_VOLUME_TYPE_0001 string F

DATA_VOLUME_SIZE_? int

DATA_VOLUME_SIZE_0011 int 2621440

DATA_VOLUME_SIZE_0010 int 2621440

DATA_VOLUME_SIZE_0009 int 2621440

DATA_VOLUME_SIZE_0008 int 2621440

DATA_VOLUME_SIZE_0007 int 2621440

DATA_VOLUME_SIZE_0006 int 2621440

DATA_VOLUME_SIZE_0005 int 2621440

DATA_VOLUME_SIZE_0004 int 2621440

DATA_VOLUME_SIZE_0003 int 2621440

DATA_VOLUME_SIZE_0002 int 2621440

DATA_VOLUME_SIZE_0001 int 2621440

DATA_VOLUME_MODE_? string NORMAL

DATA_VOLUME_MODE_0011 string NORMAL

DATA_VOLUME_MODE_0010 string NORMAL

DATA_VOLUME_MODE_0005 string NORMAL

DATA_VOLUME_MODE_0004 string NORMAL

DATA_VOLUME_MODE_0003 string NORMAL

DATA_VOLUME_MODE_0002 string NORMAL

DATA_VOLUME_MODE_0001 string NORMAL

DATA_VOLUME_MODE_0009 string NORMAL

DATA_VOLUME_MODE_0008 string NORMAL

DATA_VOLUME_MODE_0007 string NORMAL

DATA_VOLUME_MODE_0006 string NORMAL

DATA_VOLUME_GROUPS int 1

LOG_BACKUP_TO_PIPE string NO

MAXBACKUPDEVS int 1

BACKUP_BLOCK_CNT int 8

LOG_MIRRORED string NO

MAXVOLUMES int 15

MULTIO_BLOCK_CNT int 4

DELAYLOGWRITER int 0

LOG_IO_QUEUE int 50

RESTARTTIME int 600

MAXUKPS int 1024

MAXCPU int 2

MAXUSERTASKS int 50

TRANSRGNS int 8

TABRGNS int 8

OMSREGIONS int 0

OMSRGNS int 25

OMS_HEAP_LIMIT int 0

OMS_HEAP_COUNT int 2

OMS_HEAP_BLOCKSIZE int 10000

OMS_HEAP_THRESHOLD int 100

OMS_VERS_THRESHOLD int 2097152

HEAP_CHECK_LEVEL int 0

ROWRGNS int 8

IOSERVERTASKS int

MINSERVER_DESC int 16

MAXSERVERTASKS int 19

ESTIMMAXTRANS int 3

_MAXTRANS int 284

LOCKREGIONS int

MAXLOCKS int 50000

LOCKSUPPLY_BLOCK int 100

DEADLOCK_DETECTION int 4

LOCKSUPPLY_SEGMS int

ROWLOCKHASHLIST int

TABLOCKHASHLIST int

TRANSENTRIES int

TRANSHASHLIST int

SESSION_TIMEOUT int 900

OMS_STREAM_TIMEOUT int 30

REQUEST_TIMEOUT int 5000

USEASYNC_IO string YES

IOPROCSPER_DEV int 1

IOPROCSFOR_PRIO int 1

USEIOPROCS_ONLY string NO

IOPROCSSWITCH int 2

UKPS int 5

USERPER_UKT int

SERVERPER_UKT int

LRU_FOR_SCAN string NO

PAGESIZE int 8192

PACKETSIZE int 36864

MINREPLYSIZE int 4096

MBLOCKDATA_SIZE int 32768

MBLOCKQUAL_SIZE int 16384

MINMBLOCKSTACKSIZE int

MBLOCKSTACK_SIZE int 32768

MBLOCKSTRAT_SIZE int 8192

WORKSTACKSIZE int 8192

WORKDATASIZE int 4096

MINCATCACHESIZE int

CATCACHE_MINSIZE int 262144

CAT_CACHE_SUPPLY int 3264

INIT_ALLOCATORSIZE int 233472

ALLOW_MULTIPLE_SERVERTASK_UKTS string NO

TASKCLUSTER01 string tw;al;ut;2000*sv,100*bup;10*ev,10*gc;

TASKCLUSTER02 string ti,100*dw;25*us;

TASKCLUSTER03 string equalize

MPRGN_QUEUE string YES

MPRGN_DIRTY_READ string YES

MPRGN_BUSY_WAIT string YES

MPDISP_LOOPS int 1

MPDISP_PRIO string YES

XP_MP_RGN_LOOP int 0

MP_RGN_LOOP int 100

MPRGN_PRIO string YES

MAXRGN_REQUEST int 3000

PRIOBASE_U2U int 100

PRIOBASE_IOC int 80

PRIOBASE_RAV int 80

PRIOBASE_REX int 40

PRIOBASE_COM int 10

PRIOFACTOR int 80

DELAYCOMMIT string NO

SVP1_CONV_FLUSH string NO

MAXGARBAGECOLL int 0

MAXTASKSTACK int 400

MAX_SERVERTASK_STACK int 100

MAX_SPECIALTASK_STACK int 100

DWIO_AREA_SIZE int 50

DWIO_AREA_FLUSH int 50

FBM_VOLUME_COMPRESSION int 50

FBM_VOLUME_BALANCE int 10

FBMLOW_IO_RATE int 10

CACHE_SIZE int 19252

DWLRU_TAIL_FLUSH int 25

XP_DATA_CACHE_RGNS int 0

DATACACHE_RGNS int 16

XP_CONVERTER_REGIONS int 0

CONVERTER_REGIONS int 32

XP_MAXPAGER int 0

MAXPAGER int 32

DYND_B12_DW_IO_BUF int

SEQUENCE_CACHE int 1

IDXFILELIST_SIZE int 2048

DYNP_B49_IDXSIZE int

SERVERDESC_CACHE int 72

SERVERCMD_CACHE int 20

VOLUMENO_BIT_COUNT int 8

SAVE_QUEUE_SIZE int

AUTOSAVE_QUEUE int

PAGES_PER_DATACACH int

B20_DATACACHE_HEAD int

B20_DATACACHE_PIDQ int

B20_DATACACHE_CB int

OPTIM_MAX_MERGE int 500

OPTIM_INV_ONLY string YES

OPTIM_CACHE string NO

OPTIM_JOIN_FETCH int 0

JOIN_SEARCH_LEVEL int 0

JOIN_MAXTAB_LEVEL4 int 16

JOIN_MAXTAB_LEVEL9 int 5

READAHEADBLOBS int 32

RUNDIRECTORY string c:\sapdb\data\wrk\SDB

_KERNELDIAGFILE string knldiag

KERNELDIAGSIZE int 800

_EVENTFILE string knldiag.evt

_EVENTSIZE int 0

_MAXEVENTTASKS int 0

_MAXEVENTS int 100

_KERNELTRACEFILE string knltrace

TRACE_PAGES_TI int 2

TRACE_PAGES_GC int 0

TRACE_PAGES_LW int 5

TRACE_PAGES_PG int 3

TRACE_PAGES_US int 10

TRACE_PAGES_UT int 5

TRACE_PAGES_SV int 5

TRACE_PAGES_EV int 2

TRACE_PAGES_BUP int 0

_MAXBACKUPTASKS int

KERNELTRACESIZE int 704

EXTERNAL_DUMP_REQUEST string NO

AKDUMP_ALLOWED string YES

_KERNELDUMPFILE string knldump

_RTEDUMPFILE string rtedump

UTILITYPROTFILE string dbm.utl

UTILITY_PROTSIZE int 100

BACKUPHISTFILE string dbm.knl

BACKUPMED_DEF string dbm.mdf

MAXMESSAGE_FILES int 0

DYND_B20_DATACACHE int

DYND_K38_AUTOSAVE int

DYND_K38_SAVE int

DYND_K57_KB_PAGES int

DYNP_A51_CACHE int

EVENTALIVE_CYCLE int 0

DYNP_A42_PARSEID int

DYNP_B20_DATACACHE int

DYNP_K38_AUTOSAVE int

DYNP_K38_SAVE_GLOB int

K51LOCK_AUX1 int

K51LOCK_AUX2 int

DYNP_K51_LOCK_LIST int

DYNP_K90_SERVER int

DYNP_B930_HISTORY int

_SHAREDDYNDATA int 19569

_SHAREDDYNPOOL int 15365

USE_MEM_ENHANCE string NO

MEM_ENHANCE_LIMIT int 0

__PARAM_CHANGED___ int 0

__PARAM_VERIFIED__ string 2008-06-14 09:23:49

DIAG_HISTORY_NUM int 2

DIAG_HISTORY_PATH string c:\sapdb\data\wrk\SDB\DIAGHISTORY

DIAGSEM int 1

SHOW_MAX_STACK_USE string NO

LOG_SEGMENT_SIZE int 218453

_COMMENT string

MINI_DUMP string NORMAL

FORMATTING_MODE string PARALLEL

FORMAT_DATAVOLUME string YES

OFFICIAL_NODE string

HIRES_TIMER_TYPE string CPU

LOAD_BALANCING_CHK int 0

LOAD_BALANCING_DIF int 10

LOAD_BALANCING_EQ int 5

HS_STORAGE_DLL string libhsscopy

HS_NODE_? string

HS_DELAY_TIME_? int

HS_SYNC_INTERVAL int 50

USE_OPEN_DIRECT string NO

SYMBOL_DEMANGLING string NO

EXPAND_COM_TRACE string NO

OPTIMIZE_OPERATOR_JOIN_COSTFUNC string YES

OPTIMIZE_JOIN_PARALLEL_SERVERS int 0

OPTIMIZE_JOIN_OPERATOR_SORT string YES

OPTIM_FIRST_ROWS string NO

OPTIMIZE_JOIN_OUTER string YES

JOIN_OPERATOR_IMPLEMENTATION string YES

JOIN_TABLEBUFFER int 128

OPTIMIZE_FETCH_REVERSE string YES

SET_VOLUME_LOCK string YES

SHAREDSQL string NO

SHAREDSQL_EXPECTEDSTATEMENTCOUNT int 1500

SHAREDSQL_COMMANDCACHESIZE int 32768

MEMORY_ALLOCATION_LIMIT int 0

USE_SYSTEM_PAGE_CACHE string YES

USE_COROUTINES string YES

MIN_RETENTION_TIME int 60

MAX_RETENTION_TIME int 480

MAX_SINGLE_HASHTABLE_SIZE int 512

MAX_HASHTABLE_MEMORY int 5120

HASHED_RESULTSET string NO

HASHED_RESULTSET_CACHESIZE int 262144

AUTO_RECREATE_BAD_INDEXES string NO

LOCAL_REDO_LOG_BUFFER_SIZE int 0

FORBID_LOAD_BALANCING string NO

SYMBOL_RESOLUTION string YES

PREALLOCATE_IOWORKER string NO

---

dbmcli on SDB>

regards

lbreddemann
Active Contributor

Hi Jo,

got your email.

Not much to say from the DB Analyzer logs this time.

There are just two kinds of general warnings here:

1. Data Cache Hitrate is low.

That is of course normal for a database export, where we basically have only first-time reads of each page and never revisit the same page.

2. Same command running for xxxx seconds.

Also nothing to be concerned about here: it's the export task, which runs for a long time.

Concerning the I/O times: an average of more than 12 ms per read is not that good.

Perhaps you can improve this by setting USE_DIRECT_IO = YES.
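(A rough plausibility check, added editorially; the single-task, one-page-per-round-trip model is an assumption, not something measured in the thread. If one export task reads 8 KB pages synchronously and each read takes the ~12 ms reported by DB Analyzer, the implied throughput lands almost exactly on the observed ~2.3 GB/h:)

```python
# Throughput implied by synchronous single-task page reads.
PAGE_SIZE = 8192      # bytes per MaxDB page (PAGESIZE parameter)
avg_read_ms = 12      # AvgRTime_UserPThread from DBAN_TASK_IO.csv

reads_per_sec = 1000 / avg_read_ms
bytes_per_sec = reads_per_sec * PAGE_SIZE
gb_per_hour = bytes_per_sec * 3600 / 1024**3
print(f"{gb_per_hour:.2f} GB/h")   # prints: 2.29 GB/h
```

Under this assumption the export is latency-bound on single-page reads rather than limited by disk bandwidth, which is also why lowering per-read latency or running several export jobs in parallel can help.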

If you want to do some experiments, you may also check whether you can split the export into several smaller jobs. If you have more than one Content Server repository, then you can export the tables for each repository in its own job.

This is not documented in a note, but it can easily be done by reading the documentation.

regards,

Lars

roland_mallmann
Advisor

Hi there,

I don't want to hijack this thread, but a CACHE_SIZE of only 19252 pages (around 150 MB) with such a DB size (200 GB) is not really advisable. I'd suggest increasing the CACHE_SIZE to at least 2 GB, provided there's enough free RAM available on the machine. Please note that a change to this parameter only takes effect after a restart of the DB instance.
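(The page arithmetic behind these numbers, as an editorial sketch; the 2 GB target is the suggested minimum from this post, and CACHE_SIZE is counted in 8 KB pages:)

```python
# CACHE_SIZE is given in 8 KB pages, not bytes.
PAGE_SIZE = 8192
cache_pages = 19252                       # current CACHE_SIZE from the parameter list
cache_mb = cache_pages * PAGE_SIZE / 1024**2
print(f"current cache: ~{cache_mb:.0f} MB")          # prints: current cache: ~150 MB

target_pages = 2 * 1024**3 // PAGE_SIZE   # pages needed for a 2 GB cache
print(f"CACHE_SIZE for 2 GB: {target_pages} pages")  # prints: CACHE_SIZE for 2 GB: 262144 pages
```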

Changing the CACHE_SIZE will not have a big influence on your export, though, but it does affect the general performance of the DB. Also make sure that the target DB has a much larger CACHE_SIZE configured.

Regards,

Roland

roland_mallmann
Advisor

Hi again,

If you want us to fully investigate the issue, I'd really suggest opening an official OSS message.

Optimally, we would get direct operating system access to the source system.

Important: make sure you also mention this thread in your message.

Regards,

Roland

P.S.: I take it you already checked if you could use a backup/recovery process to copy your DB to another machine?

Former Member

Hello Lars and Roland,

thanks for your help!

Here a few words about what we are actually trying to achieve:

Our goal is to migrate to more powerful hardware; this includes moving from 32-bit to 64-bit. We want to stay on Windows.

Upgrading to a recent version of MaxDB is an additional goal we would like to achieve in the same step, if possible.

So at first we thought of a heterogeneous system copy. But seeing the long runtimes, we are now looking for other ways.

A backup/recovery process seems more feasible now. We found out that switching from 32-bit to 64-bit still counts as a homogeneous system copy. However, we will have to upgrade the source system to 7.6.03 first (or possibly install a target system with the old release 7.5.0 - not sure if this is available for 64-bit).

Well, I have never done an upgrade before (I am a bit new to MaxDB), but I did do a homogeneous system copy half a year ago, when the QA system was created - the very same system that we are now trying to migrate. The restore runtime for the 200 GB was somewhere between 15 and 20 hours. Much better than an export, not to mention the (still untested) import!
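(Comparing the two approaches with the numbers from this thread, as editorial arithmetic; the restore range is the 15-20 hours quoted above:)

```python
# Export vs. backup/restore throughput, from figures in this thread.
export_rate = 203 / 88                        # GB/h, measured export runtime
restore_rates = [200 / h for h in (20, 15)]   # GB/h for the 15-20 h restore range

print(f"export:  {export_rate:.1f} GB/h")                           # prints: export:  2.3 GB/h
print(f"restore: {restore_rates[0]:.1f} to {restore_rates[1]:.1f} GB/h")  # prints: restore: 10.0 to 13.3 GB/h
print(f"restore is at least {restore_rates[0] / export_rate:.1f}x faster")  # prints: restore is at least 4.3x faster
```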

What do you think about these plans?

And do you have any idea about upgrade runtimes for a 200 GB database?

kind regards

lbreddemann
Active Contributor

Hi Joe,

if you can avoid export/import, then please do avoid it.

It's always much slower than backup/restore.

A MaxDB upgrade really is a 'walk in the garden' compared with most other DBMSs supported by SAP.

Simply use SDBSETUP or SDBUPD - that's it.

I would propose to do the upgrade first, then perform the system copy via backup/restore.

In any case, I wouldn't consider using 7.6.03 anymore; it's already outdated!

Use 7.6.05 instead!

best regards,

Lars

P.S.

We would still be interested in learning what caused the slow export.

So if you want to pursue that, go ahead and open a support message.

Former Member

Hello Lars,

thanks again. So we will go the backup/restore way.

And it's good news that a MaxDB upgrade should be easier than, e.g., an Oracle upgrade. I have done that a few times and found it quite time-consuming, especially because of the many SAP Notes that have to be read and patches that have to be applied.

As for researching the slow export: unfortunately our leasing contract is ending, and we have to return that old box this Friday. So there will not be enough time, I'm afraid.

Luckily, on the other hand, our users will not need the new QA system immediately. It will be sufficient if we create a new QA system at the beginning of December, and we are free to create it either as a copy of the old QA system or of the old production system.

And Prod will have to be migrated afterwards.

So for me the next thing to be done will be an upgrade test on the old box.

best regards
