Former Member

Database dump is too slow

I have recently migrated from ASE 12 to ASE 15.7 on RHEL 6.4 (RAID 6). My database size is only around 5 GB, yet a dump to a local disk takes almost 1 hour. I am not using any compression levels. Can anybody explain the reasons?


3 Answers

  • Oct 07, 2015 at 11:44 AM

    Database dumps are disk intensive operations ... heavy reading from the database device(s) ... heavy writing to the dump device(s).

    If the dump is running slow then you'll want to take a look at the disk subsystems (including filesystem mount settings/configs) to see where the delays are occurring.

    How long did the same dump take for ASE 12?

    Is/Was ASE 12 running on the same linux host, same disk subsystem, same filesystem configs?
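One rough way to localize the delay (a sketch only; the paths below are placeholders for your actual dump directory and database device file) is to benchmark sequential throughput of the source and target filesystems with dd:

```shell
# Sequential-write speed of the dump target; the reported MB/s is a ceiling
# for dump throughput. conv=fsync forces the data to disk before dd exits.
dd if=/dev/zero of=/dumpdir/ddtest.bin bs=1M count=1024 conv=fsync

# Sequential-read speed of a database device file (add iflag=direct where
# the filesystem supports it, to bypass the page cache).
dd if=/data/master.dat of=/dev/null bs=1M count=1024

rm -f /dumpdir/ddtest.bin
```

If either number is far below what the hardware should deliver, the disk subsystem, not ASE, is the bottleneck.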


....and the RAID config. RAID 6 is notoriously slow: its double-parity calculation makes every write expensive, and the dump has to write a ton of data to the dump device. RAID 6 commonly shows service times >20 ms/IO.

I would also check whether the dump directory is being synchronously block-replicated by the storage array. We have seen huge impacts from synchronous block replication on dumps/loads even at short distances (<10 km) when the link bandwidth is far below what the IO subsystem can drive (e.g. a 1 Gbps link provisioned as two 512 Mbps channels, one in each direction).

    Former Member
    Oct 08, 2015 at 04:22 AM

ASE 12 was on Solaris 10 and the dump took only 7 minutes. I recently migrated to RHEL and ASE 15.7. The local disk is in the same RAID group as the database devices. The data cache size has been increased to 20 GB.


If you are on RHEL, or *any* Linux, the biggest problem you have is getting the IO subsystem out of the way ... it was written for a laptop with 4800 rpm drives.

      Step 1 - set the IO scheduler to noop

Step 2 - raise nr_requests from the default (128) to 1024

Step 3 - tune the file system. In this case, unfortunately, you are dumping to the same RAID group - and, I suspect, the same filesystem. Normally we would suggest turning off journaling as a key filesystem tuning step, since a DBMS using DIRECTIO typically doesn't need it; but because you are doing DB dumps to that same filesystem, you are being burned twice. There are a few things you can do with mount options, but they are unlikely to make a big difference. One that might, if you are using filesystem data devices, is to set the readahead buffer quite large.

      Step 4 - get OFF RAID 6. It should only be used by people who don't care about performance and can afford to wait all day.
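The first three steps above can be sketched as follows (run as root; sdX is a placeholder for the device backing your RAID group, and the values are starting points rather than tested recommendations):

```shell
# Step 1: switch the IO scheduler to noop (no kernel-side reordering;
# let the RAID controller do its own scheduling).
echo noop > /sys/block/sdX/queue/scheduler

# Step 2: deepen the request queue from the default of 128.
echo 1024 > /sys/block/sdX/queue/nr_requests

# Step 3 (for filesystem data devices): raise the readahead window.
# The value is in 512-byte sectors, so 16384 = 8 MB.
blockdev --setra 16384 /dev/sdX
```

Note that these settings do not survive a reboot; once they prove out, persist them via a udev rule or an init script.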

  • Oct 09, 2015 at 02:58 PM

First, identify the devices involved in the backup (the database devices and the dump target), then run iostat on them every 5 seconds with the following switches, redirecting the output to a file:

iostat -x -t <dev1> <dev2> ... <devn> 5 | tee logfile.iostat

Use regexps on logfile.iostat to find samples where the svctm column exceeds 10 ms. That indicates the IOs issued by ASE for the backup are not being serviced promptly by the host IO subsystem, and you will need to tune the device queues accordingly. On Linux, nr_requests defaults to 128; check it at /sys/block/sda/queue/nr_requests and raise it until svctm stays below 10 ms. If that does not resolve the issue, you need to check with your Storage Admin.
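For example, a minimal awk filter along those lines (assuming sysstat's extended format, where svctm is the second-to-last column, just before %util) that prints only the slow samples:

```shell
# Print device lines whose svctm (second-to-last field) exceeds 10 ms.
# Adjust the device-name pattern and field index if your iostat version
# names devices or orders columns differently.
awk '/^(sd|dm|vd)/ && $(NF-1) + 0 > 10 { print }' logfile.iostat
```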

    Hope this helps.
