
ASE 15.7 on Solaris 11 with ZFS

Former Member
0 Kudos

Hi Folks,

Wanted to know if anybody can share their experience with the above combination installation.

We are trying to test it out and are getting some really skewed results.

Are there any tuning parameters for ZFS which needs to be taken care of for ASE 15.7 or vice versa? We see some really slow IO read times at the OS level.

Accepted Solutions (1)

kevin_sherlock
Contributor
0 Kudos

Recordsize

--------------

We experimented and found that recordsize can affect some performance aspects of ZFS and database usage.  Check your settings for recordsize.  16KB on DATA volumes seems like a reasonable compromise for us, using a 2K ASE page size.

A larger recordsize for transaction logs (64KB, half of ZFS's 128KB default) should better support the sequential IO nature of tran logs, while saving some controller bandwidth on the frequent writes done to the log.

Hard to be definitive without testing, but settling for the out-of-the-box default of 128KB for everything is just a waste.  It needs to be smaller for DBs, imho.
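For example (the pool and filesystem names below are placeholders for whatever you actually have), recordsize is set per filesystem with the zfs command, and it only applies to blocks written after the change, so set it before loading the ASE devices:

  # 16K records on the data filesystem (2K ASE page size)
  zfs set recordsize=16K dbpool/ase_data
  # 64K records on the transaction log filesystem
  zfs set recordsize=64K logpool/ase_log
  zfs get recordsize dbpool/ase_data logpool/ase_log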

ZFS Pools

------------

The ideal situation is to have one pool for each filesystem.  If not, the “*DATA” filesystems can be created from one main ZFS pool and the “*LOG” filesystems from another log ZFS pool. Again, though, we like ZFS pools to be separate, and even backed by different RAID groups (separate spindles) on the SAN.  At the very least, we like to have our DATA and LOGS on separate RAID groups on the SAN.
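A rough sketch of that layout, with made-up pool and LUN device names (your SAN LUN names will differ), keeping the data and log pools on LUNs from different RAID groups:

  # data pool on LUNs carved from RAID group A
  zpool create dbpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
  # log pool on LUNs carved from RAID group B
  zpool create logpool c3t0d0 c3t1d0
  zfs create dbpool/ase_data
  zfs create logpool/ase_log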

Device IO Queue Size settings

---------------------------------

By default, the queue depth (outstanding I/Os per device) is set to 10 in ZFS.   Rather than tune this setting, we ask for roughly as many LUNs in a ZFS pool as there are backing spindles in the SAN RAID group. So, if the backing RAID group has 8 spindles, it would be nice to have 6 to 8 LUNs carved from it and assigned to the pool (rather than one or a few large LUNs comprising the space).  This gives ZFS better queueing capacity for outstanding I/Os to the pool.
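One way to sanity-check whether per-LUN queue depth is the bottleneck is to watch the active queue per device while the workload runs; iostat on Solaris reports this along with service times:

  # 5-second samples, extended stats, skip idle devices
  iostat -xnz 5
  # actv = outstanding I/Os per LUN; if it sits near the per-device
  # limit with high asvc_t, more LUNs in the pool may help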

RAIDZ (not)

---------------

RAID-Z is not a good fit for random reads.  We’d prefer a straight n-mirrored configuration over RAID-Z for ZFS pools.  Anything other than RAID-Z if possible.
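As a sketch (placeholder device names again), a mirrored pool is built from mirror vdevs rather than a raidz vdev:

  # two-way mirrored pairs instead of raidz
  zpool create dbpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
  zpool status dbpool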

Filesystem DBMS Isolation

-------------------------------

Again, we’d like to have nothing but DBMS device files exist on these filesystems, and separate filesystems used to store application files.

ARC Cache Limits

---------------------

We recommend that the ARC cache be limited so there is memory available for applications, and even for ASE, to grow into. It is still wise to impose a reasonable limit on ZFS so that other processes can obtain sufficient memory when needed, avoiding the performance problems and possible outage scenarios we have experienced.
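On Solaris the usual way to cap the ARC is zfs_arc_max in /etc/system, followed by a reboot. The 8 GB value below is purely an example; size it so ASE's max memory plus other processes still fit in physical RAM:

  * /etc/system -- cap the ZFS ARC at 8 GB (example value only)
  set zfs:zfs_arc_max = 0x200000000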


Testing is of course the standard recommendation, but these are just some food for thought suggestions that we implemented on our systems running ZFS.

Former Member
0 Kudos

Along with what Kevin said, Oracle has a number of tuning white papers for Oracle RDBMS on ZFS.  Most should translate to ASE with little to no modification, as long as you keep in mind that the block sizes for ASE are considerably smaller.

Tuning ZFS for Database Products - Oracle Solaris 11.1 Tunable Parameters Reference Manual

As always, don't deduplicate the ZFS volumes; use compression instead and you'll get better throughput (hint: use lzjb or lz4 compression if available).
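For instance (the filesystem name is illustrative, and lz4 only if your Solaris 11 release supports it):

  zfs set compression=lz4 dbpool/ase_data
  # or, on releases without lz4:  zfs set compression=lzjb dbpool/ase_data
  zfs set dedup=off dbpool/ase_data
  # check the payoff after some data has been written
  zfs get compressratio dbpool/ase_data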

Answers (3)

Former Member
0 Kudos

We did play around with almost all the options listed above by Kevin (recordsize, ARC cache, RAID levels), but we still didn't see any improvement in the disk I/O.

We didn't really look into the compression aspect that Jason pointed out, but I guess we'll be moving to Linux at this point, as we're clearly not getting anywhere on this and we're running out of time.

Former Member
0 Kudos

You haven't said what IO times you're having.

What is the time for a page read at its fastest and slowest?

Are you using SAN or local disk?

Former Member
0 Kudos

Also, which RAID levels are used and how many disks are involved makes a difference too.  High-volume databases like tempdb should be on local disk if possible, or on SSD if in ZFS volumes.

former_member229302
Participant
0 Kudos

Make sure you have these patches installed.

These patches address an Oracle bug for Solaris:

  • Solaris 10 SPARC – 148888-03
  • Solaris 10 x86/x64 – 148889-03
  • Solaris 11 – the latest SRU containing the fix for Oracle Bug 16054425 <<<<<<

See the Solaris config param "Async I/O mode" at this link: SyBooks Online

This isn't specific to ZFS.

solaris async i/o mode:


Allows you to select various asynchronous IO modes on the Solaris platform. This parameter is effective if Adaptive Server is running in threaded kernel mode. This parameter is static, so it takes effect after restarting Adaptive Server.

0 – (Default) Use this mode if the Solaris patch containing the fix for Oracle BugID 16054425 is not installed. You may see sub-optimal IO performance.

1 – (Recommended) You must have the Solaris patch containing the fix for Oracle BugID 16054425 installed.

Install the following Oracle patch for your platform:

  • For Solaris 10 SPARC: 148888-03
  • For Solaris 10 x86/x64: 148889-03
  • For Solaris 11, latest SRU containing fix for Oracle Bug 16054425

Note: If solaris async i/o mode is set to 1 without the patch for Oracle BugID 16054425, Adaptive Server may report 694 or 823 errors and require restarting the server. Oracle Bug 15868517 refers to the backport of Oracle Bug 16054425 for S10U11.
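Once the patch is installed, the parameter is set with sp_configure in the usual way; it is static, so it only takes effect after an ASE restart:

  -- requires the Solaris patch containing the fix for Oracle Bug 16054425
  sp_configure 'solaris async i/o mode', 1
  go
  -- then restart Adaptive Server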

Former Member
0 Kudos

> We see some really slow IO read times at the OS level.


What times are you seeing for a physical page read when it's slow and when it's quick?

What size are your pages?

Are you running process or thread model?

Are you using SAN or local disks? (If SAN, does it have dynamic tiering?)