on 06-22-2015 4:51 PM
Hi Folks,
Wanted to know if anybody can share their experience with the above combination installation.
We are trying to test it out and are getting some really skewed results.
Are there any tuning parameters for ZFS which need to be taken care of for ASE 15.7, or vice versa? We see some really slow IO read times at the OS level.
Recordsize
--------------
We experimented and found that recordsize can affect several performance aspects of ZFS under database usage, so check your recordsize settings. 16 KB on DATA volumes seems like a reasonable compromise for us, using a 2 KB ASE page size.
A larger recordsize for transaction logs (64 KB) is still half the ZFS default (128 KB), but should better support the sequential IO nature of tran logs, while saving some controller bandwidth on the frequent writes done to transaction logs.
Hard to say any of this without testing, but settling for the out-of-the-box default of 128 KB for everything is just a waste. It needs to be smaller for DBs, imho.
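As a sketch of the above (pool and filesystem names are hypothetical placeholders, not from the original post), the per-filesystem settings might look like:

```shell
# Hypothetical pool/filesystem names -- substitute your own.
# 16K recordsize on the DATA filesystem (compromise for a 2K ASE page size)
zfs set recordsize=16K datapool/sybdata

# 64K recordsize on the LOG filesystem for the mostly sequential tran log IO
zfs set recordsize=64K logpool/syblog

# Verify the settings
zfs get recordsize datapool/sybdata logpool/syblog
```

Note that recordsize only applies to blocks written after the change, so set it before loading the database devices.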
ZFS Pools
------------
The ideal situation is to have one pool for each filesystem. If not, the “*DATA” filesystems can be created from one main ZFS pool, and the “*LOG” filesystems from another, log-only ZFS pool. Again, though, we like ZFS pools to be separate, and even backed by different RAID groups (separate spindles) on the SAN. At the very least, we like to have our DATA and LOGS on separate RAID groups on the SAN.
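A minimal sketch of that layout, assuming hypothetical LUN device names (use the LUNs your SAN actually presents):

```shell
# DATA pool from LUNs on one RAID group
zpool create datapool c2t0d0 c2t1d0
# LOG pool from LUNs on a different RAID group (separate spindles)
zpool create logpool c3t0d0 c3t1d0

# One filesystem per purpose, each on its own pool
zfs create datapool/sybdata
zfs create logpool/syblog
```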
Device IO Queue Size settings
---------------------------------
By default, the queue size (outstanding IOs) per device is set to 10 in ZFS. Rather than tune this setting, we asked for roughly as many LUNs to make up a ZFS pool as there are backing spindles in the SAN RAID group. So, if the backing RAID group has 8 spindles, it is nice to have 6 to 8 LUNs carved from it and assigned to the pool (rather than just a few, or one, large LUN comprising the space). This gives ZFS better queueing capacity for outstanding IOs to the pool.
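To illustrate (device names are placeholders), a pool striped across eight LUNs from an 8-spindle RAID group:

```shell
# Eight LUNs carved from the same 8-spindle RAID group, all in one pool.
# With ~10 outstanding IOs queued per device, the pool can keep roughly
# 80 IOs in flight, versus ~10 if the space were one large LUN.
zpool create datapool \
  c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
  c2t4d0 c2t5d0 c2t6d0 c2t7d0
```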
RAIDZ (not)
---------------
RAID-Z is not a good fit for random reads. We’d prefer a straight n-mirrored configuration over RAID-Z for ZFS pools. Anything other than RAID-Z if possible.
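For example (placeholder devices), a striped-mirror pool instead of RAID-Z:

```shell
# Preferred: a stripe of mirrors -- a random read can be served by either
# side of a mirror, whereas a RAID-Z read involves the whole stripe.
zpool create datapool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# Avoided: zpool create datapool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
```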
Filesystem DBMS Isolation
-------------------------------
Again, we’d like to have nothing but DBMS device files exist on these filesystems, and separate filesystems used to store application files.
ARC Cache Limits
---------------------
We recommend that the ARC cache be limited so there is memory available for applications, and for ASE to grow into. We think it is wise to impose a reasonable limit on ZFS so that other processes can obtain sufficient memory when needed, avoiding the performance problems and possible outage scenarios we had experienced.
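On Solaris the ARC cap is a /etc/system tunable, applied at the next reboot; a sketch (the 4 GB figure is purely illustrative, size it for your own memory budget):

```
* /etc/system -- cap the ZFS ARC at 4 GB (0x100000000 bytes).
* Illustrative value only: leave headroom for ASE and other applications.
set zfs:zfs_arc_max = 0x100000000
```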
Testing is of course the standard recommendation, but these are just some food for thought suggestions that we implemented on our systems running ZFS.
Along with what Kevin said, Oracle has a number of tuning white papers for Oracle RDBMS on ZFS. Most should translate to ASE with little to no modification, as long as you keep in mind that the block sizes for ASE are considerably smaller.
Tuning ZFS for Database Products - Oracle Solaris 11.1 Tunable Parameters Reference Manual
As always, don't deduplicate the ZFS volumes; use compression instead and you'll get better throughput. (Hint: use lzjb or lz4 compression if available.)
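A sketch of that advice, with a hypothetical filesystem name:

```shell
# Compression on, dedup off (filesystem name is a placeholder)
zfs set compression=lz4 datapool/sybdata   # fall back to lzjb if lz4 is absent
zfs set dedup=off datapool/sybdata

# Check how well the data compresses
zfs get compression,compressratio datapool/sybdata
```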
We did play around with almost all the options listed above by Kevin (recordsize, ARC cache, RAID levels), but we still didn't see any improvement in the disk IOs.
We didn't really look into the compression aspect which Jason pointed out, but I guess we will be moving to Linux at this point, as we are clearly not getting anywhere on this and we are running out of time.
Make sure you have these patches installed.
These patches address an Oracle bug for Solaris:
See the Solaris config param "Async I/O mode" at this link: SyBooks Online
This isn't specific to ZFS.
solaris async i/o mode:
Allows you to select various asynchronous IO modes on the Solaris platform. This parameter is effective if Adaptive Server is running in threaded kernel mode. The parameter is static, so it takes effect after restarting Adaptive Server.
0 – (Default) Use this mode if the Solaris patch containing the fix for Oracle BugID 16054425 is not installed. You may see sub-optimal IO performance.
1 – (Recommended) You must have the Solaris patch containing the fix for Oracle BugID 16054425 installed.
Install the following Oracle patch for your platform:
Note: If solaris async i/o mode is set to 1 without the patch for Oracle BugID 16054425, Adaptive Server may report 694 or 823 errors and require restarting the server. Oracle Bug 15868517 refers to backport of Oracle Bug 16054425 for S10U11.
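Once the patch is on, enabling the mode from isql might look like this (server name and login are placeholders; sp_configure is the standard ASE mechanism):

```shell
# Run only after the fix for Oracle BugID 16054425 is installed.
isql -Usa -SMYSERVER <<'EOF'
sp_configure 'solaris async i/o mode', 1
go
EOF
# The parameter is static: restart Adaptive Server for it to take effect.
```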
> We see some really slow IO read times at the OS level.
What times are you seeing for a physical page read when it's slow and when it's quick?
What size are your pages?
Are you running process or thread model?
Are you using SAN or local disks? (If SAN, does it have dynamic tiering?)