Recordsize
--------------
We experimented and found that recordsize affects several aspects of ZFS performance under a database workload, so check your recordsize settings. 16 KB on the DATA filesystems is a reasonable compromise for us with a 2 KB ASE page size, since it lines up with ASE's 16 KB large I/O (one eight-page extent).
For transaction logs we use a larger recordsize, 64 KB. That is still half of the ZFS default (128 KB), but it better suits the sequential I/O pattern of transaction logs while saving some controller bandwidth on the frequent writes logs receive.
It is hard to be definitive without testing, but settling for the out-of-the-box default of 128 KB for everything is just a waste. It needs to be smaller for DBs, imho.
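As a minimal sketch, assuming hypothetical pool and filesystem names (datapool/data for data device files, logpool/log for transaction logs), the recordsize is set per filesystem:

    # 16 KB records to match ASE's 2 KB pages / 16 KB large I/O
    zfs set recordsize=16K datapool/data

    # 64 KB records for the mostly sequential transaction log writes
    zfs set recordsize=64K logpool/log

    # confirm the settings took
    zfs get recordsize datapool/data logpool/log

Note that recordsize only applies to blocks written after the change, so set it before creating the ASE device files on the filesystem.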
ZFS Pools
------------
The ideal situation is to have one pool for each filesystem. If that is not possible, the “*DATA” filesystems can be created from one main ZFS pool and the “*LOG” filesystems from another, log-only ZFS pool. Again, though, we like ZFS pools to be separate, and even backed by different RAID groups (separate spindles) on the SAN. At the very least, we like to have our DATA and LOGS on separate RAID groups on the SAN.
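A sketch of that layout, with hypothetical LUN names (c1… from one RAID group, c2… from another); redundancy here comes from the SAN RAID groups rather than from ZFS:

    # data and log pools built from LUNs on separate RAID groups
    zpool create datapool c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool create logpool  c2t0d0 c2t1d0

    # one filesystem per purpose, carved from its own pool
    zfs create datapool/data
    zfs create logpool/log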
Device I/O Queue Size Settings
---------------------------------
By default, the per-device queue depth (outstanding I/Os) in ZFS is 10. Rather than tune this setting, we ask for roughly as many LUNs in each ZFS pool as there are backing spindles in the SAN RAID group. So if the backing RAID group has 8 spindles, it is nice to have 6 to 8 LUNs carved from it and assigned to the pool, rather than one or two large LUNs comprising the same space. Since the queue depth is per device, eight LUNs let the pool keep 80 I/Os in flight where a single large LUN would allow only 10, giving ZFS much better queueing capacity for outstanding I/Os to the pool.
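For reference, the queue depth is the zfs_vdev_max_pending tunable on Solaris-era ZFS. A sketch of pinning it in /etc/system, though as noted we preferred adding LUNs over tuning it:

    * /etc/system -- ZFS per-device queue depth (reboot required)
    * the default of 10 is fine when the pool spans enough LUNs
    set zfs:zfs_vdev_max_pending = 10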
RAID-Z (not)
---------------
RAID-Z is not a good fit for random reads: every read touches a full stripe, so a RAID-Z vdev delivers roughly the random-read IOPS of a single disk. We prefer a straight n-mirrored configuration over RAID-Z for ZFS pools, since each side of a mirror can serve reads independently. Anything other than RAID-Z if possible.
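A sketch of an n-mirrored pool with hypothetical device names, striping four two-way mirrors instead of one RAID-Z vdev:

    # four two-way mirrors striped together; reads spread across all eight disks
    zpool create datapool \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 \
        mirror c1t6d0 c1t7d0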
Filesystem DBMS Isolation
-------------------------------
Again, we’d like nothing but DBMS device files to exist on these filesystems, with application files stored on separate filesystems.
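A sketch with hypothetical filesystem names and mountpoints, keeping the two apart even when they must share a pool:

    # this filesystem holds ASE device files and nothing else
    zfs create -o mountpoint=/ase/devices datapool/asedev

    # application files get their own filesystem (or, better, their own pool)
    zfs create -o mountpoint=/apps datapool/apps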
ARC Cache Limits
---------------------
We recommend that the ARC be capped so memory remains available for applications, and even for ASE, to grow into. We think it is wise to impose a reasonable limit on ZFS so that other processes can obtain sufficient memory when needed, avoiding the performance problems and possible outage scenarios we had experienced.
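On Solaris this cap is the zfs_arc_max tunable, set in bytes in /etc/system. A sketch capping the ARC at 4 GB; the right value depends on how much memory ASE and the applications on the host actually need:

    * /etc/system -- cap the ZFS ARC at 4 GB (reboot required)
    set zfs:zfs_arc_max = 0x100000000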
Testing is, of course, the standard recommendation, but these are some food-for-thought suggestions that we implemented on our own systems running ZFS.