> Why do you want to get 100% cache hit ratio ?
Because SAN is about 20 - 40 times slower than cache.
Consider:
If I have 30,000 reports to run each day and the data is in cache, it takes 2s to run each report.
If the data isn't in cache then it takes 20s - 40s to run each report.
So in a simplistic way, if all the data is in cache that equates to 60,000s, i.e. about 16.7 hours (30,000 * 2s).
If only 95% of the data is in cache, it takes about 28 hours to run the reports: (28,500 * 2s) + (1,500 * ~30s) = ~102,000s.
Quite simply, the more data in cache the more reports I can run.
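To make the sensitivity explicit, here is the same arithmetic parameterised by hit ratio (a quick sketch using the 2s / ~30s figures above; the 0.95 is just an example value):

```
-- elapsed hours for 30,000 reports at a given cache hit ratio,
-- using the 2s (cached) and ~30s (SAN read) figures above
declare @reports int, @hit numeric(5,4), @cached_s int, @san_s int
select  @reports = 30000, @hit = 0.95, @cached_s = 2, @san_s = 30

select  hit_ratio   = @hit,
        total_hours = ((@reports * @hit * @cached_s)
                     + (@reports * (1 - @hit) * @san_s)) / 3600.0
go
```

At a 100% hit ratio that comes out at ~16.7 hours; at 95% it is already ~28.3 hours - which is the whole point.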
> You will always have situation where data isn't in the cache and needs to be read from disks
Obviously, as I have several TB of data and can't afford several TB of RAM, I don't expect to avoid disk reads altogether.
The other thing to think about is the variable performance of the SAN. As a shared resource between multiple teams it can only achieve a certain total throughput, so as the load from the other machines connected to the SAN increases, the throughput available to a single machine goes down. A page read from the SAN can vary from 2ms to 10ms, and sometimes 20ms.
> From my experience a page size (for you 16K) and an extent size pool (128K in your case) is enough.
Enough for what? I don't understand your comment.
> Separate the log into its own cache
Agreed.
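For reference, it was along these lines - the cache name, size and database name are placeholders rather than my real ones:

```
-- create a small log-only named cache (name and size are illustrative)
sp_cacheconfig 'tx_log_cache', '256M', logonly
go
-- bind the transaction log of the reporting database to it
-- (the database has to be in single-user mode to bind syslogs)
sp_bindcache 'tx_log_cache', 'report_db', 'syslogs'
go
-- check the log I/O size so a matching buffer pool can be added if wanted
use report_db
go
sp_logiosize
go
```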
> and check if you have 'hot tables' - for these tables a new named cache makes sense.
I've done this - we're adding more RAM - now I need to know if I should add it to the 16K pool or the 128K pool.
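What I'm weighing up is something like the following once the RAM is in - cache name and sizes are illustrative, and the split between the 16K and 128K pools is exactly the bit I'm trying to decide:

```
-- grow the named cache itself (assumes 'max memory' has headroom)
sp_cacheconfig 'report_cache', '8G'
go
-- option A: move some of the new memory into the 128K (large I/O) pool;
-- this sets the 128K pool to 2G, taking the memory from the 16K page-size pool
sp_poolconfig 'report_cache', '2G', '128K'
go
-- option B: do nothing further - new cache memory lands in the 16K pool by default.
-- Either way, check the resulting pool sizes:
sp_helpcache 'report_cache'
go
```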