Without going into a lot of detail about your processing requirements, there are several areas you could look at for improving performance ...
For monitoring purposes you can look at the MDA tables monDataCache, monCachePool, monCachedObject and monOpenObjectActivity. I'd also suggest running sp_sysmon at continuous 15-minute intervals and then reviewing the cache and disk I/O subsections. [Sorry, I'm not aware of an easy way to analyze objects down to the pool level.]
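A rough sketch of what that monitoring could look like (column names vary a bit between ASE versions, so treat these as examples rather than exact queries; the mon* tables also require 'enable monitoring' and related config options to be on):

    -- snapshot the cache-level MDA tables
    select * from master..monDataCache
    select * from master..monCachePool    order by CacheName, IOBufferSize
    select * from master..monCachedObject order by CachedKB desc

    -- per-object logical vs physical reads; handy for spotting the big report tables
    select DBName, ObjectName, IndexID, LogicalReads, PhysicalReads, APFReads
      from master..monOpenObjectActivity
     order by PhysicalReads desc

    -- run sp_sysmon over a 15-minute window, then review the
    -- "Data Cache Management" and "Disk I/O Management" sections
    exec sp_sysmon '00:15:00'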
Keep in mind that by the time you slice-n-dice a pool based on # of partitions, APF %, wash size, overhead ... you may be left with a cachelet that's too small for the optimizer to consider using. (See Jeff Tallman's presentation for a good breakdown of cache configuration issues.) If the optimizer is choosing to ignore your large-I/O pools then you could be a) obtaining poor query plans and/or b) flooding your disk subsystem with a lot of smallish I/Os (as opposed to fewer largish I/Os).
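One quick way to see whether the optimizer is actually picking up a large-I/O pool for a given report query is showplan (table name here is made up, and the exact output wording varies by ASE version):

    set showplan on
    go
    -- substitute one of your real report queries here
    select count(*) from report_fact_table
    go
    -- in the plan output, look for lines such as
    --   "Using I/O Size 128 Kbytes for data pages."
    -- vs. the page-size pool; if the large pool never shows up,
    -- the optimizer is bypassing it
    set showplan off
    go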
If your 30K reports need to process 100GB of data/indexes and you only have 50GB of cache then you're obviously going to need to read some data from disk. ("Duh, Mark!" ?) The question then becomes one of how to limit the number of times a chunk of data/index has to be read from disk; e.g., if 5K of your reports process the same 10GB of data over and over then it would make sense to run those 5K reports together so as to re-use as many in-cache pages as possible ... as opposed to spreading those 5K reports out over several hours and risking that later reports have to re-read pages from disk. Obviously (?) this type of report scheduling is going to require detailed knowledge of the various reports and the chunks of data required by each one.
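As a rough sketch of sizing that working set (object names are made up), you could add up the data+index footprints of the tables a report group touches and compare the total against your cache/pool sizes:

    -- report group 'A' supposedly hits these three tables; sum their footprints
    -- (the second argument of 1 makes sp_spaceused report index space as well)
    exec sp_spaceused 'sales_fact', 1
    exec sp_spaceused 'sales_dim_customer', 1
    exec sp_spaceused 'sales_dim_product', 1
    -- if the combined data+index size fits comfortably in cache, running those
    -- reports back-to-back should mostly re-use in-cache pages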
I often come across clients who are running with default configurations for 'disk i/o structures' (2048) and 'max async i/os per engine/server' (2048; depends on ASE/OS version). On today's hardware these settings are typically too small and thus introduce an artificial limitation on the number of I/Os the dataserver can have outstanding at any one time. I typically recommend bumping these numbers up to 10K-20K (or more), with the objective being to push any I/O bottlenecks out to the OS. (Yeah, this will take some trial and error to see what works well in your environment.)
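For example (the actual values are something you'd arrive at by trial and error, and depending on ASE version some of these parameters may be static, i.e. require a restart; also make sure the OS/kernel async-I/O limits are raised to match):

    -- current settings
    exec sp_configure 'disk i/o structures'
    exec sp_configure 'max async i/os per engine'
    exec sp_configure 'max async i/os per server'

    -- bump them up so ASE isn't the artificial ceiling on outstanding I/Os
    exec sp_configure 'disk i/o structures', 16384
    exec sp_configure 'max async i/os per engine', 16384
    exec sp_configure 'max async i/os per server', 16384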
If day-to-day operations are of an OLTP nature, while your after-hours/weekend reports are of a DSS nature, you may need to consider dynamically reconfiguring your caches before/after running your reports in order to make the best use of your memory.
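A sketch of what that before/after reconfiguration might look like (cache names and sizes are placeholders, not a recommendation; and note that, depending on ASE version, shrinking a cache may not take effect until a restart):

    -- before the report window: shrink a cache that mainly serves OLTP
    -- and grow the cache the report tables are bound to
    exec sp_cacheconfig 'oltp_cache',         '8G'
    exec sp_cacheconfig 'default data cache', '40G'

    -- after the reports finish: put things back for Monday morning
    exec sp_cacheconfig 'oltp_cache',         '24G'
    exec sp_cacheconfig 'default data cache', '24G'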
------------------------
For a 1TB dataserver with 30K reports I'm assuming you have a decent number of engines (e.g., 16, 20, 32), and if you've partitioned your (default) data cache to support this largish number of engines then there's a good chance that the 8GB/128K pool may be too small ... especially if your reports are processing large volumes of data.
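To make the math concrete (all numbers here are assumptions for illustration): with, say, cache_partition=32 on the default data cache, every pool is split into 32 cachelets, so an 8GB 128K pool leaves only ~256MB of large-I/O buffers per cachelet, which a handful of concurrent large scans can churn through very quickly:

    -- show current cache size, partition count and pool breakdown
    exec sp_cacheconfig 'default data cache'
    exec sp_helpcache   'default data cache'

    -- 8GB pool / 32 cachelets  = 256MB per cachelet
    -- 256MB / 128K buffers     = ~2048 large-I/O buffers per cachelet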
Again, without a lot of details (about the 30K reports, about engine counts, about cache configurations), I'm probably going to be leaning towards adding more memory to the 128K pool ... possibly tweaking the APF percentage ... and then using sp_sysmon to evaluate the effectiveness of this configuration. And yes, unfortunately, this is going to require some trial and error ... because your configurations will depend heavily on the details of your processing requirements.
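As a hedged example of that kind of iteration (sizes and percentages are placeholders; double-check the exact sp_poolconfig syntax on your ASE version):

    -- give the 128K pool more room (memory comes out of the page-size pool by default)
    exec sp_poolconfig 'default data cache', '16384M', '128K'

    -- adjust the asynchronous prefetch limit for that pool (default is 10%)
    exec sp_poolconfig 'default data cache', '128K', 'local async prefetch limit=30'

    -- ... run a representative batch of reports, then re-check cache hit rates,
    -- large I/O usage and APF effectiveness ...
    exec sp_sysmon '00:15:00'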
Also keep in mind that changes in your cache configurations can lead to changes (good or bad) in your query plans, so it may also become necessary to tune some queries if you see degraded performance after making configuration changes to your cache(s).