Re: How much Solaris memory allocated by ASE 15.0.3?

John,

 

It wasn't clear from your initial post how you had allocated memory to the dataserver.  'course, I made some assumptions without stating them ... that's my bad.

 

My working *assumption* was that you had started the dataserver with a smallish amount of memory actually allocated for use and had increased the allocations over time (eg, max=15GB, initial startup with 3GB allocated to caches/etc, then several bumps to cache sizes and other allocations).  In this scenario the dataserver would have started with an initial shared mem fragment of 3GB, then requested an additional shared mem fragment each time you allocated more memory (eg, increased the default data cache size, created a new data cache, enabled MDA pipes, set up or increased statement and/or proc cache, etc).
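
To make that concrete, here's a sketch of what that kind of on-the-fly sizing looks like from isql ... the cache names and sizes are invented to match the scenario above; the point is that each change made after boot causes the dataserver to request another shared mem segment from the OS:

    -- ceiling only; this by itself doesn't allocate anything
    sp_configure 'max memory', 0, '15G'
    go
    -- each of these, run on a booted dataserver, grabs a new shm segment
    sp_cacheconfig 'default data cache', '4G'
    go
    sp_cacheconfig 'tempdb_cache', '1G'     -- hypothetical named cache
    go
    sp_configure 'procedure cache size', 0, '500M'
    go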

 

This is a common issue with new dataservers where the DBA is sizing the system on-the-fly ... and there's nothing wrong with this as long as you realize you could be creating a largish number of shared memory segments ... which can lead to memory issues if a) you max out the number of shared mem segments configured on the machine or b) shared mem becomes too fragmented to allocate new segments of the desired size.
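
If you want to see where those OS-side ceilings sit, on Solaris 10 and later the shm limits are per-project resource controls (the project name 'user.sybase' below is just an example ... substitute whatever project your engine runs under); on older releases they're kernel tunables in /etc/system:

    # Solaris 10+
    prctl -n project.max-shm-memory -i project user.sybase
    prctl -n project.max-shm-ids    -i project user.sybase

    # pre-Solaris 10, in /etc/system, eg:
    #   set shmsys:shminfo_shmmax = <max size of one segment, in bytes>
    #   set shmsys:shminfo_shmmni = <max number of segments system-wide>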

 

Setting 'allocate max shared memory'=1 means that at startup the dataserver is going to request a single segment of 15GB (in your case), so there's little chance of fragmenting shared mem or maxing out the number of segments.  [NOTE: Depending on what else is running on the machine you could still end up with multiple segments if there's not enough room for a single block of contiguous memory.]  Now, upon bouncing the dataserver you should see all those segments pulled into a single segment, as the dataserver knows up front (based on the allocations in the cfg file) how much memory to request from the OS.
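
For reference, the setting itself is just:

    sp_configure 'allocate max shared memory', 1
    go
    -- per the above, the consolidation into one big segment happens at the
    -- next boot, when ASE knows up front to ask the OS for all of 'max memory'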

 

In your latest post you mentioned a 13GB fragment and a few smaller ones, so it sounds like the last time the dataserver was bounced you already had 13GB of memory pre-allocated (per the entries in the dataserver's cfg file), and then you made a couple config changes that increased total memory usage (thus requiring a couple new, smallish, shared mem segments to be allocated).
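
Purely for illustration (the IDs, keys and sizes below are made up), that pattern would show up in ipcs something like this:

    $ ipcs -mb        # -b adds the SEGSZ column on Solaris
    T      ID     KEY        MODE       OWNER   GROUP        SEGSZ
    Shared Memory:
    m      12  0x7a001234 --rw-------  sybase  sybase  13958643712   <- ~13GB, from the last boot
    m      14  0x7a001235 --rw-------  sybase  sybase    536870912   <- post-boot config change
    m      15  0x7a001236 --rw-------  sybase  sybase    268435456   <- another config change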

 

And yes, 'allocate max shared memory'=1 would force the dataserver to grab all 15GB at startup ... and while that's more than you're actually using, it would reduce (if not eliminate) the possibility of fragmenting shared memory.  Again, this setting would be of benefit if your next bootup shows total memory being quite a bit less than max memory and you plan on making a lot of memory-related config changes that will require allocating additional shared memory.
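
A quick sanity check for whether you're in that situation (these are the standard read-only counters; values are reported in pages unless you've been using the char/unit format):

    sp_configure 'max memory'
    go
    sp_configure 'total logical memory'
    go
    -- if 'total logical memory' (what ASE currently needs) is well below
    -- 'max memory' and you expect more memory-related reconfigs, the
    -- 'allocate max shared memory'=1 route avoids the extra segments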

 

Net result is that my initial assumption (bootup with little memory pre-allocated, lots of new memory allocations after booting => lots of shared mem segments) was invalid.  Having said that, you could still have an issue on the machine in terms of shared mem fragmentation ... but that would require knowing a) the max number of shared mem segments defined and b) the total number of shared mem segments in use (by ASE and whatever else is running on the machine).
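
Both numbers are easy to get: a) is the resource control / kernel tunable shown earlier, and b) is a one-liner (assuming the engine runs as user 'sybase'):

    # total shm segments in use on the box
    ipcs -m | grep -c '^m'

    # just the segments owned by the ASE engine user
    ipcs -m | grep -c sybase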

 

As for how/why the various OS utilities show 'different' memory usage numbers than you expect ... *shrug* ... depends on the utility.  For example, top is working with the high-water mark (ie, max memory) passed along by the dataserver as opposed to the total memory used by ASE.  ASE actually uses shared and 'normal' memory, so some utilities (eg, ipcs) will show the shared memory setting while others will show the 'normal' memory (on one *nix platform I recall the local top/nmon showing ~300M of memory usage for the dataserver's 'normal' memory and ipcs showing 8GB which matched ASE's total memory).  Net result is that which utility to use depends on the OS and what exactly you're trying to measure.
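
If you want to reconcile the numbers yourself on Solaris, something along these lines (substitute your own dataserver pid for the placeholder):

    # shared memory granted to ASE, per segment
    ipcs -mb | grep sybase

    # per-process view; SIZE/RSS may or may not count the shm segments
    # depending on the tool, which is where the 'different' numbers come from
    prstat -p <dataserver_pid> 1 1

    # per-mapping breakdown (separates shm from the 'normal' heap/stack memory)
    pmap -x <dataserver_pid>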

 

As Javier's mentioned, it's not uncommon for SAs to use a standard ballpark figure (eg, 20%) when allocating memory to the OS ... without regard for what's actually needed by the OS (eg, if the OS can run just fine with 1GB on a machine with 8GB of RAM, why does the same OS need 20GB on a machine with 100GB of RAM?).  So if the OS (or Sybase, or some other process) is over-subscribed on memory then you'll obviously run into memory issues.  Net result is that you'll need to sit down with the SA, and the owners of any other processes running on the machine, to figure out the size of everyone's piece of the memory pie.

