Channel: SCN: Message List - SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications

Re: Performance cluster on ASE


Simple example using a 1-row next_key table to generate a ticket number ...

 

NOTE: Keep in mind that for a process to modify a piece of data, said piece of data must reside in a (local) data cache.

 

normal/1-node processing:

 

- process1 needs a new ticket so it attempts to update the next_key table to obtain a new ticket number

- the lock manager grants an exclusive lock on next_key to process1

- process2 also needs a new ticket so it attempts to update the next_key table to obtain a new ticket number, but it is blocked by process1's lock on next_key

- once process1 completes its transaction the lock is released

- the lock manager now grants an exclusive lock on next_key to process2 so that it can perform its update on next_key

- as you've pointed out ... process2's wait is normal ... that's life
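The 1-node steps above can be sketched in a few lines of Python: a `threading.Lock` stands in for the ASE lock manager's exclusive lock, and a plain integer stands in for the 1-row next_key table. (All names here — `next_key`, `get_ticket` — are illustrative, not ASE objects.)

```python
import threading

next_key = 0                       # the "1-row next_key table"
next_key_lock = threading.Lock()   # the lock manager's exclusive lock
tickets = []

def get_ticket(process_name):
    global next_key
    # attempt to "update next_key"; blocks if another process holds the lock
    with next_key_lock:
        next_key += 1              # the update itself
        tickets.append((process_name, next_key))
    # lock released here, unblocking the next waiter

t1 = threading.Thread(target=get_ticket, args=("process1",))
t2 = threading.Thread(target=get_ticket, args=("process2",))
t1.start(); t2.start()
t1.join(); t2.join()

# each process got a distinct ticket; one simply waited for the other
print(sorted(t for _, t in tickets))   # [1, 2]
```

Whichever thread loses the race waits on the lock, exactly like process2 above — normal, local, and cheap.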

 

2-node processing:

 

- process1 (on node1) needs a new ticket so it attempts to update the next_key table to obtain a new ticket number

- lock_manager1 (on node1) contacts lock_manager2 (on node2) to make sure it can place an exclusive lock on next_key for process1; lock_manager2 (on node2) confirms the request and both lock managers place an exclusive lock on their local copies of next_key

- process2 (on node2) attempts to perform an update on its local copy of next_key, but finds that its request is blocked by the lock on the local copy of next_key

- process1 completes its transaction against the local copy of next_key

- lock_manager1 notifies lock_manager2 that the lock can be released; both lock managers release the lock on their local copy of next_key

- before process2 can update the local copy of next_key, the newly updated value over on node1 must be copied over to node2's cache

- once the local copy of next_key has been updated, lock_manager2 contacts lock_manager1 to make sure it can place an exclusive lock on next_key for process2; lock_manager1 confirms the request and both lock managers place an exclusive lock on their local copies of next_key

- process2 completes its transaction against the local copy of next_key

- lock_manager2 notifies lock_manager1 that the lock can be released; both lock managers release the lock on their local copy of next_key

- the new next_key value is copied to node1 to make sure its local copy of next_key is kept in sync

- again, the fact that process2 has to wait is part of life ... but in this scenario the wait is going to be a bit longer due to all the back-n-forth communication between lock managers, plus the shipping of updated records (to keep caches in sync)
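A rough back-of-envelope sketch of why the 2-node wait is longer. All the latency numbers below are made-up illustrative values, not ASE measurements; the point is only that the inter-node round-trips and the cache ship dominate the wait.

```python
LOCAL_LOCK_US = 10      # grant/release a lock locally (assumed)
INTERNODE_RTT_US = 500  # one handshake round-trip between lock managers (assumed)
CACHE_SHIP_US = 800     # ship the updated next_key page to the other node (assumed)
UPDATE_US = 50          # the update itself (assumed)

# 1-node: process2 only waits for process1's local lock plus the update
one_node_wait = LOCAL_LOCK_US + UPDATE_US

# 2-node: process2 additionally waits for the acquire + release handshakes
# between the lock managers, and for the updated page to reach node2's cache
two_node_wait = (2 * INTERNODE_RTT_US   # acquire + release handshakes
                 + CACHE_SHIP_US        # page shipped node1 -> node2
                 + LOCAL_LOCK_US + UPDATE_US)

print(one_node_wait, two_node_wait)     # 60 1860 (microseconds)
```

Even with these toy numbers, the same logical wait is roughly 30x longer once two lock managers and two caches are involved.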

 

n-node processing:

 

- expand all of the 2-node steps to consider the handshakes between 'n' lock managers, plus the need to keep 'n' data caches in sync

- the length of time spent waiting on locks starts to climb rather quickly, and those waits for a lock can become 'a lifetime'
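Extending the same made-up numbers to 'n' nodes: each acquire/release now involves a handshake with the other n-1 lock managers, and the updated page has to reach n-1 peer caches. A simple sketch (again, illustrative latencies, not ASE figures):

```python
def lock_wait_us(n_nodes, rtt_us=500, ship_us=800, local_us=60):
    """Toy model of per-ticket wait on next_key across n cluster nodes."""
    peers = n_nodes - 1
    # acquire + release handshakes with every peer lock manager,
    # plus shipping the updated page to every peer's cache,
    # plus the local lock + update cost
    return 2 * peers * rtt_us + peers * ship_us + local_us

for n in (1, 2, 4, 8):
    print(n, lock_wait_us(n))
# 1 60
# 2 1860
# 4 5460
# 8 12660
```

The growth is linear in node count in this sketch, but since every ticket request pays it, a hot 1-row table turns it into a cluster-wide serialization point.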

 

If you plan on using ASE/CE then you'll need to be able to split your processing and data in such a way that you minimize cross-node communications (for lock management, and for keeping primary data in sync between nodes/caches); otherwise you'll likely see degraded performance due to the overhead of managing multi-node locks and data cache syncs.
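One common way to split the work for a key generator specifically: give each node its own ticket sequence (node i hands out i, i+n, i+2n, ...) so no node ever needs to touch another node's copy of next_key. This is just the standard striped-sequence idea sketched in Python, not an ASE/CE feature:

```python
def make_node_sequence(node_id, n_nodes):
    """Per-node ticket generator: node_id, node_id + n, node_id + 2n, ..."""
    assert 0 <= node_id < n_nodes
    next_val = node_id
    def next_ticket():
        nonlocal next_val
        ticket = next_val
        next_val += n_nodes   # stride past every other node's values
        return ticket
    return next_ticket

# two nodes, each drawing from its own disjoint sequence
node0 = make_node_sequence(0, 2)
node1 = make_node_sequence(1, 2)
print([node0() for _ in range(3)])  # [0, 2, 4]
print([node1() for _ in range(3)])  # [1, 3, 5]
```

Tickets are still unique cluster-wide, but they're no longer dense or strictly ordered across nodes; whether that trade-off is acceptable depends on the application.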

 

NOTE: Yeah, I'm taking some liberties with some of the details on the back-n-forth handshaking to manage locks, and the shipping of data between caches; just trying to jot down the general concepts before heading out the door to work ...

