> - the dataserver has to make a remote call (small performance hit); probably not something you'll want to do in a high-volume logging situation
True, but the performance hit of sending a record over a UDP message is less than a write to the SAN, and at least the UDP message isn't likely to contend for the SAN or cache etc. Besides, I don't really want the main db server to hold millions of lines of messages.
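For anyone who hasn't seen it, the call itself is just a select on the built-in. A rough example only (the listener address and port below are made up, and if memory serves you have to enable the "allow sendmsg" option and a source port first):

    -- one-off server configuration (sa only); option names from memory
    exec sp_configure "allow sendmsg", 1
    exec sp_configure "syb_sendmsg port number", 3500
    go

    -- fire-and-forget UDP datagram to the listener box
    select syb_sendmsg("10.1.1.50", 3456, "orders_load: batch 42 done, 1042 rows")
    go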
> - what happens if the 'listener' is down (syb_sendmsg either errors out or hangs?)
No, it's a UDP message. Connectionless and "unreliable", which is fine for logging and messages.
In practice I've used these messages a lot, and so long as you're only a switch away from the server very little is lost.
> - does the 'listener' serialize its incoming messages (eg, what happens if multiple dataserver processes are sending a (relatively) large volume of messages in a short period of time => lengthy delays as messages queue up for the 'listener'?
It's just a UDP message, but in general you don't send long messages over this. Keep them under a network packet size and you're ok.
Delivery isn't guaranteed, but that's fine for our requirements, and it's also why we want to make it configurable so that important messages are stored in tables and minor messages aren't.
Agree with all your other comments except "Writing an OS-level process (eg, shell script) to process the log files should be trivial." We have a grid of 20 machines each running 16 processes; add to that 4 application servers and 2 web servers, spread over 2 countries. Trivial it is NOT!!!
And yes, it would be easier if we were developing the application from scratch, but most applications we see have been around for years and years, so changing thousands of stored procs isn't a small task.
Even if we were only using print, it would be better to create a wrapper around it to standardize our calls, so they all match the same format and are all done in the same way.
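Roughly what I have in mind for the wrapper is something like the sketch below; the proc name, table, severity cut-off and listener address are just placeholders to show the idea:

    create procedure sp_app_log
        @severity int,            -- e.g. 10 = info, 20 = warning, 30 = error
        @msg      varchar(200)
    as
    begin
        declare @line varchar(255)

        -- one standard format: time | server | spid | severity | text
        select @line = convert(varchar(23), getdate(), 109) + "|"
                     + @@servername + "|"
                     + convert(varchar(5), @@spid) + "|"
                     + convert(varchar(5), @severity) + "|"
                     + @msg

        -- important messages also get kept in a table on the db server
        if @severity >= 30
            insert app_log (log_time, severity, msg)
            values (getdate(), @severity, @msg)

        -- everything goes to the UDP listener; keep @line under one packet
        select syb_sendmsg("10.1.1.50", 3456, @line)
    end
    go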