In principle it is usable (or maybe was?), agreed :-) I too have used it many times in the past.
But until someone shows me some evidence on 15.7 SP13x of it not being exponentially slower than syslocks when there is a large number of locks, it will continue to be unusable in our environments.
As far as I can tell it is not repeatedly going back through the CIS functions; it remains in the hashtable scan function for the entire duration.
Adding filtering by SPID/KPID or anything else makes zero difference to the performance of the query (or to which routine it sits in). The count(*) version ends up being the easiest way to demonstrate the problem; whether it's filtered, used as the inner side of a join, or accessed any other way, it makes no odds.
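To illustrate the join case, something like the query below behaves the same way (monProcess as the outer table is only an example here; the exact shape of the join doesn't matter):

-- monLocks on the inner side of a join to monProcess: spends its time in the
-- same hashtable scan routine as the plain count(*) form
select p.SPID, count(*)
from master..monProcess p, master..monLocks l
where l.SPID = p.SPID and l.KPID = p.KPID
group by p.SPID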
It gets significantly worse as the number of locks grows, and changing the number of hash buckets (the lock hashtable size) also appears to make no difference.
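(For reference, resizing the lock hashtable is just the usual sp_configure change followed by a restart, since as far as I recall it's a static parameter; 16384 below is only an example value.)

sp_configure "lock hashtable size", 16384
go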
(This is with SPID/KPID filtering in the query)
select count(*) from master..monLocks where SPID=24 and KPID=7143479
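(The cpu/elapsed figures below are the standard set statistics time output, and the "Execution time" line is the client-side elapsed time. For reference, the syslocks baseline being compared against is roughly the following; syslocks has no KPID column as far as I recall, so only the SPID filter applies there.)

set statistics time on
go
-- baseline: the equivalent count against syslocks, which doesn't show this blow-up
select count(*) from master..syslocks where spid = 24
go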
50000 locks:
Execution Time 3.
Adaptive Server cpu time: 304 ms. Adaptive Server elapsed time: 304 ms.
Execution time: 0.31 seconds
100000 locks:
Execution Time 15.
Adaptive Server cpu time: 1484 ms. Adaptive Server elapsed time: 1484 ms.
Execution time: 1.489 seconds
200000 locks:
Execution Time 113.
Adaptive Server cpu time: 11321 ms. Adaptive Server elapsed time: 11323 ms.
Execution time: 11.346 seconds
500000 locks:
Execution Time 811.
Adaptive Server cpu time: 81097 ms. Adaptive Server elapsed time: 81109 ms.
Execution time: 81.114 seconds
1000000 locks (as per Mike's findings also):
Execution Time 3547.
Adaptive Server cpu time: 354714 ms. Adaptive Server elapsed time: 354734 ms.
Execution time: 354.744 seconds
So each doubling of the lock count takes 4-7 times longer, so I'm sure you can see where the 90 minutes came from for the 4 million lock test.
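Rough arithmetic from the 1 million figure, taking the lower end of that per-doubling factor:

1,000,000 locks ~ 355 seconds (measured above)
2,000,000 locks ~ 355 x 4 ~ 1,400 seconds
4,000,000 locks ~ 1,400 x 4 ~ 5,700 seconds, i.e. around 95 minutes

which is in the same ballpark as the 90 minutes seen on the 4 million lock test.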