The time that test took was determined pretty much solely by the speed of the network and of the disk being written to; the results start returning immediately.
If you run an equivalent select from monLocks for a million locks, however, it takes 14 minutes: 5 minutes doing what appears to be nothing (which is our problem here, and it becomes hours when there are many millions of locks), followed by 9 minutes sending the results and writing them to the file.
Anyway, my test was really to highlight that the problem lies in monLocks itself rather than in the native RPC.
To verify this you can create a second proxy table using the pre-15 method of mapping to the RPC: just create a loopback server that points back at @@servername. This shows that the CIS side of things is OK, at least when using the classic definition.
create existing table monLocks2
..blah,blah
external procedure at 'loopback...$monLocks'
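For completeness, a rough sketch of the loopback server setup this relies on (the network name and login below are placeholders, use whatever matches your interfaces file entry and @@servername):

-- remote server entry pointing back at the local ASE instance
exec sp_addserver loopback, null, MY_ASE_NAME
-- optional: map a login for the loopback connection (placeholder credentials)
exec sp_addexternlogin loopback, sa, sa, 'sa_password'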
Table: monLocks2 scan count 1, logical reads: (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total writes for this command: 0
Execution Time 82.
Adaptive Server cpu time: 8174 ms. Adaptive Server elapsed time: 8279 ms.
Execution time: 8.283 seconds
Now it's back to 8 seconds, whereas with monLocks using the 'materialized at "$monLocks"' definition:
Execution Time 3547.
Adaptive Server cpu time: 354714 ms. Adaptive Server elapsed time: 354734 ms.
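If anyone wants to reproduce the comparison, the io/time figures quoted above are the kind of output you get from the ASE statistics options; a minimal isql harness (assuming monLocks2 was created as above) would be:

set statistics io on
set statistics time on
go
select * from monLocks2   -- pre-15 'external procedure at' definition over loopback
go
select * from monLocks    -- 15.x 'materialized at "$monLocks"' definition
go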