By itself the getutcdate() call is treated like a literal and thus only processed once.
To have getutcdate() (or any system/user-defined function) evaluated for each row, you have to combine the function call with a reference to a column from a table you're querying. A quick example using the integer value from the sysobjects.id column:
===============
select top 10
id,
getutcdate(),
dateadd(dd,id,getutcdate())
from sysobjects
order by id
===============
With a little bit of ingenuity it's possible to reference char/date/float/whatever datatypes, too.
NOTE: If you don't want the column value to affect your final results, then neutralize the column's value (multiply by zero, subtract it from itself, etc.), e.g.:
===============
select top 10
id,
getutcdate(),
dateadd(dd,id*0,getutcdate())
from sysobjects
order by id
===============
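The same zero-out trick works for non-integer columns; a hedged sketch using the char column sysobjects.name, converted to an integer via datalength() so it can be multiplied by zero:
===============
-- sketch only: datalength(name) yields an integer per row,
-- and *0 removes its effect on the result
select top 10
    name,
    getutcdate(),
    dateadd(ss, datalength(name)*0, getutcdate())
from sysobjects
order by name
===============
The per-row column reference is still enough to force getutcdate() to be re-evaluated for each row, even though the column contributes nothing to the value.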
As for what you're trying to do ... hmmm ...
Obviously you may find you get a different result set size depending on logical vs physical reads, time on cpu, speed of the cpu, etc.
I'm guessing the query may run longer than you think, since your SARG will have to be applied against all rows to make sure they meet the desired requirement; i.e., I don't see (at the moment) how you'll get the query to *stop* at a particular time. My guess is that the query will process the entire dataset, but only those rows processed before your cutoff date/time will be part of the final result set.
--------------
While there may be some convoluted way to auto-stop a query based on a SARG, I'm thinking it might make more sense to dynamically invoke a resource limit just before executing your query, then disable the resource limit once the query has completed.
See:
-- enable use of resource limits
sp_configure 'allow resource limit',1
-- limit a login's queries to XX seconds of elapsed time (or rowcount, io cost, tempdb space)
sp_add_resource_limit
You'll need to play with the various resource limit settings to determine what happens when the time limit is met (eg, issue a warning, terminate the batch, kill/rollback a txn, kill the login).
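A hedged sketch of what that might look like; the login name 'appuser' is a placeholder, and you should verify the parameter order (name, appname, rangename, limittype, limit_value, enforced, action) against your ASE version's docs:
===============
-- enable resource limits (static option on some versions; may need a restart)
sp_configure 'allow resource limit', 1
go
-- limit 'appuser' to 30 seconds of elapsed time, enforced during execution (2),
-- aborting the query batch (2) when the limit is hit; 'at all times' is the
-- built-in time range covering the whole day
sp_add_resource_limit 'appuser', null, 'at all times', 'elapsed_time', 30, 2, 2
go
-- once the query has completed, drop the limit again
sp_drop_resource_limit 'appuser', null
go
===============
Swap the action code (1=warn, 2=abort batch, 3=abort transaction, 4=kill session) to get the behavior you want when the limit is met.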