Sorry, but there's no easy/magic way of estimating the duration of a DML statement other than running some tests in your particular environment. There are just too many variables specific to each environment. For example ... consider a single UPDATE statement ...
- various query plans can mean the difference between a slow and fast run, eg, I've seen deferred updates that take 30x longer than the comparable direct update [addressing deferred updates is another lengthy post/discussion all by itself]
- keep in mind that all data changes must be performed in a data cache, so if a row to be updated is not in cache it must be read from disk; net result is that if you're reading a lot of data from disk then disk read speeds can adversely affect total query run times
- user-defined functions (UDFs), both SQL and Java, incur some extra overhead (cpu/time) for their invocation (especially true for SQL UDFs); net result is that if your query invokes any UDFs then the overhead of those calls can increase the run time of your query
- updating a record may also require updating some indexes, and those indexes must also be in cache; net result is that updating large numbers of indexes can cause your query to run longer, and of course if the indexes have to be read from disk then the disk read times also become a factor
- if the query references a proxy table (or any object on a remote server), especially as an inner table for a join, then the overhead for the CIS call (plus any potential network lag/delays) can add to the total run time of the query
- keep in mind that triggers fire synchronously with the DML statement that triggered them, eg, an UPDATE statement is not 'complete' until a) the actual UPDATE has completed and b) the trigger has completed; so total query run time will include the time required to complete any trigger processing [keep in mind that triggers may contain multiple DML statements, cause other triggers to fire, and reference remote servers, too]
- if your table has FK<-->PK referential integrity constraints, any RI checks triggered by your UPDATE statement will mean a longer run time for your query [yes, all RI checks are done against contents of data caches, so you may incur additional time delays if having to wait for RI data to be read from disk]
- all DML statements (except for IMDB) require data changes to be successfully written to disk before the DML statement is considered 'complete'; net result is that slow writes to your log device can extend the total time required for your query to complete
- and of course any contention for dataserver resources (eg, cpu, disk, locks, spinlocks) can also lead to poor performance of any queries trying to access said resources
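Given all of the above, the practical approach is to measure a representative statement in your environment. A quick isql test harness along these lines will show you the query plan (direct vs deferred update, index usage) plus the io/time numbers; table, column, and predicate names below are placeholders for your own:

```sql
-- run in a scratch/test database with a representative copy of the data
set showplan on        -- display the chosen query plan
set statistics io on   -- report logical vs physical reads (cache vs disk)
set statistics time on -- report parse/compile and execution times
go

begin tran             -- wrap the test so the data change can be discarded
update my_table
   set some_col = some_col + 1
 where filter_col = 42
rollback tran          -- io/time stats were already reported above
go

set showplan off
set statistics io off
set statistics time off
go
```

Keep in mind the rollback itself does work, and a rolled-back test won't show you the commit/log-flush timing of a real run, but it's usually close enough for a ballpark estimate.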
-------------
Your question about estimating the remaining duration of your upgrade is obviously (?) more complicated, as you now have to look at all of the remaining queries and commands ... each of which could have its own potential issues/delays.
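That said, if the MDA tables are enabled on your dataserver you can at least watch what the upgrade's session is doing right now, and how much work it has performed, rather than guessing; something like the following (the spid is a placeholder, and the column list is from memory so check monProcessStatement in your version's docs):

```sql
-- requires monitoring to be enabled, eg: sp_configure 'enable monitoring', 1
-- shows the statement currently executing for the given spid and its work so far
select SPID, StartTime, CpuTime, WaitTime, PhysicalReads, LogicalReads
  from master..monProcessStatement
 where SPID = 123   -- replace with the spid of the upgrade session
go
```

Polling this every so often gives you a rough sense of whether the session is burning cpu, waiting on disk, or stuck ... which is about as close to a progress bar as you're going to get.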