
Re: log increase too fast


When you update a table, every change is logged so that the server can roll back the transaction if an error happens. Therefore, your log needs to be big enough to hold all that information.

The log can't be truncated while the update is running, since truncation only removes completed transactions.
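
If you want to keep an eye on how full the log gets, one simple check (run it in the database being updated; the exact output format depends on your ASE version) is:

sp_spaceused syslogs

which reports how much space the transaction log (syslogs) is currently using.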

 

The server will also lock the pages or rows that are updated, so if it can't escalate to a table lock, it will most likely run out of locks when you update a large number of rows in a single statement.
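
To see whether the configured lock limit is anywhere near the number of rows you plan to touch, you can check it with:

sp_configure "number of locks"

and watch the locks actually taken during the update with sp_lock from another session.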

 

And during the update, the table will not be accessible, which can be a problem if users are working with the table at that time.

 

When I have to do an update that affects a lot of tuples, I always generate updates that update only one row each. The total process takes more time, but the log doesn't fill up (or at least it can be dumped or truncated along the way), there are no lock problems, and users are minimally affected.

 

(I use sqsh, which allows ';' as a statement separator and has I/O redirection, so I'd use something like:

print "use <database>;"; > scriptfile

select "update <table> set <column> = <value> where <pk> = " + <pk> + ";" from <table>; >> scriptfile

and then I execute the scriptfile to do the actual updates).
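
To make this concrete (all object names below are made up for illustration): for a table orders with a numeric key order_id and a status column, the generated script could be built like this. convert() is needed because a numeric key can't be concatenated to a string directly, and sqsh's column headers and row-count footers should be switched off so the file contains only SQL:

print "use salesdb;"; > scriptfile

select "update orders set status = 'CLOSED' where order_id = " + convert(varchar(12), order_id) + ";" from orders where status = 'OPEN'; >> scriptfile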

 

Other options are: 1) set a rowcount and run update <table> set <column> = <value> where <column> != <value> repeatedly until all rows are updated (a sketch follows below the examples; there should be an index on <column>, otherwise this results in repeated table scans and takes forever); or 2) split the update into multiple updates based on the value of some indexed column, for instance:

update ... where <column> between 1 and 100

update ... where <column> between 101 and 200

etc.
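
A minimal sketch of the rowcount variant (option 1), using the same placeholders and an arbitrary batch size of 1000:

set rowcount 1000
while 1 = 1
begin
    update <table> set <column> = <value> where <column> != <value>
    /* stop once no rows are left to change */
    if @@rowcount = 0 break
    /* each batch commits on its own, so the log can be dumped or truncated in between */
end
set rowcount 0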

 

When you do a large number of updates, the transaction log will fill up, so it will be necessary to empty the log to continue the operation.

This can be done with dump transaction statements (and these can be issued automatically by putting a threshold action on the log segment).
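
For reference, ASE runs a procedure named sp_thresholdaction when the free space on a segment falls below a threshold (a last-chance threshold on the log segment exists by default). A very basic version that just dumps the log could look like the sketch below; the dump file path is made up:

create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
    /* dump the transaction log so that suspended transactions can resume */
    dump transaction @dbname to "/backups/tranlogs/logdump.trn"
    print "LOG DUMP: transaction log of '%1!' dumped", @dbname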

If you don't need these transaction dumps, you can set:

sp_dboption <database>, 'trunc log on chkpt', 'true'

before the update, and

sp_dboption <database>, 'trunc log on chkpt', 'false'

after the update, and then do a database dump (while the option is on you cannot dump the transaction log, so a full database dump is needed afterwards to restart the dump sequence).
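
Put together, the surrounding steps would look roughly like this (database and file names are placeholders; sp_dboption is run from master, each statement in its own batch, with a checkpoint in the target database so the option takes effect):

use master
sp_dboption <database>, 'trunc log on chkpt', 'true'
use <database>
checkpoint

/* ... run the batched updates here ... */

use master
sp_dboption <database>, 'trunc log on chkpt', 'false'
use <database>
checkpoint
dump database <database> to "/backups/<database>.dmp"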

 

Luc.

