-- SYBASE VERSION --
Adaptive Server Enterprise/15.7.0/EBF 24718 SMP SP64 /P/Sun_svr4/OS 5.10/ase157x/3384/64-bit/FBO/Thu Jun 25 11:22:09 2015
-- END SYBASE VERSION --
Hi all,
I'd like your advice on how to limit the growth of my log segment so that my SQL statements can complete.
I have a database with a 40Gb log segment, and I need to change the structure of one table.
The table information is
1> sp_spaceused TABLE_A
2> go
name rowtotal reserved data index_size unused
------------ -------- ----------- ----------- ---------- ------
TABLE_A 30318974 31208544 KB 30318976 KB 888360 KB 632 KB
My initial sql statements are:
alter table TABLE_A add M_COMMENT_2BT char(100) default ' ' not null
go
alter table TABLE_A replace M_COMMENT_2BT default null
go
update TABLE_A set M_COMMENT_2BT = M_COMMENT where M_COMMENT is not null
go
Executing them with isql (with the -p and -e options) gives me a total execution time of 1959862 ms.
My goal is to reduce this execution time.
My remarks about the 1st SQL:
Because we're adding a 'not null' column, the ALTER acts like a SELECT INTO plus index recreation. This is confirmed by checking the values of syspartitions.datoampage and syspartitions.indoampage before and after the statement --> the values changed.
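For reference, a minimal version of the check I ran (before and after the ALTER; table name as above, one row per partition/index in syspartitions):

```sql
-- Record the data and index OAM page ids; if they change after the
-- ALTER, ASE physically rebuilt the table (SELECT INTO-style copy).
select name, indid, datoampage, indoampage
from syspartitions
where id = object_id('TABLE_A')
go
```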
Moreover activating 'set statistics io' before the statements returns the following.
2> alter table TABLE_A add M_COMMENT_2BT char(100) default ' ' not null
Table: #syb__altab scan count 0, logical reads: (regular=35139295 apf=0
total=35139295), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: TABLE_A scan count 1, logical reads: (regular=7801974 apf=0
total=7801974), physical reads: (regular=3517 apf=429892 total=433409), apf IOs
used=429849
Non-clustered index (index id = 2) is being rebuilt.
Non-clustered index (index id = 3) is being rebuilt.
Total writes for this command: 1342908
Execution Time (ms.): 1617204 Clock Time (ms.): 1617204
2> update TABLE_A set M_COMMENT_2BT = M_COMMENT where M_COMMENT is not null
Table: TABLE_A scan count 0, logical reads: (regular=0 apf=0 total=0),
physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: TABLE_A scan count 1, logical reads: (regular=7642524 apf=0
total=7642524), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total writes for this command: 89284
Execution Time (ms.): 342622 Clock Time (ms.): 342622
I checked with our development team whether there's a reason for the column to be 'not null', and they couldn't give me one.
Therefore I changed my original SQL to:
alter table TABLE_A add M_COMMENT_2BT char(100) null
go
update TABLE_A set M_COMMENT_2BT = M_COMMENT where M_COMMENT is not null
go
This time the execution time is 1165890 ms, which is a bit better (I expected more), but this test raised two remarks.
1. I faced a LOG SUSPEND during 'update TABLE_A set M_COMMENT_2BT = M_COMMENT where M_COMMENT is not null'.
I added 20Gb of log to my database, then every 30s I ran "select lct_admin('logsegment_freepages',db_id('${DBASE}'))", writing the results to a file, and analyzed the output. I reached a maximum usage of 61Gb of my log segment.
Question: how could the same UPDATE statement fill my log segment? My assumption is that in this case my ALTER statement was a fully logged operation and it filled the log segment, which couldn't be emptied fast enough to allow the UPDATE to execute successfully. Am I right? Is there a way to prevent this behavior?
2. Looking at the timing, the UPDATE statement was very slow:
2> update TABLE_A set M_COMMENT_2BT = M_COMMENT where M_COMMENT is not null
Table: TABLE_A scan count 0, logical reads: (regular=31 apf=0 total=31),
physical reads: (regular=6 apf=0 total=6), apf IOs used=0
Table: TABLE_A scan count 1, logical reads: (regular=7642489 apf=0
total=7642489), physical reads: (regular=197672 apf=43521 total=241193), apf IOs
used=43521
Total writes for this command: 682190
Execution Time (ms.): 1165870 Clock Time (ms.): 1165870
Why do I have more writes? Does that explain the difference?
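For what it's worth, the workaround I'm currently considering for remark 1 is to break the UPDATE into small transactions so the log can be truncated between batches. A sketch (assuming the column was added nullable, MYDB is a placeholder for the real database name, and truncating the log is acceptable, i.e. there is no log-dump chain to preserve):

```sql
-- Process 500000 rows per transaction; the 'M_COMMENT_2BT is null'
-- predicate makes the loop restartable and guarantees it terminates.
set rowcount 500000
go
declare @rows int
select @rows = 1
while @rows > 0
begin
    update TABLE_A
       set M_COMMENT_2BT = M_COMMENT
     where M_COMMENT is not null
       and M_COMMENT_2BT is null
    select @rows = @@rowcount
    -- MYDB is a placeholder; truncate_only breaks the log-dump chain
    dump tran MYDB with truncate_only
end
go
set rowcount 0
go
```

This should keep peak log usage bounded by the batch size instead of by the full 30M-row update, but I'd welcome corrections if I've missed something.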
Thanks all for your valuable input.