on 04-15-2013 4:07 PM
Hi,
We have a table of 12 million records, and we want to update one column in 1.9 million rows to mark those records as containing old values. There are no indexes on that table; the primary key is 5 columns, the table has 8 columns, and the update column is not in the PK.
When we do a simple COUNT(*) over these rows, we see (in Task Manager) dbeng9.exe grow from 36 MB to 400 MB, and it takes a while. It's as if the whole table is loaded into memory. Is this normal behavior? How can we speed this up? This job has to run once a week, and we can't keep the table in memory just for that.
ASA 9.0.2.3702
PB 11.5
Hi Danny,
The “SELECT COUNT(*)” statement must sequentially scan every row in the table, which could account for the poor performance you are seeing. It returns the number of rows in the table as of the last commit. As an alternative, you could try the statement:
SELECT count FROM SYSTABLE where table_name='YourTableName';
This will return the number of rows in the table as of the last checkpoint. The server performs checkpoints periodically, or you can invoke one yourself with the “CHECKPOINT” statement. The query against SYSTABLE should execute much faster than a full scan.
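Putting the two steps together, a minimal sketch (assuming the connection has the authority to issue CHECKPOINT, and with 'YourTableName' standing in for the real table name) might look like:

```sql
-- Force a checkpoint so the stored row count is brought up to date
CHECKPOINT;

-- Read the row count recorded at the last checkpoint (no table scan)
SELECT count FROM SYSTABLE WHERE table_name = 'YourTableName';
```

Note that this returns the total row count for the table, not the count of a filtered subset, so it only replaces an unqualified COUNT(*).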
Hope this helps,
Mikel Rychliski
SAP Active Global Support