slow select in big table

Former Member

Hi,

We have a table with 12 million records, and we want to update one column of 1.9 million rows so that we know those records hold old values. The table has 8 columns and no indexes apart from the primary key, which consists of 5 columns; the column being updated is not part of the primary key.

When we do a simple COUNT(*) over these rows, we see (in Task Manager) dbeng9.exe grow from 36 MB to 400 MB, and it takes a while. It's as if the entire table is being loaded into memory. Is this normal behavior? How can we speed it up? This has to happen once a week, and we can't keep the table in memory just for that.

ASA 9.0.2.3702

PB 11.5

Accepted Solutions (0)

Answers (1)


former_member244518
Participant

Hi Danny,

The "SELECT COUNT(*)" statement must sequentially scan every row in the table, which could explain the poor performance you are seeing. It returns the number of rows in the table as of the last commit. As an alternative, you could try this statement:

SELECT count FROM SYSTABLE WHERE table_name = 'YourTableName';

This returns the number of rows in the table as of the last checkpoint. The server performs checkpoints periodically, or you can trigger one yourself with the "CHECKPOINT" statement. This query should execute much faster than a full scan.
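Putting the two together: since SYSTABLE.count is only refreshed at checkpoint time, you can force a checkpoint just before reading it. A minimal sketch, assuming your table is named YourTableName:

```sql
-- Force a checkpoint so SYSTABLE.count reflects the current row count
CHECKPOINT;

-- Read the cached row count instead of scanning the 12-million-row table
SELECT count
  FROM SYSTABLE
 WHERE table_name = 'YourTableName';
```

Note that this gives the total row count for the table, not the count of a filtered subset; a COUNT(*) with a WHERE clause still has to scan (or use an index on) the qualifying rows.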

Hope this helps,

Mikel Rychliski

SAP Active Global Support