
Change in performance of INSERT after IQ Page Size change

kc-676583
Explorer

Hi IQ Experts,

We have been testing our applications with a change in the IQ page size on our database.

We noticed that the INSERT statements on our biggest table slowed down by about 50% with the higher IQ page size. We have not noticed any other SQLs slowing down. We have also noticed that while disk utilization has remained below 30% on the database, total IO has doubled with the change in IQ page size.

We changed the IQ Page Size and Hardware RAID stripe size from 128k to 256k between the tests. Everything else remained the same.

The table in question has six columns (a date, two varchar(50), a smallint, a decimal(32,8) and a char(1)). It has a primary key composed of five columns, and we have an HG index on each of the columns. The table has 5 billion rows. The number of rows inserted per batch is typically around 20,000. We are looking at removing some of the HG indexes.

Are there scenarios where an INSERT statement could slow down with an increase in IQ page size? Is this something we can try changing to improve this?

Regards, KC

Accepted Solutions (1)

RobertWaywell
Product and Topic Expert

The performance issue is due to your insert batch size relative to your database page size. When you modify records in the database - inserting new records in this case - the modified database pages need to be written to disk. Since SAP IQ is a column-oriented database, each column in a table is stored on a separate set of pages.

Inserting a single record would require IQ to write out at least 1 page to disk for each column in the table. By batching your inserts, you reduce the total number of page writes that have to happen, so you are on the right track with the batching.

However, the largest columns in your table are the 2 VARCHAR(50) columns. At a batch size of 20,000 inserts, even if the full 50 bytes is used for each record, you only have about 976 KB per insert batch ( [20,000 records x 50 bytes] / 1024 bytes/KB ). That means IQ is writing 8 pages per VARCHAR(50) column with each batch. That is fairly reasonable, but even with the 128 KB page size that column would benefit from a larger batch size. For your CHAR(1) column you only have ~20,000 bytes = ~19.5 KB per batch, which is far less than even your original 128 KB page size.
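The per-column arithmetic above can be sketched quickly. This is just back-of-envelope Python using the column widths and batch size from the question (the byte widths for the date, smallint and decimal columns are my assumptions, and IQ's actual on-disk encoding adds compression and metadata, so real page counts will differ):

```python
import math

PAGE_SIZE = 128 * 1024  # original 128 KB page size
BATCH = 20_000          # rows per insert batch

# Approximate uncompressed bytes per value for each column.
columns = {
    "date": 4,           # assumed width
    "varchar50_a": 50,   # assuming the full 50 bytes is used
    "varchar50_b": 50,
    "smallint": 2,
    "decimal_32_8": 16,  # assumed width
    "char1": 1,
}

for name, width in columns.items():
    batch_bytes = width * BATCH
    pages = math.ceil(batch_bytes / PAGE_SIZE)
    print(f"{name}: {batch_bytes / 1024:.1f} KB per batch -> {pages} page(s)")
```

The two VARCHAR(50) columns come out at 8 pages per batch, while the CHAR(1) column fills only a fraction of a single page, which is where the wasted write bandwidth comes from.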

With the new 256 KB page size, IQ now has to write 256 KB per page update to disk, which is why the I/O has doubled. Since it takes longer to write 256 KB to disk than it did to write 128 KB, each batch of 20,000 inserts takes longer to complete.

Doubling your batch size to 40,000 inserts should show a clear improvement in performance, but given the size of some of the columns (for example your CHAR(1) column), you will likely find that the optimal batch size is quite a bit larger than 40,000.
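As a rough sizing heuristic (my own sketch, not an IQ-documented formula), you could pick a batch size large enough that even the narrowest column fills at least one full page:

```python
import math

PAGE_SIZE = 256 * 1024  # new 256 KB page size

# Narrowest column in the table is CHAR(1) -> ~1 byte per value.
narrowest_bytes = 1

# Rows needed so the CHAR(1) column alone fills one full page.
min_batch = math.ceil(PAGE_SIZE / narrowest_bytes)
print(min_batch)  # 262144 rows - far above the current 20,000
```

In practice you would balance this against memory and latency constraints on the loading application, but it illustrates why 40,000 is still well short of the point where the CHAR(1) column stops wasting page writes.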

Answers (0)