
Maximum number of rows before ABAP "commit work" on HANA

Fukuhara
Advisor

Hi experts!

I have a question about "commit work".

When inserting a very large number of records into a table before "COMMIT WORK", is it better to split the insert into smaller chunks, each followed by its own "COMMIT WORK", to make the process more stable?

What I'm afraid of is that a short dump occurs during "COMMIT WORK" due to a memory error or something similar.
In my environment, I can insert 2,500,000 records and commit them in one go on ABAP on HANA.
If that turns out to be unstable, I would loop over the process and split it into smaller portions.
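The loop-and-divide approach could be sketched like this in ABAP. Note this is only a sketch: the internal table lt_data, the database table ztable, and the chunk size of 100,000 are assumptions, not taken from the thread.

```abap
" Sketch only: lt_data, ztable and the chunk size are assumptions.
CONSTANTS lc_chunk TYPE i VALUE 100000.

DATA lt_chunk LIKE lt_data.
DATA lv_from  TYPE i.

lv_from = lc_chunk + 1.

WHILE lines( lt_data ) > 0.
  " Copy the next chunk, then remove those rows from the source table.
  lt_chunk = lt_data.
  DELETE lt_chunk FROM lv_from.   " keep only rows 1..lc_chunk
  DELETE lt_data TO lc_chunk.     " drop rows 1..lc_chunk from the source

  INSERT ztable FROM TABLE lt_chunk.
  IF sy-subrc = 0.
    COMMIT WORK.                  " one database transaction per chunk
  ELSE.
    ROLLBACK WORK.                " earlier chunks stay committed!
    EXIT.
  ENDIF.
ENDWHILE.
```

The caveat in the last comment is the important part: once a chunk is committed, a later failure cannot undo it, so the application must be able to live with (or clean up) a partially loaded table.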

I saw some discussions about Oracle.

They said large transactions may become unstable there due to a shortage of UNDO space.

But HANA basically has large memory on server, and I don't know the best practices for ABAP on HANA.

https://archive.sap.com/discussions/thread/1103591

https://archive.sap.com/discussions/thread/1290462

Regards,

Yohei

Accepted Solutions (1)


lbreddemann
Active Contributor

Your notion of "unstable" is odd. This choice of words implies that there is something wrong with the transaction processing of the DBMS used by your ABAP stack.

That's very likely not the case; instead, this is normal and correct behavior. However, no supported DBMS provides transaction management that is independent of the transaction volume. Systems with MVCC have to keep both the original and the changed versions of data available in some way: Oracle does that with UNDO tablespaces stored on disk, while SAP HANA uses in-memory undo files.

If these storage facilities are filled up, the transaction will be aborted by either DBMS as successful processing cannot be guaranteed anymore.

This means, of course, that the DBMS needs to be correctly sized and configured to cope with large transaction volume. You can do that with both Oracle and SAP HANA.

Instead of being afraid of "unstable" functionality and looking into discussions that have been closed more than a decade ago, it would be better to understand how ABAP transaction processing and DB transaction processing fit together.

The question of whether or not you can split up one transaction chiefly depends on what your data validity rules allow. If you are e.g. loading into a kind of staging area then it may be perfectly fine and even beneficial to cover the workload in multiple, maybe parallel transactions. If, on the other hand, the data is only correctly usable as a whole, then allowing an intermediate inconsistent state to be committed comes with a heap of additional challenges as your application code would have to handle this.
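For the staging-area case, one common pattern is to commit per package but make the load visible only at the end, so that intermediate commits never expose a half-loaded data set. A minimal ABAP sketch, where zstage, zload_hdr, lt_packages and the load key are all hypothetical:

```abap
" Sketch: zstage and zload_hdr are hypothetical tables. Each package is
" committed on its own; a header row is flipped to 'DONE' only at the
" end, so consumers that check the header never see a partial load.
DATA lv_load_id TYPE i VALUE 4711.      " assumed key of this load

LOOP AT lt_packages INTO DATA(lt_package).
  INSERT zstage FROM TABLE lt_package.
  IF sy-subrc <> 0.
    ROLLBACK WORK.
    RETURN.
  ENDIF.
  COMMIT WORK.                          " intermediate, per-package commit
ENDLOOP.

" Only now does the load become visible as a whole.
UPDATE zload_hdr SET status = 'DONE' WHERE load_id = lv_load_id.
COMMIT WORK.
```

This keeps each database transaction small while preserving all-or-nothing visibility at the application level; the trade-off is that failed loads leave staging rows behind that a cleanup job has to remove.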

Concerning your last comment: Oracle allows an automatic extension of the UNDO tablespace which could avoid aborting transactions due to lack of UNDO storage capacity. SAP HANA cannot extend the server RAM or start to page to disk once all memory resources are used up.

Answers (0)