
Hitting the 2 billion record limit on HANA on a TDMS read - why? Table is partitioned

Hi

Seems we've hit the HANA '2 billion records' issue!

During the "Preparation of Tables for Data Transfer" step of a TDMS time slice transfer, we get a short dump in the HANA Sender system:

DBSQL_SQL_ERROR - CX_SY_OPEN_SQL_DB

SQL error "SQL code: 129" occurred while accessing table "BSEG".
Database error text: "SQL message: transaction rolled back by an internal error: Search result size limit exceeded: 4776242334"

I've read OSS Note 2154870 ("How-To: Understanding and defining SAP HANA Limitations") and a few related forum threads. However, the table the error occurs on in the sender system (a HANA system) is BSEG, which is already partitioned (6 partitions of approx. 680 million rows each, totalling 4.7 billion rows).
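
For reference, here is a quick way to check the per-partition row counts on the HANA sender (my own sketch, assuming BSEG is a column-store table, in which case M_CS_TABLES shows one row per partition):

SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT
  FROM M_CS_TABLES
 WHERE TABLE_NAME = 'BSEG'
 ORDER BY PART_ID;

In our case no single partition is anywhere near the 2 billion row mark.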

Why are we still hitting the limit - is the TDMS code somehow not taking partitions into account?

The TDMS system itself runs on Sybase ASE, but that shouldn't matter.

Cheers
Ross


1 Answer

  • Mar 19 at 10:14 AM

    Fix provided by SAP - it's in an SAP Note, but it's as clear as mud.

    "You can use the view V_CNVMBTPARAMS to maintain entry in CNVMBTPARAMS table. NODATA_SET_NEW_SELECTION to X in cnv_mbt_params table. However, this is not required as the Z-report mentioned in the OSS note (2295463) already makes the customizing required to prevent this issue for current package as well as future packages"
