Hope everyone is doing fine!
We have our own data management framework implemented using SAP HANA XS and SQLScript.
We have a stored procedure that works like a SAP BW DTP; it calls a SQLScript procedure that works like a SAP BW Transformation and returns the result package as a table variable.
The "DTP"-like procedure then joins this table variable with an actual SAP HANA column table (which plays the role of the SAP BW DSO active table) to detect the delta: the changed records, the new records and the deleted records.
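For context, here is a minimal sketch of what our delta detection looks like. All names ("ACTIVE_TABLE", KEY1/KEY2, COL1/COL2, the table variables) are placeholders, not our real objects, and the delete detection assumes the package is a full snapshot:

```sql
-- Inside the "DTP"-like procedure; :lt_package is the table variable
-- returned by the transformation-like procedure.
lt_delta = SELECT p.KEY1, p.KEY2, p.COL1, p.COL2,
                  CASE WHEN a.KEY1 IS NULL THEN 'N'   -- new record
                       ELSE 'U'                       -- changed record
                  END AS CHANGE_MODE
             FROM :lt_package AS p
             LEFT OUTER JOIN "ACTIVE_TABLE" AS a
               ON  p.KEY1 = a.KEY1
               AND p.KEY2 = a.KEY2
            WHERE a.KEY1 IS NULL                      -- not yet in the active table
               OR p.COL1 <> a.COL1
               OR p.COL2 <> a.COL2;                   -- payload changed

-- Deleted records: keys present in the active table but missing from the package
lt_deleted = SELECT a.KEY1, a.KEY2
               FROM "ACTIVE_TABLE" AS a
              WHERE (a.KEY1, a.KEY2) NOT IN (SELECT KEY1, KEY2 FROM :lt_package);
```

In reality the payload comparison covers far more columns than the two shown here.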
The table variable containing the result package is not that big, around 50K records. The physical column table (mimicking the active table) is much bigger (around 50M records and growing), and the number of columns to be compared is quite high. The two columns involved in the join are defined as the primary key of the column store table, so I assume a concat attribute has already been created for them.
But what we don't know much about is the behaviour of the table variable. How is the join between a table variable and a physical column table actually executed? I assume the table variable gets materialized first. Will concat attribute columns still be created on this materialized intermediate result? Does HANA still create and maintain translation tables behind the scenes?
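So far, the only way we have found to observe what the optimizer does is to explain the join as a standalone statement against a physical copy of the package data (statement name and table names below are placeholders); for the real procedure we look at PlanViz on the CALL statement instead. Maybe someone can confirm whether a materialized table variable would show up as an internal temporary table here:

```sql
-- Explain the standalone join; "PACKAGE_COPY" is a placeholder table
-- holding a copy of the ~50K package rows.
EXPLAIN PLAN SET STATEMENT_NAME = 'delta_join' FOR
SELECT p.KEY1, p.KEY2
  FROM "PACKAGE_COPY" AS p
  LEFT OUTER JOIN "ACTIVE_TABLE" AS a
    ON p.KEY1 = a.KEY1 AND p.KEY2 = a.KEY2;

-- Inspect the plan: which join engine/operator is used and how large
-- the intermediate results are.
SELECT operator_name, operator_details, table_name, output_size
  FROM explain_plan_table
 WHERE statement_name = 'delta_join'
 ORDER BY operator_id;
```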
As the number of records in the active column store table grew in production, we realised that these joins got slower and slower, even though the number of records in the table variable stayed constant at around 50K.
Is there a way to force this join to behave similar to what we do in ABAP SQL with "FOR ALL ENTRIES IN ...", i.e. filtering the big column table by the key set only, without triggering translation table maintenance (if I am not wrong and that is what HANA is really doing behind the scenes, of course)?
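The workaround we are currently considering is sketched below, under the assumption (which we'd like confirmed) that a key-only semi-join can use the primary key / concat attribute and keeps the wide payload comparison off the full 50M-row table. Again, all names are placeholders:

```sql
-- Step 1: reduce the big active table to only the rows whose keys occur
-- in the ~50K-row package, similar in spirit to ABAP "FOR ALL ENTRIES IN".
lt_keys = SELECT DISTINCT KEY1, KEY2 FROM :lt_package;

lt_active_subset = SELECT a.*
                     FROM "ACTIVE_TABLE" AS a
                    WHERE (a.KEY1, a.KEY2) IN (SELECT KEY1, KEY2 FROM :lt_keys);

-- Step 2: the many-column comparison now runs on at most ~50K rows per
-- side instead of against the full active table.
lt_delta = SELECT p.*
             FROM :lt_package AS p
             LEFT OUTER JOIN :lt_active_subset AS s
               ON  p.KEY1 = s.KEY1
               AND p.KEY2 = s.KEY2
            WHERE s.KEY1 IS NULL
               OR p.COL1 <> s.COL1
               OR p.COL2 <> s.COL2;
```

Would this actually avoid the translation table overhead, or does the IN-subquery against a table variable just move the same problem one step earlier?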
Any help would be appreciated.