Jun 17, 2020 at 05:22 PM

Reading SAP HANA data in chunks to reduce memory usage


We have 50+ million rows to transform/process, and reading all of the data at once consumes a huge amount of memory.

Is there an option to read the data in chunks and process each chunk before moving on to the next one? For example: read 1 million rows at a time, process them, then fetch the next 1 million rows.

We have a chunk-size read option for Oracle, and with it we can handle 100+ million rows with low memory usage, since memory is consumed only for the chunk being read at any given time.
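One way to approach this from a client application: SAP HANA's Python driver (`hdbcli`) exposes a Python DB-API 2.0 interface, so a cursor's `fetchmany()` can pull rows in fixed-size chunks rather than loading the whole result set. The sketch below is an assumption-laden illustration, not a HANA-specific recipe: it uses an in-memory SQLite database as a stand-in so it can run anywhere, and the table name, chunk size, and helper function are invented for the example. The same `fetchmany()` pattern should apply to any DB-API-compliant driver, though connection details differ per driver.

```python
import sqlite3

def fetch_in_chunks(cursor, sql, chunk_size):
    """Run a query and yield lists of rows, at most chunk_size per list.

    Only one chunk is materialized in memory at a time; the rest of the
    result set stays on the server/driver side until fetched.
    """
    cursor.execute(sql)
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        yield rows

# Demo: an in-memory SQLite table standing in for a large HANA table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(i, i * 1.5) for i in range(10)])

total = 0.0
for chunk in fetch_in_chunks(cur, "SELECT id, amount FROM sales",
                             chunk_size=3):
    # Process one chunk, then discard it before fetching the next.
    total += sum(amount for _, amount in chunk)

print(total)  # sum of i * 1.5 for i in 0..9
```

With a real HANA connection the cursor would come from `hdbcli.dbapi.connect(...)` instead of `sqlite3`, and `chunk_size` would be set to something like 1,000,000 as described above; the processing step inside the loop is where the transformation logic would go.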