Hello community,
I'm writing this question in relation to the excellent blog "Tips for running CAP Node.js in SAP BTP" and SAP Note "3219884 - CAP application (written in node.js) does not release memory".
As suggested there, we set the environment variable OPTIMIZE_MEMORY to optimize Node.js memory usage and garbage collection.
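For reference, this is roughly how we set it, sketched here as an entry in the application's manifest.yml (the variable name comes from the SAP Note; the app name and the value `true` are placeholders of mine, please check the note for the exact value it expects):

```yaml
applications:
  - name: my-cap-srv          # hypothetical app name
    env:
      # From SAP Note 3219884 / the CAP tips blog; value assumed truthy
      OPTIMIZE_MEMORY: true
```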
However, I'm now encountering issues when using it together with the Application Autoscaler service.
As I previously commented on the mentioned blog, it appears that in some cases memory is not released, so the additional instance started by the Autoscaler is continuously created and destroyed.
This is my situation:
I send multiple requests to the endpoint: memory usage increases as expected (e.g. to 78%) and a new instance is created.
After the 2nd instance is created, its memory usage stays low (e.g. around 50%), so it is correctly destroyed.
Meanwhile, the memory of the 1st instance is still at 78%, so a 2nd instance is created and destroyed again... and so on.
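To illustrate, our scaling rules look roughly like the following sketch of an Application Autoscaler policy (the thresholds 70/40 and the instance counts are placeholders, not our real values, but the flapping happens whenever the 1st instance's memory never drops below the scale-in threshold):

```json
{
  "instance_min_count": 1,
  "instance_max_count": 2,
  "scaling_rules": [
    {
      "metric_type": "memoryutil",
      "threshold": 70,
      "operator": ">=",
      "adjustment": "+1"
    },
    {
      "metric_type": "memoryutil",
      "threshold": 40,
      "operator": "<",
      "adjustment": "-1"
    }
  ]
}
```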
It seems that despite using OPTIMIZE_MEMORY, memory is not being released in a way that falls below the scale-in threshold defined for the Autoscaler.
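One thing I'm trying to check: as far as I understand, the Autoscaler's memoryutil metric is based on the container's total memory, while V8 may keep freed heap internally without returning it to the OS. A minimal sketch to compare the two from inside the app (process.memoryUsage() is standard Node.js; where and how often to log it is up to you):

```javascript
// Sketch: compare V8 heap usage with the process's resident set size (RSS).
// A large gap between rss and heapUsed suggests memory freed inside V8
// but not returned to the OS -- which is what the Autoscaler still "sees".
const { rss, heapTotal, heapUsed } = process.memoryUsage();
const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1) + " MB";

console.log(`rss:       ${mb(rss)}`);
console.log(`heapTotal: ${mb(heapTotal)}`);
console.log(`heapUsed:  ${mb(heapUsed)}`);
```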
Have you ever faced a similar situation? Any advice, or am I missing something?
Thanks