on 08-18-2015 4:16 PM
All,
I am having an issue where the XS engine is running into an out-of-memory allocation error. I am using HANA SP09 (1.0.96).
I have an XSJS service which calls a stored procedure returning 180k records. If I add a filter, the XS API returns successfully; however, if I return the bigger result set (180k rows in the database), I get a "service unavailable" error. The error on the performance tab states "invalid protocol or service shutdown during distributed query execution". A recent post suggested adding a jsvm property in xsengine.ini and setting it to 100 GB, but that solution didn't work in my case.
While debugging, I get the error while assigning the output table type from the stored procedure call into a JS variable.
any other suggestion is highly appreciated.
Hi Sergio,
I've seen this sort of situation before; it's turning into a classic, I think. I assume it's a CSV file generated in XSJS. Did I get that right?
Why are you using the XS engine for mass data extraction? It's definitely not meant for that. If you want to perform such a huge extraction, you should use a suitable tool (such as ODBC or JDBC).
Generating 180k records (times the average number of bytes per column), you could easily hit the XS engine memory allocation limit, thus OOM'ing your service. And that is entirely expected, given that the XS engine is a lightweight web container.
In other words, it's better to review your application and use a suitable tool for that kind of job instead of squeezing so much data into an HTTP-based conversation and hoping that your service will keep up.
BRs,
Lucas de Oliveira
Admittedly this is one area where we know we have a gap with XS. XS was always designed for lightweight pass-through. The rule - never send more data to XS than what you want to present to the client - works fine for UI. In a table control you can use data paging to work with a large result set but only send a small chunk at a time to the browser. HANA is powerful enough that it doesn't mind re-executing the same query over and over again with different limit/offset parameters. This is exactly what OData does with the $top/$skip parameters. However, this approach breaks down once you have a file download, at least as long as you have to put the entire response body into a single variable.
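The limit/offset paging Thomas describes can be sketched in plain JavaScript. This is a minimal illustration only: the hypothetical `fetchPage` stands in for the re-executed SQL query (in XSJS that would be a prepared statement with `LIMIT ? OFFSET ?`), and all names are made up for the example.

```javascript
// Simulated data source: in XSJS this would be a SQL query with
// LIMIT ? OFFSET ? re-executed once per page; here it is just an array.
const allRows = Array.from({ length: 180 }, (_, i) => ({ id: i }));

// Hypothetical page fetcher: at most `limit` rows starting at `offset`.
function fetchPage(offset, limit) {
  return allRows.slice(offset, offset + limit);
}

// Walk the result set one page at a time, so peak memory is bounded
// by the page size rather than by the full result set.
function processAll(pageSize, handlePage) {
  let offset = 0;
  for (;;) {
    const page = fetchPage(offset, pageSize);
    if (page.length === 0) break; // no more rows
    handlePage(page);
    offset += page.length;
  }
  return offset; // total rows processed
}

const total = processAll(50, (page) => {
  // e.g. render this chunk in the UI, then discard it
});
console.log(total); // 180
```

This is exactly the shape of an OData $top/$skip loop: the client asks for the next window, the server re-runs the query with new bounds, and no single call ever materializes the whole table.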
We always planned to address this by adding chunked/streaming response capabilities to the $.response API. That way you could read a section of the data, write that section into the response, have that part forwarded to the client, and then start on the next data window. The memory at the XS level would then never have to grow beyond a set point.
Although we never got past the design phase on this part of the $.response API, the good news is that Node.js is planned to come to XS very soon. Node.js brings with it a whole set of functionality around chunked and streamed responses and even WebSockets. It even has modules specific to the download operation, such as fast-download. I know this doesn't help you solve your problem using XS today, but hopefully it shows that the future holds considerable promise in this area.
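The chunked-response idea can be sketched without an HTTP server. In this rough illustration a generator yields one window of the body at a time, standing in for a future chunked $.response or a Node.js stream write; every name here is illustrative, not a real API.

```javascript
// Generator that yields the response body one window at a time,
// mimicking what a chunked response would send to the client.
function* rowWindows(totalRows, windowSize) {
  for (let offset = 0; offset < totalRows; offset += windowSize) {
    const end = Math.min(offset + windowSize, totalRows);
    const lines = [];
    for (let i = offset; i < end; i++) lines.push(`row-${i}`);
    yield lines.join('\n') + '\n'; // one chunk of the download
  }
}

// Consume chunk by chunk: only one window is ever held in memory,
// regardless of how large the full "file" is.
let chunks = 0;
let totalLines = 0;
for (const chunk of rowWindows(1000, 128)) {
  chunks += 1; // in a server, this is where res.write(chunk) would go
  totalLines += chunk.trim().split('\n').length;
}
console.log(chunks, totalLines); // 8 1000
```

In real Node.js code the loop body would be a `res.write(chunk)` (or the generator would back a `Readable` stream piped into the response), which is precisely the capability the $.response API lacked.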
Former Member, thank you for the response. Yes, we are aware this would not be an issue if we used OData to feed a table in SAPUI5; however, this is a web service call with no UI. I was able to solve the issue by doing a STRING_AGG in the stored procedure and then splitting the result into an array in XS. This solved my issue and I was able to return the data via the web call. I still think we need a different approach, but for now we should be good.
Thank you for the prompt response, and I'm looking forward to using Node.js soon. Any idea if this will be a feature in SP11? SP12? Or has it not been decided yet?
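A minimal sketch of the STRING_AGG workaround described above, assuming '|' and ';' as field and row separators. The SQL side is shown only as a comment, and the column and function names are illustrative, not Sergio's actual code.

```javascript
// Database side (illustrative): collapse the rows into one string so the
// procedure returns a single scalar instead of a 180k-row table type, e.g.
//   SELECT STRING_AGG(col1 || '|' || col2, ';') AS agg FROM my_table;

// XS side: split the aggregated string back into rows and fields.
function parseAggregated(agg, rowSep, fieldSep) {
  return agg
    .split(rowSep)
    .filter((row) => row.length > 0)
    .map((row) => row.split(fieldSep));
}

const agg = '1|alpha;2|beta;3|gamma';
const rows = parseAggregated(agg, ';', '|');
console.log(rows.length); // 3
console.log(rows[1]);     // [ '2', 'beta' ]
```

Note the trade-off: this avoids materializing a large table type in the XSJS variable, but the aggregated string itself still has to fit in memory (and, as noted later in the thread, HANA's own string-length limits apply), so it only pushes the ceiling higher rather than removing it.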
>looking forward to using node.js soon... any idea if this will be a feature on SP11? 12? or has it not been decided yet?
There is a hit squad of SAP lawyers that would hunt me down if I answered that question. Let's just say that you should keep a close eye on the news that comes out of TechEd in October/November.
Hi Thomas,
Same problem here. I need to generate and download a BIG text file. Knowing that XS doesn't handle big result sets that well, we usually generate the file at the database level, concatenating the strings into a CLOB.
But, unfortunately, when using the SQL function CONCAT in HANA, the maximum length of the concatenated string is 8,388,607 characters, and we need more than that. Sending it to the application causes an out-of-memory error and, as you said above, it's not a good idea.
I was trying to understand this section of the XS Reference:
Can the second parameter of the setBody method not be used for a multi-part response?
setBody(body, index)
Sets the body of the entity. The method supports all elemental JavaScript types, ArrayBuffers, WebResponses and WebEntityResponses. When using a WebResponse or WebEntityResponse as the argument, the headers identifying the body as a WebResponse/WebEntityResponse (Content-Type: application/http, content-transfer-encoding: binary) are automatically added to the parent WebEntityResponse.
| Name | Type | Argument | Description |
|---|---|---|---|
| body | any \| ArrayBuffer \| $.web.WebResponse \| $.web.WebEntityResponse | | Can be any elemental JavaScript type, an ArrayBuffer, a WebResponse or a WebEntityResponse. |
| index | Number | optional | If the first argument is of type ResultSet, the number specifies the index of a Blob column. |
And we also have this example ;
```javascript
// Handling of multipart requests and responses in xsjs files:
var i;
var n = $.request.entities.length;
var client = new $.net.http.Client();
for (i = 0; i < n; ++i) {
  var childRequest = $.request.entities[i].body.asWebRequest();
  client.request(childRequest, childRequest.headers.get("Host") + childRequest.path);
  var childResponse = client.getResponse();
  var responseEntity = $.response.entities.create();
  responseEntity.setBody(childResponse);
}
```
But neither of those was able to handle a multi-part response for downloading... That leaves me with my hands tied: I can't generate the file using the database nor using XS, so I have to go back and generate it using ABAP. I look forward to the improvements in XS to handle this kind of situation.
Thanks a lot!
Hello Sergio,
Well, for the time being you could bypass XS and use another client application over JDBC/ODBC. It will surely demand extra development effort, but it is a workaround given the nature of this application (web mass data extraction).
However, I understand you were able to make it work after tweaking your application. Did I get it wrong?
BRs,
Lucas de Oliveira
Hi Sergio,
That's great. I've seen a scenario where, by tweaking the XSJS, it was possible to increase the 'extraction' sizes, but only up to a certain point. When more records were desired (far fewer than yours, by the way), XS would OOM. I hope you don't fall into that scenario.
P.S.: What about sharing your scenario in a blog post?
BRs,
Lucas de Oliveira
Hi Sergio,
Can you provide us more information about how you made it work using a stored procedure?
I worked on an XSJS service a few weeks ago that was working fine exporting almost 1M rows (from a table with 30 columns), but it turned out to be an unstable solution, as it suddenly stopped working later.
Thanks!
Andre Rodrigues
Hi Andre,
Please check out my quick blog post about my approach:
http://scn.sap.com/people/sergio.guerrero/blog/2015/08/21/xsjs-out-of-memory--maybe-not
I hope this helps