Dear colleagues,
while running a custom report on a NW EP 7.0, I'm getting the following exception:
Exception com.sap.engine.services.connector.exceptions.BaseResourceException: Cannot get connection for 120 seconds. Possible reasons: 1) Connections are cached within SystemThread(can be any server service or any code invoked within SystemThread in the SAP J2EE Engine), 2) The pool size of adapter "SAPSR3DB" is not enough according to the current load of the system or 3) The specified time to wait for connection is not enough according to the pool size and current load of the system. In case 1) the solution is to check for cached connections using the Connector Service list-conns command, in case 2) to increase the size of the pool and in case 3) to increase the time to wait for connection property. In case of application thread, there is an automatic mechanism which detects unclosed connections and unfinished transactions.
    at com.sap.engine.services.connector.jca.ConnectionHashSet.match(ConnectionHashSet.java:311)
    at com.sap.engine.services.connector.jca.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:262)
    at com.sap.engine.services.dbpool.cci.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:51)
    at com.sap.caf.eu.gp.base.db.ConnectionPoolJ2EE.getConnection(ConnectionPoolJ2EE.java:89)
    at com.sap.caf.eu.gp.base.db.TransactionController.openTransaction(TransactionController.java:159)
    at com.sap.caf.eu.gp.model.co.execute.impl.CallableObjectExecutorPersistenceImpl.openTransaction(CallableObjectExecutorPersistenceImpl.java:126)
    at com.sap.caf.eu.gp.model.co.execute.impl.CallableObjectExecutorPersistenceImpl.readOverdueTasks(CallableObjectExecutorPersistenceImpl.java:563)
    at com.sap.caf.eu.gp.model.co.execute.impl.CallableObjectExecutorTimeoutProcessor.process(CallableObjectExecutorTimeoutProcessor.java:160)
    at com.sap.caf.eu.gp.model.co.execute.impl.CallableObjectExecutorTimeoutProcessor.run(CallableObjectExecutorTimeoutProcessor.java:209)
    at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
    at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:83)
    at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:156)
which describes the cause of my problem pretty exactly: the number of open JDBC connections reaches the maximum allowed value. After this error occurs, the portal goes down, as it's not able to accept any further requests. To check whether NWA tells me the truth on this issue, I took a look at the JDBC monitoring graph in VA (Server * -> Services -> JDBC Connector -> DataSources -> SAPSR3DB -> Monitoring). It behaves just as the exception says: when I start my report, the number of open connections increases until it reaches the maximum value (50) and kills the EP.
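For reference, the list-conns check that the exception message suggests can be run from the J2EE Engine's telnet administration console. This is only the rough sequence; the host, port (5<NN>08, depending on the instance number) and login are specific to each system:

    telnet <host> 50008      (telnet port of instance 00; adjust for your instance)
                             (log in with an administrator user)
    jump 0                   (attach to a server node)
    add connector            (load the Connector service command group)
    list-conns               (list connections per connection factory, incl. SAPSR3DB)

In my case, this only confirmed what the VA graph already shows.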
I have already tried the following:
1.) Increasing the maximumConnections parameter in VA: this works, as the report needs a maximum of 65 connections to run through. The strange thing is that after the report is done, it doesn't release the used connections by itself. Only after approx. 10 minutes are the used connections released, I guess according to the connectionLifeTime parameter, which is set to 600s. I don't really like to suggest this solution for production systems; I think it's better to find the leak.
2.) Following up on that, I also created a debug session and tried to find out which part of my report causes the high number of open JDBC connections. The report does the following:
i.) It recursively reads a large number of KM files starting from a given folder and puts all XMLs into an IResourceList. While doing this, the number of open JDBC connections stays nearly constant at 0.
ii.) After that I iterate over the found files and read each file's content into a string:
....
try {
    // read the found XML or HTML file into a string
    String line;
    StringBuffer sb = new StringBuffer();
    InputStream inputStream = rs.getContent().getInputStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
    try {
        while ((line = reader.readLine()) != null) { sb.append(line); }
    } finally {
        // close in a finally block, so the KM content stream is released
        // even if readLine() throws in the middle of the file
        reader.close();
        inputStream.close();
    }
    xml = sb.toString();
}
iii.) Afterwards I parse the string for links; if I find an invalid one, I note it in the report results:
prooveLinks(xml, rs.getDisplayName(), receiver, docRid);
private void prooveLinks(String file, String filename, IResultReceiver receiver, RID rid)
{
    // parse the found KM file for links
    NodeList links = null;
    try {
        Parser parser = new Parser();   // feed the HTML via setInputHTML instead of the URL-style constructor
        parser.setInputHTML(file);
        links = parser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
    }
    catch (ParserException e) {
        log.debugT("Parser exception while executing invalid links report occurred. " + e.getMessage());
        return; // otherwise the loop below would run into a NullPointerException on 'links'
    }
.....
    for (int i = 0; i < links.size(); i++) {
        //Element link = (Element) links.get(i); -> jsoup (see the sketch after this listing)
        LinkTag link = (LinkTag) links.elementAt(i);
        String typ = "";
        String valid = "";
        String name = link.getLinkText();
        String href = link.getLink();
        .....
        if (StringUtils.startsWithIgnoreCase(href, "/irj/go/km/docs")) { typ = "Document"; valid = linkIsValid(href); }
        .....
        if (StringUtils.equalsIgnoreCase(valid, "false")) {
            // user notification would be possible here, e.g. an email to the responsible user
            Map pm = new HashMap();
            try {
                pm.put(InvalidLinksReport.m_sapDisplayname, new Property(InvalidLinksReport.m_sapDisplayname, filename));
                receiver.addItem(ResultItem.getInstance(rid, pm, new ArrayList(), true, "Broken link found! Link type: " + typ + ", Name: " + name + ", Target: " + href));
            }
......
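For comparison, the jsoup variant hinted at in the commented-out line above would look roughly like this. This is only a sketch, not what currently runs, and jsoup needs a Java 5+ VM, so it may not even be an option on this engine:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

Document doc = Jsoup.parse(file);         // 'file' is the KM file content as a string
Elements links = doc.select("a[href]");   // all anchor tags that carry an href attribute
for (Element link : links) {
    String name = link.text();            // corresponds to link.getLinkText()
    String href = link.attr("href");      // raw attribute value, corresponds to link.getLink()
    // ... same type and validity checks as above ...
}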
I look up KM links in the following manner:
// check KM document links
if (StringUtils.startsWith(path, "/irj/go/km/docs"))
{
    try { path = URLDecoder.decode(path, "UTF-8"); }
    catch (UnsupportedEncodingException e) { /* UTF-8 is always available */ }
    rid_path = StringUtils.substringAfter(path, "/irj/go/km/docs");
    IResource res = null;
    RID rid = RID.getRID(rid_path);
    try { res = ResourceFactory.getInstance().getResource(rid, ctx); }
    catch (ResourceException e1) { valid = "false"; return valid; }
    if (res != null) { valid = "true"; } else { valid = "false"; }
    res = null; // drop the reference right away
}
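A side note on volume: since the report resolves more than 10k links and many targets surely repeat across files, a simple cache around linkIsValid would at least cut down the number of ResourceFactory lookups. A minimal sketch (the linkCache field and the wrapper name are mine; java.util.Map/HashMap):

private Map linkCache = new HashMap(); // href -> "true"/"false"

private String linkIsValidCached(String href) {
    String cached = (String) linkCache.get(href);
    if (cached != null) { return cached; } // already resolved in an earlier file
    String valid = linkIsValid(href);      // the existing lookup shown above
    linkCache.put(href, valid);
    return valid;
}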
By doing this I realized the following:
It seems there is no specific statement that causes the increasing number of JDBC connections. I tried to debug step by step while keeping the VA JDBC connection pool in sight, but I couldn't find any statement that directly causes an increase in connections. Since I'm not accessing the database directly, I have no influence on opening or closing JDBC connections. The report reads a large amount of data, around 6-7k KM files with more than 10k invalid KM links. I have already tested this report on smaller data sets without reaching the connection pool limit.
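The only further idea I have is to correlate file throughput with the pool graph. A minimal instrumentation sketch for the iteration loop from step ii. (the counter and the interval of 500 are made up; log.debugT is the logger already used above):

int processed = 0;
// inside the existing iteration over the found files (step ii.),
// after reading the content and calling prooveLinks():
processed++;
if (processed % 500 == 0) {
    // timestamped marker; comparing these log entries with the VA JDBC
    // monitoring graph shows whether connections grow steadily with the
    // file count or jump at specific files
    log.debugT("invalid links report: processed " + processed + " files");
}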
I have no idea what to do next in such a case. My question is: where could the leak be? I am thankful for any clues on how to figure out this problem. Thank you in advance,
regards