Former Member

SAP Spark Controller fails to start due to a missing Java class

Hello SAP community,

I want to set up SAP HANA Spark Controller 2.0 SP01 PL01 using Ambari for a kerberized HDP 2.5 cluster.

For this purpose I am following the installation guide, and I have applied the fix for the missing datanucleus-* classes described in SAP Note 2386949.

However, when I start the service through Ambari, it stops automatically a few seconds later.

Looking at the "hana_controller.log" file, the Java exception "java.lang.ClassNotFoundException: javax.jdo.JDOQLTypedQuery" seems to be the root cause of the Hive error.

Which .jar do I need to include to resolve this issue? Or is it caused by another problem?
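
In case it helps with diagnosis, a quick way to check whether any jar already on the node ships this class is something like the following (the directories below are only examples for an HDP 2.5 layout and will need adjusting):

    for jar in /usr/hdp/current/hadoop-client/lib/*.jar /usr/sap/spark/controller/lib/*.jar; do
        # print every jar that already contains the missing class
        unzip -l "$jar" 2>/dev/null | grep -q 'javax/jdo/JDOQLTypedQuery.class' && echo "found in: $jar"
    done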

You can find the complete log as an attachment.

log-hana-controller.txt

Thanks in advance for your help.

Yong-Eun


1 Answer

  • Best Answer
    Former Member
    Jun 20, 2017 at 08:53 AM

    Hello,

    I found a solution to my problem. I downloaded version 3.2.0 of the JDO API from the Maven repository website and made it available by adding its path to the "HADOOP_CLASSPATH" value in the Spark Controller settings in Ambari.
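
    In case it is useful, the resulting setting looks roughly like this (a sketch only; the jar file name and directory are examples of where the download could be placed, and the exact form depends on the Ambari config field):

        # appended to the HADOOP_CLASSPATH value in the Spark Controller configuration in Ambari;
        # the jar location below is an example only
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/sap/spark/controller/lib/jdo-api-3.2.0.jar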

    I chose version 3.2.0 of the JDO API because I could not find the JDOQLTypedQuery class in earlier versions when browsing the DataNucleus javadocs.

    However, this leads to another problem with Hive: when Spark Controller is initialized, it throws an "org.datanucleus.api.jdo.exceptions.ClassNotPersistenceCapableException: The class "org.apache.hadoop.hive.metastore.model.MVersionTable" is not persistable."

    Yong Eun KIM


    • Former Member

      Hello,

      My last issue was caused by the fact that I had used the latest DataNucleus jars from the DataNucleus website for datanucleus-core, datanucleus-api-jdo and datanucleus-rdbms. I deleted these jars from my Hadoop cluster and used the jars included with HDP instead.
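
      Concretely, the cleanup was along these lines (a sketch only; the directories are examples and assume the downloaded jars had been copied next to the Spark Controller libraries):

          # remove the manually downloaded DataNucleus jars (example location)
          rm /usr/sap/spark/controller/lib/datanucleus-core-*.jar \
             /usr/sap/spark/controller/lib/datanucleus-api-jdo-*.jar \
             /usr/sap/spark/controller/lib/datanucleus-rdbms-*.jar
          # the versions bundled with HDP remain in place, e.g. under the Hive client libs
          ls /usr/hdp/current/hive-client/lib/datanucleus-*.jar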

      Yong Eun KIM