ERROR SPS10 com.sap.hana.spark.conf.HanaESConfig - Hana Extened Store Configuration is missing

Former Member

I'm using SAP HANA SPS10.

When I try to start the Spark Controller, I get the following output:

[root@quickstart bin]# ./hanaes start

+ export HANA_ES_HEAPSIZE=8172

+ HANA_ES_HEAPSIZE=8172

+ export HANA_ES_PIDFILE=/tmp/hana.spark.controller

+ HANA_ES_PIDFILE=/tmp/hana.spark.controller

++ dirname ./hanaes

+ bin=.

++ cd .

++ pwd

+ bin=/usr/sap/spark/controller/bin

+ DEFAULT_ESCONF_DIR=/usr/sap/spark/controller/bin/../conf

+ '[' -f /usr/sap/spark/controller/bin/../conf/hana_hadoop-env.sh ']'

+ . /usr/sap/spark/controller/bin/../conf/hana_hadoop-env.sh

++ export HADOOP_CONF_DIR=/etc/hadoop/conf

++ HADOOP_CONF_DIR=/etc/hadoop/conf

++ export HIVE_CONF_DIR=/etc/hive/conf

++ HIVE_CONF_DIR=/etc/hive/conf

+ '[' 1 -gt 1 ']'

+ '[' -e /conf/hadoop-env.sh ']'

+ DEFAULT_CONF_DIR=etc/hadoop/conf

+ export HADOOP_CONF_DIR=/etc/hadoop/conf

+ HADOOP_CONF_DIR=/etc/hadoop/conf

+ '[' -f /etc/hadoop/conf/hadoop-env.sh ']'

+ . /etc/hadoop/conf/hadoop-env.sh

+++ [[ ! /usr/lib/hadoop-mapreduce =~ CDH_MR2_HOME ]]

+++ echo /usr/lib/hadoop-mapreduce

++ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

++ HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

++ export 'YARN_OPTS=-Xms52428800 -Xmx52428800 -Djava.net.preferIPv4Stack=true '

++ YARN_OPTS='-Xms52428800 -Xmx52428800 -Djava.net.preferIPv4Stack=true '

++ export 'HADOOP_CLIENT_OPTS=-Djava.net.preferIPv4Stack=true '

++ HADOOP_CLIENT_OPTS='-Djava.net.preferIPv4Stack=true '

+ '[' -f /libexec/hdfs-config.sh ']'

+ [[ -z /usr/java/jdk1.7.0_67-cloudera ]]

+ JAVA=/usr/java/jdk1.7.0_67-cloudera/bin/java

+ '[' 8172 '!=' '' ']'

+ JAVA_HEAP_MAX=-Xmx8172m

+ CLASSPATH='/usr/sap/spark/controller/bin/../conf:/etc/hadoop/conf:/etc/hive/conf:../*:../lib/*:/*:/lib/*:/*:/lib/*'

+ CLASSPATH='/usr/jars/*:/usr/sap/spark/controller/bin/*'

+ HANAES_OUT=/var/log/hanaes/hana_controller.log

+ case $1 in

+ echo -n 'Starting HANA Spark Controller ... '

Starting HANA Spark Controller ... + '[' -f /tmp/hana.spark.controller ']'

++ cat /tmp/hana.spark.controller

+ kill -0 15035

+ echo ' Class path is /usr/jars/*:/usr/sap/spark/controller/bin/*'

Class path is /usr/jars/*:/usr/sap/spark/controller/bin/*

+ '[' 0 -eq 0 ']'

+ /bin/echo -n 20646

+ nohup /usr/java/jdk1.7.0_67-cloudera/bin/java -cp '/usr/jars/*:/usr/sap/spark/controller/bin/*' -XX:PermSize=128m -XX:MaxPermSize=256m -Xmx8172m com.sap.hana.spark.network.Launcher

+ sleep 1

+ echo STARTED

STARTED

### OUTPUT FROM THE LOG FILE

[root@quickstart bin]# cat /var/log/hanaes/hana_controller.log

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/jars/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/jars/livy-assembly-3.7.0-cdh5.4.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/jars/avro-tools-1.7.6-cdh5.4.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/jars/pig-0.12.0-cdh5.4.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]

[main] ERROR com.sap.hana.spark.conf.HanaESConfig - Hana Extened Store Configuration is missing

[main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Exception in thread "main" java.lang.IllegalArgumentException: Can not create a Path from an empty string

at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)

        at org.apache.hadoop.fs.Path.<init>(Path.java:135)

        at com.sap.hana.spark.network.Launcher$.setupClassPath(Launcher.scala:36)

        at com.sap.hana.spark.network.Launcher$delayedInit$body.apply(Launcher.scala:19)

        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)

        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)

        at scala.App$$anonfun$main$1.apply(App.scala:71)

        at scala.App$$anonfun$main$1.apply(App.scala:71)

        at scala.collection.immutable.List.foreach(List.scala:318)

        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)

        at scala.App$class.main(App.scala:71)

        at com.sap.hana.spark.network.Launcher$.main(Launcher.scala:17)

        at com.sap.hana.spark.network.Launcher.main(Launcher.scala)

Any help would be greatly appreciated.

Accepted Solutions (0)

Answers (1)


Hi James,

did you check SAP Note 2177933 already? It contains the installation and configuration guide.

Check it out and see if it works.
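For reference, the "Hana Extened Store Configuration is missing" line in your log usually means the controller cannot find its hanaes-site.xml in the conf directory that the start script puts on the classpath (/usr/sap/spark/controller/conf in your trace). Below is a rough sketch of what I would check; the property names are only an example from my own notes, so please take the exact keys and values from the note:

# Check that the controller configuration file exists and is readable
# by the hanaes OS user (conf/ is on the classpath in your trace).
ls -l /usr/sap/spark/controller/conf/hanaes-site.xml

# If it is missing, create it. The keys shown here are illustrative
# only; take the exact property names from SAP Note 2177933.
cat > /usr/sap/spark/controller/conf/hanaes-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <!-- port the HANA remote source connects to -->
    <name>sap.hana.es.server.port</name>
    <value>7860</value>
  </property>
  <property>
    <!-- backing store used by the controller -->
    <name>sap.hana.hadoop.datastore</name>
    <value>hive</value>
  </property>
</configuration>
EOF

chown hanaes /usr/sap/spark/controller/conf/hanaes-site.xml

# Restart the controller afterwards (my copy of the hanaes script
# accepts both stop and start).
/usr/sap/spark/controller/bin/hanaes stop
/usr/sap/spark/controller/bin/hanaes start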

Regards

Kingsley

Former Member

Hi Kingsley,

I am really sorry to hijack this post, but I have a question about SAP Note 2177933.

The note mentions that while creating the remote source we can provide any user name and password, so I used the "hanaes" user name and password:


CREATE REMOTE SOURCE "spark_demo" ADAPTER "sparksql"

CONFIGURATION 'port=7860;ssl_mode=disabled;server=<actual server>'

WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=hanaes';

But I am getting the error below:

"SAP DBTech JDBC: [403]: internal error: Cannot get remote source objects: Credential not found"


Please let me know which credentials I should use when creating the remote source.


Thank you.


Best regards,

Ram

marcowahler

Hi Ram,

I had the same issue. You need to change the password at OS level for the user "hanaes".

Log in as the root user and run: passwd hanaes
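Roughly, on the Hadoop node (the user name follows this thread; adjust it if your controller runs under a different OS user):

# As root: set a known OS password for the user the Spark Controller
# runs as; this is the password the HANA remote source credential
# has to match.
passwd hanaes

# Then drop and re-create the remote source in HANA with the same
# credentials, i.e. re-run the CREATE REMOTE SOURCE statement from
# the post above with user=hanaes and the password you just set.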

Nevertheless, I still have a connection issue: SAP DBTech JDBC: [403] ...

Cheers

Marco

Former Member
0 Kudos

Hello Marco,

I am also facing the same issue.

We are able to connect to the Hadoop system, but after creating a virtual table we are not able to see any content in it. We are getting the error below:

Could not execute 'SELECT TOP 1000 * FROM "HDPUSER"."spark_demo_products"' SAP DBTech JDBC: [403]: internal error: Error opening the cursor for the remote database for query "SELECT "spark_demo_products"."productid",

Are you facing the same issue, Marco?

Best regards,

Ram