Hi @marcos,
Thank you for your reply. I knew that Bitnami ships without the Hive jars, so I did exactly what you mentioned before starting the Thrift server. After looking at the logs, the issue appears to be with the Derby metastore that is included in the Bitnami stack.
Starting the Thrift server is actually a three-step process:
Step 1: Start master (This works)
$ sudo /opt/bitnami/hadoop/spark/sbin/start-master.sh
Step 2: Start slave (This also works)
$ sudo /opt/bitnami/hadoop/spark/sbin/start-slave.sh spark://hostname:7077
Step 3: Start Thrift server (This does NOT work)
$ sudo /opt/bitnami/hadoop/spark/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001 --hiveconf hive.server2.transport.mode=binary
As the terminal output indicates, logs for each step are written to
/opt/bitnami/hadoop/spark/logs
The Thrift server log is the one that shows the error related to the Hive metastore:
WARN metastore.HiveMetaStore: Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
.....
Derby as a metastore is very limited (it only allows one connection at a time). Some Hadoop users have switched from Derby to MySQL or PostgreSQL for the Hive metastore.
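For reference, the usual way to point the Thrift server at an external metastore is to supply the JDBC connection properties, either in hive-site.xml under the Spark conf directory or via --hiveconf flags at startup. A rough sketch of the latter, assuming a PostgreSQL database named hive_metastore on localhost and the PostgreSQL JDBC driver already on the classpath (host, database name, and credentials below are placeholders, not anything from the Bitnami image):

$ sudo /opt/bitnami/hadoop/spark/sbin/start-thriftserver.sh \
    --hiveconf hive.server2.thrift.port=10001 \
    --hiveconf hive.server2.transport.mode=binary \
    --hiveconf javax.jdo.option.ConnectionURL=jdbc:postgresql://localhost:5432/hive_metastore \
    --hiveconf javax.jdo.option.ConnectionDriverName=org.postgresql.Driver \
    --hiveconf javax.jdo.option.ConnectionUserName=hive \
    --hiveconf javax.jdo.option.ConnectionPassword=hivepassword

The same properties could instead live in hive-site.xml so they don't have to be repeated on every start.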
Has the team looked into fixing the Derby metastore issue, or possibly testing MySQL or PostgreSQL as a workaround?
Thanks!