Please make sure that Hadoop and the Hive MetaStore are running properly.
See these pages for more details about Apache Hadoop and Apache Hive:
Install and configure Apache Hadoop (single node cluster)
Install and configure Apache Hive
You can find below the "pom.xml" file that lists the required dependencies to run the Spark-SQL sample application.
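A minimal "pom.xml" sketch is shown below. The group id, artifact id, and the Spark version (3.3.0, Scala 2.12 build) are placeholders; adjust them to match your own project and the Spark distribution you installed. The "spark-sql" artifact provides the Spark SQL engine and the "spark-hive" artifact provides Hive MetaStore integration.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Placeholder coordinates: rename to match your project -->
    <groupId>com.example</groupId>
    <artifactId>spark-sql-hive-sample</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <!-- Example version: use the Spark version you installed -->
        <spark.version>3.3.0</spark.version>
    </properties>

    <dependencies>
        <!-- Spark SQL engine -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <!-- Hive MetaStore support for Spark SQL -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.12</artifactId>
            <version>${spark.version}</version>
        </dependency>
    </dependencies>
</project>
```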
The Java code below shows how to create the SparkSession:
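A minimal sketch is shown below. The application name is a placeholder, and Hive support must be enabled explicitly so that Spark SQL can read the Hive MetaStore catalog; it assumes the Hive configuration ("hive-site.xml") is on the classpath or in Spark's "conf" directory.

```java
import org.apache.spark.sql.SparkSession;

public class SparkSqlHiveSample {

    public static void main(String[] args) {
        // Build a SparkSession with Hive support enabled so Spark SQL
        // can access databases and tables from the Hive MetaStore.
        SparkSession spark = SparkSession.builder()
                .appName("spark-sql-hive-sample")  // placeholder application name
                .master("local[*]")                // Spark local master URL
                .enableHiveSupport()
                .getOrCreate();

        // ... run Spark SQL queries here ...

        // Release the session's resources when done.
        spark.stop();
    }
}
```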
How to list existing Hive databases:
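One way to do this, assuming an existing SparkSession named "spark" created with Hive support as above:

```java
// List all databases known to the Hive MetaStore.
spark.sql("SHOW DATABASES").show();

// Equivalent call through the catalog API.
spark.catalog().listDatabases().show();
```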
How to list existing tables of a specific Hive database:
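A short sketch, again assuming the SparkSession "spark" from above; "sales_db" is a hypothetical database name used for illustration:

```java
// List the tables of a specific Hive database.
spark.sql("SHOW TABLES IN sales_db").show();
```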
How to insert new data into a table:
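A sketch using standard Spark SQL INSERT syntax; the table "sales_db.customers" and its two columns (an integer id and a name) are hypothetical:

```java
// Insert a new row into a Hive table through Spark SQL.
spark.sql("INSERT INTO sales_db.customers VALUES (1, 'John Doe')");
```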
How to print data of a table:
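Using the same hypothetical table, the contents can be printed to the console with "show()":

```java
// Query the table and print the resulting rows to the console.
spark.sql("SELECT * FROM sales_db.customers").show();
```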
The code below uses the Spark local master URL:
You can also use the Spark Standalone cluster master url:
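Switching from local mode to a Standalone cluster only changes the master URL passed to the builder. The sketch below assumes a Standalone master running on "localhost" with the default port 7077; replace the host with your master's address:

```java
import org.apache.spark.sql.SparkSession;

public class SparkSqlStandaloneSample {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("spark-sql-hive-sample")
                // Local mode would be: .master("local[*]")
                // Standalone cluster master URL (hypothetical host, default port 7077):
                .master("spark://localhost:7077")
                .enableHiveSupport()
                .getOrCreate();

        // ... run Spark SQL queries here ...

        spark.stop();
    }
}
```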
Please make sure that Spark is running properly.
See this page for more details about Apache Spark:
Install and configure Apache Spark (standalone)
See this page for more details about Spark master URLs:
https://spark.apache.org/docs/latest/submitting-applications.html#master-urls