1. Start ZooKeeper, the HDFS cluster, and the Hive metastore service (hive --service metastore).
2. Edit hive-site.xml under Spark's conf directory so that it matches the Hive client's configuration (a minimal sketch follows this list).
Note: this file only needs to be changed on one host; that host acts as the client.
3. Start the Spark cluster.
4. Launch Spark SQL: ./bin/spark-sql --master spark://node11:7077 --executor-memory 512m
Note: if spark-env.sh is configured with IP addresses, the command must use IP addresses; if it is configured with hostnames, the command must use hostnames.
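The key setting to carry over from the Hive client configuration is the metastore address, so that Spark SQL talks to the metastore service started in step 1. Below is a minimal hive-site.xml sketch; the metastore host and port (thrift://node11:9083) are assumptions and should match wherever hive --service metastore is actually running.

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- assumed metastore host/port; point this at the host running hive --service metastore -->
    <value>thrift://node11:9083</value>
  </property>
</configuration>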
Setting up the SparkSQL Thrift server
1. Add the following properties to hive-site.xml:
<property>
<name>hive.server2.thrift.min.worker.threads</name>
<value>5</value>
<description>Minimum number of Thrift worker threads</description>
</property>
<property>
<name>hive.server2.thrift.max.worker.threads</name>
<value>500</value>
<description>Maximum number of Thrift worker threads</description>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
<description>Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>node12</value> <!-- hostname of the host running the Thrift server -->
<description>Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
</property>
2. Start the Spark Thrift server:
./sbin/start-thriftserver.sh --master spark://192.168.57.4:7077 --executor-memory 512M
3. Once it is running, you can connect with beeline from the bin directory:
./bin/beeline
!connect jdbc:hive2://node12:10000
Note: this gets you into the Spark SQL console, but queries will fail with errors until core-site.xml and hdfs-site.xml from the HDFS cluster are copied into Spark's conf directory on every node of the Spark cluster (see the sketches below).
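To apply the fix in the note above, copy the Hadoop client configs to the conf directory of every Spark node, for example (the paths and hostnames here are assumptions for illustration):

cp /opt/hadoop/etc/hadoop/core-site.xml /opt/hadoop/etc/hadoop/hdfs-site.xml /opt/spark/conf/
scp /opt/hadoop/etc/hadoop/core-site.xml /opt/hadoop/etc/hadoop/hdfs-site.xml node12:/opt/spark/conf/

Applications can also query through the Thrift server over JDBC instead of beeline. Below is a minimal Java sketch, assuming the hive-jdbc driver is on the classpath and that no authentication is enabled (hence the empty username/password); the endpoint node12:10000 comes from the hive-site.xml settings above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkThriftClient {
    public static void main(String[] args) throws Exception {
        // Hive JDBC driver, provided by the hive-jdbc dependency
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Connect to the Spark Thrift server started above
        Connection conn = DriverManager.getConnection("jdbc:hive2://node12:10000", "", "");
        Statement stmt = conn.createStatement();
        // Simple smoke test: list the tables visible through the metastore
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}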