
Hadoop Environment Setup (Hive Configuration), by 温海强


1. Place the Hive installation package in the /opt/software directory

Package version: apache-hive-3.1.2-bin.tar.gz

2. Extract the package into the /opt/module directory

Commands:

cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/

3. Modify the system environment variables

Command: vi /etc/profile

Append the following content to the opened file.
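A minimal sketch of the additions, assuming the install path /opt/module/apache-hive-3.1.2-bin used throughout this guide:

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/bin    # puts the hive and schematool scripts on the PATH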

4. Reload the environment configuration

Command: source /etc/profile
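To confirm the new variables took effect, a quick check (assuming the exports sketched in step 3):

echo $HIVE_HOME    # should print /opt/module/apache-hive-3.1.2-bin
which hive         # should resolve to a script under $HIVE_HOME/bin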

5. Configure Hive's own environment variables

Command: cd /opt/module/apache-hive-3.1.2-bin/bin/

(1) Configure the hive-config.sh file

Command: vi hive-config.sh

Append the following content to the file:

export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

6. Copy the Hive configuration template

Commands:

cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml

7. Modify the Hive configuration file as shown below

Command: vi hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>  <!-- set your own password -->
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
    <description/>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of [derby, oracle, mysql, mssql, postgres].
      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
  </property>
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/opt/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
</configuration>
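The configuration above points the scratch, resource, and log locations at directories under the Hive install; if they do not exist yet, it may be worth creating them up front, for example:

mkdir -p /opt/module/apache-hive-3.1.2-bin/tmp
mkdir -p /opt/module/apache-hive-3.1.2-bin/iotmp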

8. Upload the MySQL driver to the /opt/module/apache-hive-3.1.2-bin/lib/ directory. Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside, as sketched below.
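A sketch of this step, assuming the zip was placed in /opt/software and that the jar inside the archive is named mysql-connector-java-8.0.15.jar (the exact layout of the zip may differ):

cd /opt/software/
unzip mysql-connector-java-8.0.15.zip
# copy only the driver jar into Hive's lib directory
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/apache-hive-3.1.2-bin/lib/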

9. Create a database named hive in MySQL

mysql> create database hive;

10. Initialize the metastore database

Command: schematool -dbType mysql -initSchema
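If initialization succeeds, schematool reports completion and the metastore tables appear in the hive database; the resulting schema version can also be inspected afterwards, for example:

schematool -dbType mysql -info    # prints the metastore connection details and schema version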

11. Start the cluster services

Commands:

start-all.sh      # on Hadoop100
start-yarn.sh     # on Hadoop101
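To verify the daemons came up, jps can be run on each host, for example:

jps    # expect NameNode/DataNode on the HDFS nodes and ResourceManager/NodeManager on the YARN nodes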

12. Start Hive

Command: hive

13. Check whether the startup succeeded

Command: show databases;

Note: if the existing databases are listed, the configuration succeeded.
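With the settings above (current database and column headers enabled), a successful check looks roughly like the following; the exact timing and the databases listed will differ:

hive (default)> show databases;
OK
database_name
default
Time taken: 0.5 seconds, Fetched: 1 row(s)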


