Explained: fixing "Could not open client transport with JDBC Uri" when connecting to HiveServer2 over JDBC

冷不防 2024-02-19 21:59


    [hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop
    ls: cannot access /home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
    which: no hbase in (/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/bin:/home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/protobuf/bin:/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
    scan complete in 2ms
    Connecting to jdbc:hive2://localhost:10000/default
    Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/default: java.net.ConnectException: Connection refused (state=08S01,code=0)
    Beeline version 1.1.0-cdh5.7.0 by Apache Hive
    0: jdbc:hive2://localhost:10000/default (closed)>

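Before touching any configuration, it is worth confirming that HiveServer2 is actually up and listening, since java.net.ConnectException: Connection refused just means nothing accepted the TCP connection. A minimal sanity check, assuming the default port 10000:

    # HiveServer2 is launched through Hadoop's RunJar, so match on the class name
    [hadoop@hadoop001 ~]$ jps -m | grep -i hiveserver2
    # check whether anything is listening on the Beeline port
    [hadoop@hadoop001 ~]$ netstat -nltp 2>/dev/null | grep 10000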

Solution:

Cause: HiveServer2 adds an access-control layer (proxy users / impersonation), which must be granted in Hadoop's configuration files.

Fix: add the following to Hadoop's core-site.xml, restart Hadoop, then connect with Beeline again.

Official reference:

https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html

    [hadoop@hadoop001 hadoop]$ pwd
    /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
    [hadoop@hadoop001 hadoop]$ vi core-site.xml

    <!-- allow the user hadoop to proxy from any host, for any group -->
    <property>
      <name>hadoop.proxyuser.hadoop.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hadoop.groups</name>
      <value>*</value>
    </property>

Additional notes:

For each superuser, hosts must be configured, and at least one of groups or users must be configured.

All of these properties accept * as a value, meaning all hosts / groups / users are allowed.

For example:

    <property>
      <name>hadoop.proxyuser.userA.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.userA.users</name>
      <value>user1,user2</value>
    </property>

This allows the user userA, connecting from any host, to impersonate the users user1 and user2.

Proxy-user settings live in core-site.xml. Unlike fair-scheduler.xml, changes to this file are not picked up automatically by the running services; after editing it, run the following commands to push the new settings to the NameNode and the ResourceManager respectively.

    hdfs dfsadmin -refreshSuperUserGroupsConfiguration
    yarn rmadmin -refreshSuperUserGroupsConfiguration
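To confirm the new values are really in the configuration being read, you can echo the keys back with hdfs getconf (a quick sanity check; it reads the local configuration files):

    [hadoop@hadoop001 hadoop]$ hdfs getconf -confKey hadoop.proxyuser.hadoop.hosts
    *
    [hadoop@hadoop001 hadoop]$ hdfs getconf -confKey hadoop.proxyuser.hadoop.groups
    *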

After the configuration change, restart Hadoop:

    [hadoop@hadoop001 sbin]$ pwd
    /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin
    [hadoop@hadoop001 sbin]$ ./stop-all.sh
    [hadoop@hadoop001 sbin]$ ./start-all.sh
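Note that stop-all.sh/start-all.sh only restart the Hadoop daemons; HiveServer2 is a separate process. If it is not running (which also produces "Connection refused"), start it from Hive's bin directory first. A sketch, with the log file name being an arbitrary choice:

    [hadoop@hadoop001 bin]$ cd /home/hadoop/app/hive-1.1.0-cdh5.7.0/bin
    [hadoop@hadoop001 bin]$ nohup ./hiveserver2 > hiveserver2.log 2>&1 &
    # give it a moment to start, then confirm port 10000 is open
    [hadoop@hadoop001 bin]$ netstat -nltp 2>/dev/null | grep 10000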

Then connect again and it works:

    [hadoop@hadoop001 bin]$ pwd
    /home/hadoop/app/hive-1.1.0-cdh5.7.0/bin
    [hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop
    ls: cannot access /home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
    which: no hbase in (/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/bin:/home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/protobuf/bin:/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
    scan complete in 2ms
    Connecting to jdbc:hive2://hadoop001:10000/default
    Connected to: Apache Hive (version 1.1.0-cdh5.7.0)
    Driver: Hive JDBC (version 1.1.0-cdh5.7.0)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    Beeline version 1.1.0-cdh5.7.0 by Apache Hive
    0: jdbc:hive2://hadoop001:10000/default>
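As a final smoke test, run a statement in the session, or non-interactively with Beeline's -e option, which executes one statement and exits:

    [hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop -e "show databases;"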
