
Configuring Kerberos Authentication for Hive (1)

Start the Kerberos services; this only needs to be done on the Kerberos server.
​
[root@hadoop01 ~]# chkconfig krb5kdc on
[root@hadoop01 ~]# chkconfig kadmin on
[root@hadoop01 ~]# service krb5kdc start
[root@hadoop01 ~]# service kadmin start
[root@hadoop01 ~]# service krb5kdc status
1.3.6.3 Creating the Kerberos Administrator
Run the following commands on the Kerberos server.

After entering kadmin.local, add the administrator principal with: addprinc admin/[email protected].
[root@hadoop01 ~]# kadmin.local
Authenticating as principal root/[email protected] with password.
At the kadmin.local: prompt, run addprinc admin/[email protected] and enter the principal's password twice (both entries must match; I used root here).

Finally, exit the prompt with q, quit, or exit.
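To confirm the administrator principal exists, you can query the KDC database directly (a quick sanity check; output formatting varies with the krb5 version):

# List all principals, or show the new admin principal in detail
[root@hadoop01 ~]# kadmin.local -q "listprincs"
[root@hadoop01 ~]# kadmin.local -q "getprinc admin/[email protected]"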

Chapter 2  Configuring Kerberos for the Hadoop Cluster

Some concepts:
A Kerberos principal identifies a unique identity in the Kerberos system. Kerberos issues tickets to principals so that they can access Hadoop services protected by Kerberos.
For Hadoop, principals take the form username/[email protected].

A keytab is a file containing principals and their encrypted keys. A keytab is unique to each host because the keys include the hostname. Keytab files allow a principal on a host to authenticate to Kerberos without human interaction and without storing a plaintext password. Because anyone who can read a keytab file can authenticate as the principals it contains, keytab files must be stored securely and be readable only by a small set of users.
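As a small illustration of the principal/keytab workflow (a sketch using a throwaway principal test/[email protected] that is not part of the actual setup below), you can create a principal, export it to a keytab, and then authenticate with the keytab instead of a password:

# Create a principal with a random key and export it to a keytab
[root@hadoop01 ~]# kadmin.local -q "addprinc -randkey test/[email protected]"
[root@hadoop01 ~]# kadmin.local -q "xst -k /tmp/test.keytab test/[email protected]"
# Show the principals stored in the keytab, then log in non-interactively and check the ticket
[root@hadoop01 ~]# klist -kt /tmp/test.keytab
[root@hadoop01 ~]# kinit -kt /tmp/test.keytab test/[email protected]
[root@hadoop01 ~]# klist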

Configuring Kerberos for Hive requires that the Hadoop cluster already has Kerberos configured, so we first set up authentication for the Hadoop cluster.

2.1 Adding Users

Create the users as follows. In this walkthrough each password is simply the username, but any password will do.
# Create the hadoop user
[root@hadoop01 hadoop]# useradd hadoop
[root@hadoop01 hadoop]# passwd hadoop
​
[root@hadoop02 hadoop]# useradd hadoop
[root@hadoop02 hadoop]# passwd hadoop
​
[root@hadoop03 hadoop]# useradd hadoop
[root@hadoop03 hadoop]# passwd hadoop
​
# Create the yarn user; it needs a user ID < 1000:
[root@hadoop01 ~]# useradd -u 502 yarn -g hadoop
# Set a password for the new user with passwd (enter the new password twice when prompted)
[root@hadoop01 ~]# passwd yarn
​
# Create the hdfs user
[root@hadoop01 hadoop]# useradd hdfs -g hadoop
[root@hadoop01 hadoop]# passwd hdfs
​
[root@hadoop02 hadoop]# useradd hdfs -g hadoop
[root@hadoop02 hadoop]# passwd hdfs
​
[root@hadoop03 hadoop]# useradd hdfs -g hadoop
[root@hadoop03 hadoop]# passwd hdfs
​
# Create the HTTP user
[root@hadoop01 hadoop]# useradd HTTP
[root@hadoop01 hadoop]# passwd HTTP
​
[root@hadoop02 hadoop]# useradd HTTP
[root@hadoop02 hadoop]# passwd HTTP
​
[root@hadoop03 hadoop]# useradd HTTP
[root@hadoop03 hadoop]# passwd HTTP

2.2 Creating Kerberos Principals and Keytab Files (so that the nodes can authenticate to each other once YARN Kerberos security is configured)

Run the following commands as root on the Kerberos server node:
​
[root@hadoop01 ~]# cd /var/kerberos/krb5kdc/
# Log in as the Kerberos admin
[root@hadoop01 krb5kdc]# kadmin.local
# Create the principals (one per service per host)
addprinc -randkey yarn/[email protected]
addprinc -randkey yarn/[email protected]
addprinc -randkey yarn/[email protected]
addprinc -randkey hdfs/[email protected]
addprinc -randkey hdfs/[email protected]
addprinc -randkey hdfs/[email protected]
addprinc -randkey HTTP/[email protected]
addprinc -randkey HTTP/[email protected]
addprinc -randkey HTTP/[email protected]
# Export the keytab files (written to the current directory)
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/[email protected]"
​
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"
​
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab hdfs/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab  hdfs/[email protected]"
[root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab hdfs/[email protected]"
​
# Merge them into a single keytab file; rkt reads a keytab in, wkt writes the merged keytab out
[root@hadoop01 krb5kdc]# ktutil
ktutil:  rkt hdfs-unmerged.keytab
ktutil:  rkt HTTP.keytab
ktutil:  rkt yarn.keytab
ktutil:  wkt hdfs.keytab
ktutil:  q
Note: the text after the ktutil: prompt is what you type.
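The same merge can also be scripted instead of typed interactively, by piping the commands into ktutil (an equivalent alternative, not a required step):

[root@hadoop01 krb5kdc]# printf '%s\n' "rkt hdfs-unmerged.keytab" "rkt HTTP.keytab" "rkt yarn.keytab" "wkt hdfs.keytab" | ktutil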
​
# Inspect the merged keytab
[root@hadoop01 krb5kdc]# klist -ket  hdfs.keytab
Keytab name: FILE:hdfs.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   3 04/14/2020 15:48:21 hdfs/[email protected] (aes128-cts-hmac-sha1-96)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des3-cbc-sha1)
   3 04/14/2020 15:48:21 hdfs/[email protected] (arcfour-hmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (camellia256-cts-cmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (camellia128-cts-cmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des-hmac-sha1)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des-cbc-md5)
   3 04/14/2020 15:48:21 hdfs/[email protected] (aes128-cts-hmac-sha1-96)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des3-cbc-sha1)
   3 04/14/2020 15:48:21 hdfs/[email protected] (arcfour-hmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (camellia256-cts-cmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (camellia128-cts-cmac)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des-hmac-sha1)
   3 04/14/2020 15:48:21 hdfs/[email protected] (des-cbc-md5)
   8 04/14/2020 15:48:21 HTTP/[email protected] (aes128-cts-hmac-sha1-96)
   8 04/14/2020 15:48:21 HTTP/[email protected] (des3-cbc-sha1)
   8 04/14/2020 15:48:21 HTTP/[email protected] (arcfour-hmac)
   8 04/14/2020 15:48:21 HTTP/[email protected] (camellia256-cts-cmac)
   8 04/14/2020 15:48:21 HTTP/[email protected] (camellia128-cts-cmac)
   8 04/14/2020 15:48:21 HTTP/[email protected] (des-hmac-sha1)
   8 04/14/2020 15:48:21 HTTP/[email protected] (des-cbc-md5)
   6 04/14/2020 15:48:21 HTTP/[email protected] (aes128-cts-hmac-sha1-96)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des3-cbc-sha1)
   6 04/14/2020 15:48:21 HTTP/[email protected] (arcfour-hmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (camellia256-cts-cmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (camellia128-cts-cmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des-hmac-sha1)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des-cbc-md5)
   6 04/14/2020 15:48:21 HTTP/[email protected] (aes128-cts-hmac-sha1-96)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des3-cbc-sha1)
   6 04/14/2020 15:48:21 HTTP/[email protected] (arcfour-hmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (camellia256-cts-cmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (camellia128-cts-cmac)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des-hmac-sha1)
   6 04/14/2020 15:48:21 HTTP/[email protected] (des-cbc-md5)
   7 04/14/2020 15:48:21 HTTP/[email protected] (aes128-cts-hmac-sha1-96)
   7 04/14/2020 15:48:21 HTTP/[email protected] (des3-cbc-sha1)
   7 04/14/2020 15:48:21 HTTP/[email protected] (arcfour-hmac)
   7 04/14/2020 15:48:21 HTTP/[email protected] (camellia256-cts-cmac)
   7 04/14/2020 15:48:21 HTTP/[email protected] (camellia128-cts-cmac)
   7 04/14/2020 15:48:21 HTTP/[email protected] (des-hmac-sha1)
   7 04/14/2020 15:48:21 HTTP/[email protected] (des-cbc-md5)
   4 04/14/2020 15:48:21 yarn/[email protected] (aes128-cts-hmac-sha1-96)
   4 04/14/2020 15:48:21 yarn/[email protected] (des3-cbc-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (arcfour-hmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia256-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia128-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-hmac-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-cbc-md5)
   4 04/14/2020 15:48:21 yarn/[email protected] (aes128-cts-hmac-sha1-96)
   4 04/14/2020 15:48:21 yarn/[email protected] (des3-cbc-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (arcfour-hmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia256-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia128-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-hmac-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-cbc-md5)
   4 04/14/2020 15:48:21 yarn/[email protected] (aes128-cts-hmac-sha1-96)
   4 04/14/2020 15:48:21 yarn/[email protected] (des3-cbc-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (arcfour-hmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia256-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (camellia128-cts-cmac)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-hmac-sha1)
   4 04/14/2020 15:48:21 yarn/[email protected] (des-cbc-md5)

Copy the generated hdfs.keytab to the Hadoop configuration directory and set its ownership and permissions. Keytab login failures later on are very often caused by wrong file permissions, so that is the first thing to check.

[root@hadoop01 krb5kdc]# cp ./hdfs.keytab /usr/local/hadoop-2.7.6/etc/hadoop/
[root@hadoop01 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
[root@hadoop01 hadoop]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab
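Before moving on, it is worth confirming that the hdfs user can actually read the keytab and obtain a ticket from it; most later "keytab login failure" problems show up right here (a quick check, not part of the original procedure):

# Verify ownership/permissions and try a keytab login as the hdfs user
[root@hadoop01 hadoop]# ls -l hdfs.keytab
[root@hadoop01 hadoop]# su - hdfs -c "kinit -kt /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected] && klist"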

2.3 Configuring the Hadoop Cluster

core-site.xml configuration:

<!-- Add the following properties -->
<property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
</property>
<property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
</property>
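After editing core-site.xml you can confirm that clients pick up the new values; hdfs getconf reads the effective configuration (should print kerberos and true respectively):

[root@hadoop01 hadoop]# hdfs getconf -confKey hadoop.security.authentication
[root@hadoop01 hadoop]# hdfs getconf -confKey hadoop.security.authorization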
​

yarn-site.xml

<!-- Add the following content. If memory is insufficient, do not configure this property:
<property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>1024</value>
</property>
-->
<!-- ResourceManager security configs -->
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<!-- NodeManager security configs -->
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/usr/local/hadoop-2.7.6/tmp/nm-local-dir</value>
</property>
​

hdfs-site.xml

<!-- Add the following properties -->
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>  
  <name>dfs.datanode.data.dir.perm</name>  
  <value>700</value>  
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/[email protected]</value>
</property>
​
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/[email protected]</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
​
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
​
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>
​
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-2.7.6/tmp</value>
</property>
​

mapred-site.xml:

<!-- Add the following properties -->
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>mapreduce.jobhistory.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

container-executor.cfg

# Overwrite the file with the following content
yarn.nodemanager.linux-container-executor.group=hadoop
#configured value of yarn.nodemanager.linux-container-executor.group
banned.users=hdfs
#comma separated list of users who can not run applications
min.user.id=0
#Prevent other super-users
allowed.system.users=root,yarn,hdfs,mapred,nobody
##comma separated list of system users who CAN run applications
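The LinuxContainerExecutor is strict about the ownership and permissions of both this file and the container-executor binary (errors 4 and 5 in the troubleshooting list further down are exactly these checks failing), so it is worth verifying them up front:

# container-executor.cfg and all of its parent directories must be owned by root
[root@hadoop01 hadoop-2.7.6]# ls -ld /usr/local/hadoop-2.7.6/etc /usr/local/hadoop-2.7.6/etc/hadoop
[root@hadoop01 hadoop-2.7.6]# ls -l /usr/local/hadoop-2.7.6/etc/hadoop/container-executor.cfg
# The binary must be owned by root with the yarn.nodemanager.linux-container-executor.group group and have the setuid bit set
[root@hadoop01 hadoop-2.7.6]# ls -l /usr/local/hadoop-2.7.6/bin/container-executor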

2.4 Building and Installing JSVC

When secure DataNodes are configured, starting a DataNode requires root privileges. You need to modify hadoop-env.sh, install jsvc, and download and build a newer commons-daemon jar (commons-daemon-1.2.2 is used below) to replace the one under $HADOOP_HOME/share/hadoop/hdfs/lib.
Otherwise the DataNode fails with "Cannot start secure DataNode without configuring either privileged resources ...".

The full DataNode startup error is:

2020-04-14 15:56:35,164 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1208)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1108)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2414)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2301)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2348)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2530)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2554)
2020-04-14 15:56:35,173 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2020-04-14 15:56:35,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
2.4.1 Downloading the packages
Download and extract commons-daemon-1.2.2-src.tar.gz and commons-daemon-1.2.2-bin.tar.gz.
2.4.2 Installation steps
[root@hadoop01 hadoop]# cd /usr/local
[root@hadoop01 local]# mkdir -p JSVC_packages
[root@hadoop01 local]# cd ./JSVC_packages/
[root@hadoop01 JSVC_packages]# wget http://apache.fayea.com//commons/daemon/source/commons-daemon-1.2.2-src.tar.gz
[root@hadoop01 JSVC_packages]# wget http://apache.fayea.com//commons/daemon/binaries/commons-daemon-1.2.2-bin.tar.gz
[root@hadoop01 JSVC_packages]# tar xf commons-daemon-1.2.2-bin.tar.gz
[root@hadoop01 JSVC_packages]# tar xf commons-daemon-1.2.2-src.tar.gz
​
[root@hadoop01 JSVC_packages]# ll
total 472
drwxr-xr-x. 3 root root    278 Apr 14 16:25 commons-daemon-1.2.2
-rw-r--r--. 1 root root 179626 Apr 14 16:24 commons-daemon-1.2.2-bin.tar.gz
drwxr-xr-x. 3 root root    180 Apr 14 16:25 commons-daemon-1.2.2-src
-rw-r--r--. 1 root root 301538 Apr 14 16:24 commons-daemon-1.2.2-src.tar.gz
​
# Build jsvc and copy it to the Hadoop libexec directory
[root@hadoop01 JSVC_packages]# cd commons-daemon-1.2.2-src/src/native/unix/
[root@hadoop01 unix]# ./configure
[root@hadoop01 unix]# make
[root@hadoop01 unix]# cp ./jsvc /usr/local/hadoop-2.7.6/libexec/
​
# Back up the old commons-daemon jar and copy in commons-daemon-1.2.2.jar
[root@hadoop01 unix]# cd /usr/local/JSVC_packages/commons-daemon-1.2.2/
[root@hadoop01 commons-daemon-1.2.2]# cp /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar.bak
​
[root@hadoop01 commons-daemon-1.2.2]# cp ./commons-daemon-1.2.2.jar /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/
​
​
[root@hadoop01 commons-daemon-1.2.2]# cd /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/
[root@hadoop01 lib]# chown hdfs:hadoop commons-daemon-1.2.2.jar
2.4.3 hadoop-env.sh
[root@hadoop01 hadoop-2.7.6]# vi ./etc/hadoop/hadoop-env.sh
​
Append the following:
export HADOOP_SECURE_DN_USER=hdfs
export JSVC_HOME=/usr/local/hadoop-2.7.6/libexec/
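A quick check that the freshly built jsvc binary is in place and runnable (jsvc prints its usage when invoked with -help; this is just a sanity check, assuming the path above):

[root@hadoop01 hadoop-2.7.6]# ls -l /usr/local/hadoop-2.7.6/libexec/jsvc
[root@hadoop01 hadoop-2.7.6]# /usr/local/hadoop-2.7.6/libexec/jsvc -help | head -n 5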

2.5 Distributing to the Other Servers

[root@hadoop01 local]# scp -r /usr/local/hadoop-2.7.6/ hadoop02:/usr/local/
​
[root@hadoop01 local]# scp -r /usr/local/hadoop-2.7.6/ hadoop03:/usr/local/

2.6 Starting the Hadoop Cluster

​
[root@hadoop01 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
[root@hadoop02 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
[root@hadoop03 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
​
[root@hadoop02 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
[root@hadoop02 hadoop]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab
​
[root@hadoop03 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
[root@hadoop03 hadoop]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab
​
[root@hadoop01 hadoop-2.7.6]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs/[email protected]
​
Valid starting       Expires              Service principal
04/14/2020 16:49:17  04/15/2020 16:49:17  krbtgt/[email protected]
        renew until 04/21/2020 16:49:17
        
 
 
 
 # If the hdfs user does not exist yet on hadoop02/hadoop03, create it first:
 [root@hadoop02 ~]# useradd hdfs
 [root@hadoop02 hadoop-2.7.6]# passwd hdfs
 [root@hadoop03 ~]# useradd hdfs
 [root@hadoop03 hadoop-2.7.6]# passwd hdfs
 
 # Start HDFS (directly as root)
[root@hadoop01 hadoop-2.7.6]# start-dfs.sh
# Start the secure DataNodes (directly as root)
[root@hadoop01 hadoop-2.7.6]# start-secure-dns.sh
# Start YARN (starting directly as root works; tested)
[root@hadoop01 hadoop-2.7.6]# start-yarn.sh
# Start the JobHistory server (directly as root)
[root@hadoop01 hadoop-2.7.6]# mr-jobhistory-daemon.sh start historyserver
​
​
Stopping the cluster:
# Stop the secure DataNodes (must be run as root)
[root@hadoop01 hadoop-2.7.6]# stop-secure-dns.sh
# Stop HDFS
[root@hadoop01 hadoop-2.7.6]# stop-dfs.sh

# Stop YARN (as root)
[root@hadoop01 hadoop-2.7.6]# stop-yarn.sh
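After starting everything, a quick jps on each node confirms that the expected daemons are up (the exact set depends on which node hosts which role):

[root@hadoop01 hadoop-2.7.6]# jps
# Roughly expected on hadoop01: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer
[root@hadoop02 hadoop-2.7.6]# jps
# Roughly expected on the worker nodes: DataNode, NodeManager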
​

2.7 Testing the Hadoop Cluster

HDFS web UI: http://hadoop01:50070

YARN web UI: http://hadoop01:8088

Testing HDFS:

[root@hadoop01 hadoop-2.7.6]# hdfs dfs -ls /
[root@hadoop01 hadoop-2.7.6]# hdfs dfs -put /home/words /
[root@hadoop01 hadoop-2.7.6]# hdfs dfs -cat /words
hello qianfeng
hello flink
wuhan jiayou hello wuhan wuhan hroe
​
​
# As shown below, without valid Kerberos credentials the hdfs user cannot access the HDFS file system:
[hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words
20/04/15 15:04:41 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
cat: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop02/192.168.216.112"; destination host is: "hadoop01":9000;
​
# Solution: obtain a ticket from the keytab
[hdfs@hadoop02 hadoop]$ kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
[hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words
hello qianfeng
hello flink
wuhan jiayou hello wuhan wuhan hroe
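To reproduce the unauthenticated failure on purpose (or before switching principals), destroy the current ticket cache with kdestroy and retry:

[hdfs@hadoop02 hadoop]$ kdestroy
[hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words     # fails again with "GSS initiate failed"
[hdfs@hadoop02 hadoop]$ kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
[hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words     # succeeds once the ticket is back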

Testing YARN:

[root@hadoop01 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab yarn/[email protected]
​
[root@hadoop01 hadoop-2.7.6]# yarn jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount /words /out/00
​
Error 1:
20/04/15 23:42:45 INFO mapreduce.Job: Job job_1586934815492_0008 failed with state FAILED due to: Application application_1586934815492_0008 failed 2 times due to AM Container for appattempt_1586934815492_0008_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://hadoop01:8088/cluster/app/application_1586934815492_0008Then, click on links to logs of each attempt.
Diagnostics: Application application_1586934815492_0008 initialization failed (exitCode=255) with output: Requested user hdfs is banned
​
Error 2:
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
Solution:
Configure the temporary directory (hadoop.tmp.dir) in hdfs-site.xml, and also configure the local directory in yarn-site.xml (yarn.nodemanager.local-dirs); use the same base path as in hdfs-site.xml with a fixed suffix appended, as in the configuration shown earlier.
​
# Run the test again:
[root@hadoop01 hadoop-2.7.6]# yarn jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount /words /out/02
20/04/16 02:55:38 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.216.111:8032
20/04/16 02:55:38 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 61 for yarn on 192.168.216.111:9000
20/04/16 02:55:38 INFO security.TokenCache: Got dt for hdfs://hadoop01:9000; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.216.111:9000, Ident: (HDFS_DELEGATION_TOKEN token 61 for yarn)
20/04/16 02:55:39 INFO input.FileInputFormat: Total input paths to process : 1
20/04/16 02:55:39 INFO mapreduce.JobSubmitter: number of splits:1
20/04/16 02:55:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1586976916277_0001
20/04/16 02:55:39 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.216.111:9000, Ident: (HDFS_DELEGATION_TOKEN token 61 for yarn)
20/04/16 02:55:41 INFO impl.YarnClientImpl: Submitted application application_1586976916277_0001
20/04/16 02:55:41 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1586976916277_0001/
20/04/16 02:55:41 INFO mapreduce.Job: Running job: job_1586976916277_0001
20/04/16 02:56:11 INFO mapreduce.Job: Job job_1586976916277_0001 running in uber mode : false
20/04/16 02:56:11 INFO mapreduce.Job:  map 0% reduce 0%
20/04/16 02:56:13 INFO mapreduce.Job: Task Id : attempt_1586976916277_0001_m_000000_0, Status : FAILED
Application application_1586976916277_0001 initialization failed (exitCode=20) with output: main : command provided 0
main : user is yarn
main : requested yarn user is yarn
Permission mismatch for /usr/local/hadoop-2.7.6/tmp/nm-local-dir for caller uid: 0, owner uid: 502.
Couldn't get userdir directory for yarn.
20/04/16 02:56:20 INFO mapreduce.Job:  map 100% reduce 0%
20/04/16 02:56:28 INFO mapreduce.Job:  map 100% reduce 100%
20/04/16 02:56:28 INFO mapreduce.Job: Job job_1586976916277_0001 completed successfully
20/04/16 02:56:28 INFO mapreduce.Job: Counters: 51
        File System Counters
                FILE: Number of bytes read=81
                FILE: Number of bytes written=251479
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=154
                HDFS: Number of bytes written=51
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Failed map tasks=1
                Launched map tasks=2
                Launched reduce tasks=1
                Other local map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4531
                Total time spent by all reduces in occupied slots (ms)=3913
                Total time spent by all map tasks (ms)=4531
                Total time spent by all reduce tasks (ms)=3913
                Total vcore-milliseconds taken by all map tasks=4531
                Total vcore-milliseconds taken by all reduce tasks=3913
                Total megabyte-milliseconds taken by all map tasks=4639744
                Total megabyte-milliseconds taken by all reduce tasks=4006912
        Map-Reduce Framework
                Map input records=3
                Map output records=10
                Map output bytes=103
                Map output materialized bytes=81
                Input split bytes=91
                Combine input records=10
                Combine output records=6
                Reduce input groups=6
                Reduce shuffle bytes=81
                Reduce input records=6
                Reduce output records=6
                Spilled Records=12
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=192
                CPU time spent (ms)=2120
                Physical memory (bytes) snapshot=441053184
                Virtual memory (bytes) snapshot=4211007488
                Total committed heap usage (bytes)=277348352
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=63
        File Output Format Counters
                Bytes Written=51

Error 1:

2020-04-15 14:38:36,457 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/[email protected] using keytab file /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab
2020-04-15 14:38:36,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /home/hdfs/hadoopdata/dfs/data :
​
Solution (skip this if the users and permissions below are already in place):
Step 1:
[root@hadoop02 ~]#  useradd hdfs -g hadoop
[root@hadoop02 ~]#  passwd hdfs
​
[root@hadoop03 ~]#  useradd hdfs -g hadoop
[root@hadoop03 ~]#  passwd hdfs
​
Step 2 (run on whichever node reports the error):
[root@hadoop02 hadoop]# chown -R hdfs:hadoop /home/hdfs/hadoopdata/
[root@hadoop03 hadoop]# chown -R hdfs:hadoop /home/hdfs/hadoopdata/

Error 2:

DataNode startup fails with:
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/home/hdfs/hadoopdata/dfs/data"

Solution (if the directory was never created, create it manually):
[root@hadoop02 hadoop-2.7.6]# mkdir -p /home/hdfs/hadoopdata/dfs/data
[root@hadoop03 hadoop-2.7.6]# mkdir -p /home/hdfs/hadoopdata/dfs/data
​

Error 3:

Starting YARN fails with:
Caused by: java.io.IOException: Login failure for hdfs/[email protected] from keytab /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

Solution (run on whichever node reports the error):
[root@hadoop02 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]
[root@hadoop03 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/[email protected]

Error 4:

Starting YARN fails with:
Caused by: ExitCodeException exitCode=24: File /usr/local/hadoop-2.7.6/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 20415

Change container-executor.cfg and all of its parent directories to be owned by root:root:
[root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/
[root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/hadoop/
[root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/hadoop/container-executor.cfg

Error 5:

Starting YARN fails with:
Caused by: ExitCodeException exitCode=22: Invalid permissions on container-executor binary.

Solution:
[root@hadoop01 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
[root@hadoop01 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor
​
[root@hadoop02 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
[root@hadoop02 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor
​
[root@hadoop03 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
[root@hadoop03 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor
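For reference, mode 6050 sets the setuid and setgid bits while removing owner read/write/execute, so container-executor runs with root privileges but can only be executed by members of the hadoop group. You can verify the result like this:

[root@hadoop01 hadoop-2.7.6]# ls -l $HADOOP_HOME/bin/container-executor
# Expected mode string: ---Sr-s---, owner root, group hadoop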

Error 6:

# Error when running the example job
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024

This happens when a container requests more memory than YARN allows. Either raise the YARN memory limits in yarn-site.xml (yarn.scheduler.maximum-allocation-mb, and yarn.nodemanager.resource.memory-mb if it was set to 1024 as in the commented-out property above) or lower the job's requested memory.

