When removing a node from a Hadoop cluster, I configured the excludes file and ran hdfs dfsadmin -refreshNodes,
but the node was never decommissioned; its status stayed Normal the whole time.
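For reference, the excludes file itself is just plain text with one DataNode hostname (or IP) per line. A minimal sketch, using an illustrative path (/tmp/excludes) and the hostname from this cluster:

```shell
# Illustrative path; in this article the real file is
# /home/cndba/hadoop/etc/hadoop/excludes (as set in hdfs-site.xml).
EXCLUDES=/tmp/excludes
printf 'hadoopslave5\n' > "$EXCLUDES"   # one hostname or IP per line
cat "$EXCLUDES"
# Then, on the NameNode:
#   hdfs dfsadmin -refreshNodes
```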
[https://www.cndba.cn@hadoopmaster hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
[https://www.cndba.cn@hadoopmaster hadoop]$
[https://www.cndba.cn@hadoopmaster ~]$ hdfs dfsadmin -report
…
Name: 192.168.20.85:9866 (hadoopslave5)
Hostname: hadoopslave5
Decommission Status : Normal
Configured Capacity: 89936470016 (83.76 GB)
DFS Used: 233910272 (223.07 MB)
Non DFS Used: 4794318848 (4.47 GB)
DFS Remaining: 84908240896 (79.08 GB)
DFS Used%: 0.26%
DFS Remaining%: 94.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jan 24 05:33:11 CST 2019
Last Block Report: Thu Jan 24 04:03:38 CST 2019
Num of Blocks: 608
No errors appeared anywhere during this, so I suspected a problem with the hdfs-site.xml configuration file. I re-formatted the file's layout, ran the command again, and this time it worked as expected:
[https://www.cndba.cn@hadoopmaster hadoop]$ vim hdfs-site.xml
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/cndba/hadoop/etc/hadoop/excludes</value>
</property>
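Since the root cause here turned out to be badly formatted XML, it can help to validate hdfs-site.xml before refreshing. A minimal sketch using Python's standard-library XML parser (xmllint --noout would work too, if installed); the temporary file path is an illustrative assumption:

```shell
# Write a small config fragment to a temporary file (illustrative path).
cat > /tmp/hdfs-site-test.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/home/cndba/hadoop/etc/hadoop/excludes</value>
  </property>
</configuration>
EOF
# Parse it; any malformed tag makes the parse fail with a nonzero exit code.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' \
  /tmp/hdfs-site-test.xml && echo "XML OK"
```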
[https://www.cndba.cn@hadoopmaster ~]$ hdfs dfsadmin -report
…
Decommissioning datanodes (1):
Name: 192.168.20.85:9866 (hadoopslave5)
Hostname: hadoopslave5
Decommission Status : Decommission in progress
Configured Capacity: 89936470016 (83.76 GB)
DFS Used: 233910272 (223.07 MB)
Non DFS Used: 4794646528 (4.47 GB)
DFS Remaining: 84907913216 (79.08 GB)
DFS Used%: 0.26%
DFS Remaining%: 94.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jan 24 05:42:24 CST 2019
Last Block Report: Thu Jan 24 05:36:57 CST 2019
Num of Blocks: 608
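To watch the decommission finish, the same report can be polled and filtered. The hdfs command in the comment assumes a running NameNode; the grep below is demonstrated against a saved two-line excerpt of the report so the filter itself runs anywhere:

```shell
# On the NameNode, repeat until the status changes to "Decommissioned":
#   hdfs dfsadmin -report | grep -B2 'Decommission Status'
# Offline demo of the same filter on a saved excerpt of the report:
printf 'Name: 192.168.20.85:9866 (hadoopslave5)\nDecommission Status : Decommission in progress\n' \
  | grep 'Decommission Status'
```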
Copyright notice: this is the author's original article; reproduction without the author's permission is prohibited.