Metadata checkpointing in HDFS is done by the Secondary NameNode, which periodically merges the fsimage and edits log files to keep the edits log size within a limit. For various reasons, checkpointing by the Secondary NameNode may fail. For example, the Secondary NameNode log may show errors like the following.
2017-08-06 10:54:14,488 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -63 namespaceID = 1920275013 cTime = 0 ; clusterId = CID-f38880ba-3415-4277-8abf-b5c2848b7a63 ; blockpoolId = BP-578888813-10.6.1.2-1497278556180.
Expecting respectively: -63; 263120692; 0; CID-d22222fd-e28a-4b2d-bd2a-f60e1f0ad1b1; BP-622207878-10.6.1.2-1497242227638.
	at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
This post introduces how to force a metadata checkpoint in HDFS.
On the NameNode, save the latest metadata to the fsimage as the HDFS super user (e.g. the user that runs the HDFS daemons) by running the following commands:
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -safemode get    # to confirm and ensure it is in safe mode
$ hdfs dfsadmin -saveNamespace
$ hdfs dfsadmin -safemode leave
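The four commands above can be wrapped into one guarded script so that `-saveNamespace` never runs outside safe mode. A minimal sketch; the `hdfs` CLI is stubbed out below (an assumption for illustration, so the control flow can run without a live cluster) — remove the stub function to use it against a real NameNode:

```shell
#!/bin/sh
# Stub of the hdfs CLI, for illustration only; delete this function to run
# the sequence against a real cluster.
hdfs() {
  echo "hdfs $*"
  if [ "$3" = "get" ]; then echo "Safe mode is ON"; fi
}

set -e                                                     # abort on any failing step
hdfs dfsadmin -safemode enter
hdfs dfsadmin -safemode get | grep -q 'Safe mode is ON'    # stop unless safe mode really is ON
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
echo "namespace saved"
```

With `set -e`, a failed `enter` or a `get` that does not report safe mode aborts the script before `-saveNamespace` is attempted.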
On the Secondary NameNode, as the HDFS super user, stop the Secondary NameNode service.
$ hadoop-daemon.sh stop secondarynamenode
Use jps to make sure the secondarynamenode process is indeed stopped.
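If you script this step, the check can be written as a small helper. The helper name `check_stopped` and the canned input below are illustrative assumptions; in real usage you would pipe `jps` output into it:

```shell
#!/bin/sh
# check_stopped reads `jps`-style output on stdin and succeeds only when
# no SecondaryNameNode JVM is listed. (Helper name is illustrative.)
check_stopped() {
  ! grep -q 'SecondaryNameNode'
}

# Real usage:  jps | check_stopped && echo "stopped"
# Demonstrated here on canned jps output:
printf '12345 NameNode\n23456 DataNode\n' | check_stopped && echo "stopped"
```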
Find out the value of dfs.namenode.checkpoint.dir for the Secondary NameNode:
$ hdfs getconf -confKey dfs.namenode.checkpoint.dir
An example output is
file:///home/hadoop/tmp/dfs/namesecondary
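Note that the value comes back as a file:// URI rather than a plain path; if you use it in a script, strip the scheme first. A minimal sketch using the example output above:

```shell
#!/bin/sh
# getconf returns a file:// URI; strip the scheme before passing it to mv.
uri="file:///home/hadoop/tmp/dfs/namesecondary"   # example value from `hdfs getconf`
path="${uri#file://}"                             # POSIX prefix removal
echo "$path"                                      # /home/hadoop/tmp/dfs/namesecondary
```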
Then, move/rename the current dir under dfs.namenode.checkpoint.dir so that it can be rebuilt again. For the above example, the command will be
$ mv /home/hadoop/tmp/dfs/namesecondary /home/hadoop/tmp/dfs/namesecondary.old
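A timestamped backup name avoids clobbering an earlier `.old` directory if this procedure has been run before. A sketch of the same rename, demonstrated on a throwaway directory (the paths here are stand-ins, not your real checkpoint dir):

```shell
#!/bin/sh
# Demonstrate the rename on a throwaway stand-in for the checkpoint dir.
CKPT_DIR="$(mktemp -d)/namesecondary"             # stand-in path, not the real one
mkdir -p "$CKPT_DIR/current"                      # fake checkpoint contents

BACKUP="${CKPT_DIR}.old.$(date +%Y%m%d%H%M%S)"    # timestamped, never clobbers
mv "$CKPT_DIR" "$BACKUP"

[ ! -d "$CKPT_DIR" ] && [ -d "$BACKUP/current" ] && echo "renamed OK"
```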
Run the following command on the Secondary NameNode:
$ hdfs secondarynamenode -checkpoint force
Then start the secondarynamenode service back:
$ hadoop-daemon.sh start secondarynamenode
All should be back to normal now.