How to Force a Metadata Checkpoint in HDFS
Published: 2019-05-11


Metadata checkpointing in HDFS is performed by the Secondary NameNode, which periodically merges the fsimage and edits log files to keep the edits log size within a limit. For various reasons, checkpointing by the Secondary NameNode may fail. For example, the Secondary NameNode may show errors in its log as follows.


2017-08-06 10:54:14,488 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -63 namespaceID = 1920275013 cTime = 0 ; clusterId = CID-f38880ba-3415-4277-8abf-b5c2848b7a63 ; blockpoolId = BP-578888813-10.6.1.2-1497278556180.
Expecting respectively: -63; 263120692; 0; CID-d22222fd-e28a-4b2d-bd2a-f60e1f0ad1b1; BP-622207878-10.6.1.2-1497242227638.
	at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)

This post introduces how to force a metadata checkpoint in HDFS.


Step one: Save the latest HDFS metadata to the fsimage by the NameNode

On the NameNode, save the latest metadata to the fsimage as the HDFS super user (e.g. the user that runs the HDFS daemons) by running the following commands:


$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -safemode get  # to confirm and ensure it is in safemode
$ hdfs dfsadmin -saveNamespace
$ hdfs dfsadmin -safemode leave
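To confirm that a fresh fsimage was actually written, you can list the NameNode's metadata directory. This is a quick sanity check, not part of the original procedure, and it assumes dfs.namenode.name.dir contains a single local file:// entry (on some clusters it is a comma-separated list):

$ NAME_URI=$(hdfs getconf -confKey dfs.namenode.name.dir)
$ NAME_DIR=${NAME_URI#file://}  # strip the file:// scheme
$ ls -lt "$NAME_DIR/current" | grep fsimage | head  # newest fsimage should match the saveNamespace time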

Step two: Clean the Secondary NameNode's old data directory

On the Secondary NameNode, as the HDFS super user, stop the Secondary NameNode service.


$ hadoop-daemon.sh stop secondarynamenode

Use jps to make sure the secondarynamenode process is indeed stopped.
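For example (the exact jps output depends on which daemons run on this host):

$ jps | grep -i secondarynamenode  # should print nothing once the daemon is stopped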


Find out the value of dfs.namenode.checkpoint.dir for the Secondary NameNode:


$ hdfs getconf -confKey dfs.namenode.checkpoint.dir

An example output is:


file:///home/hadoop/tmp/dfs/namesecondary

Then, move or rename the directory configured as dfs.namenode.checkpoint.dir so that it can be rebuilt from scratch. For the above example, the command is:


$ mv /home/hadoop/tmp/dfs/namesecondary /home/hadoop/tmp/dfs/namesecondary.old
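If you prefer not to hard-code the path, the move can be scripted from the configuration. A minimal sketch, again assuming a single local file:// entry in dfs.namenode.checkpoint.dir:

$ CKPT_URI=$(hdfs getconf -confKey dfs.namenode.checkpoint.dir)
$ CKPT_DIR=${CKPT_URI#file://}  # strip the file:// scheme
$ mv "$CKPT_DIR" "${CKPT_DIR}.old"  # the directory is rebuilt at the next checkpoint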

Step three: Force an HDFS metadata checkpoint by the Secondary NameNode

Run the following command on the Secondary NameNode:


$ hdfs secondarynamenode -checkpoint force

Then start the secondarynamenode daemon back up:

$ hadoop-daemon.sh start secondarynamenode

Everything should be back to normal now.
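To double-check, you can verify that the checkpoint directory was rebuilt and scan the Secondary NameNode log for a successful checkpoint. The log file name below follows the default Hadoop log layout and may differ on your installation:

$ ls -lt /home/hadoop/tmp/dfs/namesecondary/current | head  # should contain a fresh fsimage
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-secondarynamenode-*.log  # no more "Inconsistent checkpoint fields" errors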
