Error Records

hadoop-root-datanode-f72e3728f11d.log

[code=plain]

2017-05-19 03:52:54,753 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
2017-05-19 03:52:54,763 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-05-19 03:52:54,765 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-05-19 03:52:55,204 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2017-05-19 03:52:55,220 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/data/in_use.lock acquired by nodename [email protected]
2017-05-19 03:52:55,223 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-root/dfs/data/
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-root/dfs/data: namenode clusterID = CID-7f25bc20-e822-4b15-9063-4da48884cb60; datanode clusterID = CID-911d3bd8-bf2f-4cb7-8401-d470c89798e4
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:745)
2017-05-19 03:52:55,226 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:745)
2017-05-19 03:52:55,227 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2017-05-19 03:52:55,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-05-19 03:52:57,232 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-05-19 03:52:57,233 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-05-19 03:52:57,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
[/code]
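The root cause in the DataNode log is the `Incompatible clusterIDs` exception: the NameNode was evidently reformatted (`hdfs namenode -format` generates a new clusterID) while the DataNode kept its old storage directory under `/tmp/hadoop-root/dfs/data`. A minimal sketch of detecting and reconciling the mismatch, assuming the default `/tmp` paths shown in the log; the demo files below are stand-ins so the sketch is safe to run, not real cluster state:

```shell
#!/bin/sh
# Extract the clusterID field from a Hadoop storage VERSION file.
get_cluster_id() {
  sed -n 's/^clusterID=//p' "$1"
}

# Stand-ins for the two VERSION files, using the IDs from the log above.
mkdir -p /tmp/demo/nn /tmp/demo/dn
echo 'clusterID=CID-7f25bc20-e822-4b15-9063-4da48884cb60' > /tmp/demo/nn/VERSION
echo 'clusterID=CID-911d3bd8-bf2f-4cb7-8401-d470c89798e4' > /tmp/demo/dn/VERSION

nn_id=$(get_cluster_id /tmp/demo/nn/VERSION)
dn_id=$(get_cluster_id /tmp/demo/dn/VERSION)

if [ "$nn_id" != "$dn_id" ]; then
  echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
  # Fix A: copy the NameNode clusterID into the DataNode VERSION file
  # (real path: /tmp/hadoop-root/dfs/data/current/VERSION), then restart:
  #   sed -i "s/^clusterID=.*/clusterID=$nn_id/" /tmp/hadoop-root/dfs/data/current/VERSION
  # Fix B (destructive, discards all blocks on this DataNode):
  #   rm -rf /tmp/hadoop-root/dfs/data && sbin/hadoop-daemon.sh start datanode
fi
```

Note that keeping HDFS storage under `/tmp` (the defaults in play here) is itself fragile, since `/tmp` may be wiped on reboot; pointing `dfs.namenode.name.dir` and `dfs.datanode.data.dir` at a persistent path avoids recurrences.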

hadoop-root-namenode-f72e3728f11d.log
[code=plain]
p-root/dfs/name/current/edits_inprogress_0000000000000005535 -> /tmp/hadoop-root/dfs/name/current/edits_0000000000000005535-0000000000000005536
2017-05-21 01:43:53,830 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 5537
2017-05-21 01:44:53,881 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-05-21 01:44:53,881 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-05-21 01:44:53,881 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 5537
2017-05-21 01:44:53,881 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 27
2017-05-21 01:44:53,885 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 31
2017-05-21 01:44:53,886 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /tmp/hadoop-root/dfs/name/current/edits_inprogress_0000000000000005537 -> /tmp/hadoop-root/dfs/name/current/edits_0000000000000005537-0000000000000005538
2017-05-21 01:44:53,886 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 5539
2017-05-21 01:45:53,931 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-05-21 01:45:53,931 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-05-21 01:45:53,931 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 5539
2017-05-21 01:45:53,931 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 27
2017-05-21 01:45:53,934 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 30
2017-05-21 01:45:53,934 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /tmp/hadoop-root/dfs/name/current/edits_inprogress_0000000000000005539 -> /tmp/hadoop-root/dfs/name/current/edits_0000000000000005539-0000000000000005540
2017-05-21 01:45:53,935 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 5541
[/code]

hadoop-root-secondarynamenode-f72e3728f11d.log
[code=plain]
2017-05-21 01:45:53,969 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -63 namespaceID = 920772853 cTime = 0 ; clusterId = CID-7f25bc20-e822-4b15-9063-4da48884cb60 ; blockpoolId = BP-610225686-172.17.0.2-1495163265222.
Expecting respectively: -63; 260347945; 0; CID-911d3bd8-bf2f-4cb7-8401-d470c89798e4; BP-108388655-172.17.0.4-1495157412977.
        at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
        at java.lang.Thread.run(Thread.java:745)
2017-05-21 01:46:54,020 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -63 namespaceID = 920772853 cTime = 0 ; clusterId = CID-7f25bc20-e822-4b15-9063-4da48884cb60 ; blockpoolId = BP-610225686-172.17.0.2-1495163265222.
Expecting respectively: -63; 260347945; 0; CID-911d3bd8-bf2f-4cb7-8401-d470c89798e4; BP-108388655-172.17.0.4-1495157412977.
        at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
        at java.lang.Thread.run(Thread.java:745)
2017-05-21 01:47:54,072 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -63 namespaceID = 920772853 cTime = 0 ; clusterId = CID-7f25bc20-e822-4b15-9063-4da48884cb60 ; blockpoolId = BP-610225686-172.17.0.2-1495163265222.
Expecting respectively: -63; 260347945; 0; CID-911d3bd8-bf2f-4cb7-8401-d470c89798e4; BP-108388655-172.17.0.4-1495157412977.
        at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
        at java.lang.Thread.run(Thread.java:745)
[/code]
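The SecondaryNameNode errors share the same root cause: its checkpoint storage still carries the pre-format namespaceID/clusterID (260347945 / CID-911d3bd8-...), while the reformatted NameNode now reports the new ones (920772853 / CID-7f25bc20-...), so `CheckpointSignature.validateStorageInfo` rejects every checkpoint. A common remedy is to clear the checkpoint directory so the next `doCheckpoint` re-seeds from the NameNode. A sketch on a stand-in directory, assuming the default checkpoint dir `/tmp/hadoop-root/dfs/namesecondary` (an assumption; the logs do not show the configured value):

```shell
#!/bin/sh
# Stand-in for the SecondaryNameNode checkpoint dir, so this is safe to run;
# on the real cluster, substitute /tmp/hadoop-root/dfs/namesecondary.
DEMO_CKPT=/tmp/demo_namesecondary
mkdir -p "$DEMO_CKPT/current"
echo 'namespaceID=260347945' > "$DEMO_CKPT/current/VERSION"   # stale storage info

# On a real cluster: stop the SecondaryNameNode first, then remove the stale
# checkpoint storage; the next doCheckpoint downloads a fresh fsimage and
# adopts the NameNode's current clusterID and namespaceID.
rm -rf "$DEMO_CKPT"
[ ! -d "$DEMO_CKPT" ] && echo "stale checkpoint storage cleared"
```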

yarn-root-nodemanager-f72e3728f11d.log

[code=plain]
2017-05-19 03:53:09,381 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-05-19 03:53:09,381 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-05-19 03:53:09,390 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /node/*
2017-05-19 03:53:09,390 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2017-05-19 03:53:09,923 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2017-05-19 03:53:09,927 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8042
2017-05-19 03:53:09,927 INFO org.mortbay.log: jetty-6.1.26
2017-05-19 03:53:09,958 INFO org.mortbay.log: Extract jar:file:/root/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
2017-05-19 03:53:11,645 INFO org.mortbay.log: Started [email protected]:8042
2017-05-19 03:53:11,645 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app node started at 8042
2017-05-19 03:53:11,653 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8031
2017-05-19 03:53:11,682 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2017-05-19 03:53:11,689 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
2017-05-19 03:53:11,924 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id 1923995252
2017-05-19 03:53:11,927 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 1202604938
2017-05-19 03:53:11,927 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as f72e3728f11d:33073 with total resource of <memory:8192, vCores:8>
2017-05-19 03:53:11,927 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
2017-05-20 03:53:07,184 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id 1923995253
2017-05-20 03:53:07,185 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 1202604939
[/code]

yarn-root-resourcemanager-f72e3728f11d.log

[code=plain]
2017-05-19 03:53:11,212 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-05-19 03:53:11,212 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
2017-05-19 03:53:11,902 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved f72e3728f11d to /default-rack
2017-05-19 03:53:11,904 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node f72e3728f11d(cmPort: 33073 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId f72e3728f11d:33073
2017-05-19 03:53:11,908 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: f72e3728f11d:33073 Node Transitioned from NEW to RUNNING
2017-05-19 03:53:11,912 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node f72e3728f11d:33073 clusterResource: <memory:8192, vCores:8>
2017-05-19 04:03:06,854 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Release request cache is cleaned up
2017-05-20 03:53:07,002 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens
2017-05-20 03:53:07,003 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating AMRMToken
2017-05-20 03:53:07,004 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2017-05-20 03:53:07,004 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Going to activate master-key with key-id 1923995253 in 900000ms
2017-05-20 03:53:07,004 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2017-05-20 03:53:07,004 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Going to activate master-key with key-id 1202604939 in 900000ms
2017-05-20 03:53:11,000 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2017-05-20 03:53:11,000 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 3
2017-05-20 03:53:11,001 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2017-05-20 03:53:13,175 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2017-05-20 04:08:07,004 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Activating next master key with id: -1734027871
2017-05-20 04:08:07,005 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating AMRMToken
2017-05-20 04:08:07,005 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Activating next master key with id: 1923995253
2017-05-20 04:08:07,005 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Activating next master key with id: 1202604939
[/code]

Time: 2024-10-08 10:22:00
