Directory /usr/local/hadoop/tmp/tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible

Solution:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
</property>

Change it to the following:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp/hadoop-${user.name}</value>
</property>
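
After changing hadoop.tmp.dir, the NameNode storage directory under the new location still has to be created and formatted before HDFS will start cleanly. A minimal sketch of the follow-up steps, assuming a Hadoop 2.x single-node install under /usr/local/hadoop (note that formatting erases any existing HDFS metadata):

    cd /usr/local/hadoop
    mkdir -p tmp                    # make sure the parent of hadoop.tmp.dir exists and is writable
    bin/hdfs namenode -format       # recreate dfs/name under the new hadoop.tmp.dir (destroys existing HDFS data)
    sbin/start-dfs.sh               # restart HDFS and confirm the NameNode stays up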
Date: 2024-11-06 23:59:09

Related articles for "Directory /usr/local/hadoop/tmp/tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible"

Error installing software on CentOS: bash: /usr/local/bin/rar: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Googling this shows it happens when a 32-bit program is installed on a 64-bit system. Solution: yum install glibc.i686. After reinstalling there were still similar errors, so more packages had to be installed: error while loading shared libraries: libstdc++.so.6: cannot open
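
A hedged sketch of the fix described above for 64-bit CentOS; glibc.i686 comes from the article, while libstdc++.i686 is an assumption for the follow-on libstdc++.so.6 error:

    # install the 32-bit C and C++ runtimes that 32-bit binaries such as rar need
    yum install -y glibc.i686 libstdc++.i686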

Learning Hadoop: Building a Hadoop Cluster

1. Check the network: from DOS, ping the IP address; on Linux, ifconfig shows the IP information. 2. Change the VM's IP address: open the Linux network connection at the top right of the desktop and edit the IP address; after changing it, restart the network service with service network restart. If the restart fails, delete the connection in the VM's network settings, reboot Linux, set the IP address again in the network settings, and reboot Linux once more. 3. Change the slave node's hostname: vi /etc/sysconfig/network and edit the host
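
Roughly, the commands mentioned above look like the sketch below; the IP address and hostname are placeholders rather than values from the article:

    ping 192.168.1.100            # from the Windows host: check the VM is reachable (placeholder IP)
    ifconfig                      # on Linux: show the current IP configuration
    service network restart       # apply the new IP settings
    vi /etc/sysconfig/network     # on the slave node, set HOSTNAME=slave1 (placeholder name)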

vim /usr/local/apache2/conf/httpd.conf

[root@localhost ~]# vim -n /usr/local/apache2/conf/httpd.conf 1 # 2 # This is the main Apache HTTP server configuration file. It contains the 3 # configuration directives that give the server its instructions. 4 # See <URL:http://httpd.apache.org/

[/usr/local/openssl//.openssl/include/openssl/ssl.h] Error 127

/bin/sh: line 2: ./config: No such file or directory make[1]: *** [/usr/local/ssl/.openssl/include/openssl/ssl.h] Error 127 make[1]: Leaving directory `/usr/local/src/nginx-1.9.9' make: *** [build] Error 2. Note that the Nginx source used for this build is 1.9.9. From the error message we can tell that the failure is because N
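
The explanation is cut off, but a common cause of this error is pointing Nginx's --with-openssl option at an installed OpenSSL prefix instead of an unpacked OpenSSL source tree, so ./config cannot be found. A hedged sketch of a rebuild under that assumption (the OpenSSL source path is a placeholder):

    cd /usr/local/src/nginx-1.9.9
    ./configure --with-http_ssl_module --with-openssl=/usr/local/src/openssl-1.0.2
    make && make install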

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/usr/local/spark/zytdemo

This means the file cannot be found at hdfs://localhost:9000/usr/local/spark/zytdemo; in other words, Spark is not loading a local file but looking it up on HDFS. That is because we earlier modified core-site.xml under /usr/local/hadoop/etc/hadoop, so the path Spark reads from has to be changed to a path on HDFS. Original article: https://www.cnblogs.com/zyt-bg/p/11477449.html
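
A minimal sketch of two ways around it, assuming the data actually lives locally at /usr/local/spark/zytdemo: upload it to HDFS at the path Spark is looking for, or point Spark at the local copy with an explicit file:// URI.

    # option 1: put the local file on HDFS at the expected path
    bin/hdfs dfs -mkdir -p /usr/local/spark
    bin/hdfs dfs -put /usr/local/spark/zytdemo /usr/local/spark/zytdemo

    # option 2: read the local copy directly with a file:// URI, e.g. in spark-shell:
    #   sc.textFile("file:///usr/local/spark/zytdemo")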

Error when installing Hadoop: /usr/local/hadoop-2.6.0-stable/hadoop-2.6.0-src/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml does not exist

Error during installation: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: input file /usr/local/hadoop-2.6.0-stable/hadoop-2.6.0-src/hadoop-hdfs-project/hadoop-hdfs/target/find
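
The excerpt is cut off, but this failure typically appears when the Hadoop source is built with the docs profile and FindBugs is not installed, so findbugsXml.xml never gets generated. A hedged sketch of two common workarounds (the FindBugs install path is a placeholder):

    # workaround 1: build without the docs profile so the FindBugs report is not required
    mvn package -Pdist,native -DskipTests -Dtar

    # workaround 2: install FindBugs and point the build at it before re-running with docs
    export FINDBUGS_HOME=/usr/local/findbugs-1.3.9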

Failed to start MySQL: "Warning: The /usr/local/mysql/data directory is not owned by the 'mysql' or '_mysql'"

1. An upgrade of Mac OS X, or some other cause, can leave the MySQL preference pane showing "Warning: The /usr/local/mysql/data directory is not owned by the 'mysql' or '_mysql'" when MySQL starts or launches at boot. This usually means the owner of /usr/local/mysql/data has been changed at some point; just run "sudo chown -R mysql /usr/local/mysql/data". On a Mac, run "sudo c

Fixing the Hadoop error bin/hadoop fs -ls ls: `.': No such file or directory

Running into this problem is genuinely frustrating. I am on version 2.7, and most forum tutorials cover 1.x, which is maddening. On the current 2.x releases, bin/hadoop fs -ls / does work, so using an absolute path avoids the problem; the same goes for mkdir. I do not know the exact cause, but relative paths give me errors.
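
The likely cause (my inference, the excerpt does not say) is that in Hadoop 2.x a relative path is resolved against the user's HDFS home directory /user/<username>, which does not exist until you create it. A small sketch, assuming the commands are run as root:

    bin/hadoop fs -ls /                  # absolute paths always work
    bin/hdfs dfs -mkdir -p /user/root    # create the HDFS home directory for the current user
    bin/hadoop fs -ls                    # `.` now resolves to /user/root and no longer errors out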

Failed to start MySQL: Warning: The /usr/local/mysql/data directory is not owned by the 'mysql' or '_mysql'

Warning: The /usr/local/mysql/data directory is not owned by the 'mysql' or '_mysql'. This usually means the owner of /usr/local/mysql/data was changed at some point. Solution: open a terminal and run sudo chown -R mysql /usr/local/mysql/data. On a Mac, run sudo chown -R _mysql:wheel /usr/local/mysql/data . -c 显
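
A short sketch of checking and repairing the ownership on macOS; the mysql.server path assumes MySQL was installed from the official package and may differ on your system.

    ls -ld /usr/local/mysql/data                               # see who currently owns the data directory
    sudo chown -R _mysql:wheel /usr/local/mysql/data           # give it back to the MySQL user
    sudo /usr/local/mysql/support-files/mysql.server start     # restart MySQL (path is an assumption)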