Could not locate executable null: How to Fix It

Questions this post addresses:

1. You create a MapReduce Project and at runtime hit "Could not locate executable null" — how do you fix it?
2. How do you fix "Could not locate executable ....\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe in the Hadoop binaries."?

1. Create a MapReduce Project and run it; the following error appears:

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Stepping into the code shows this is a HADOOP_HOME problem: if HADOOP_HOME is empty, fullExeName inevitably becomes null\bin\winutils.exe. The fix is simple: go set the environment variable properly. If you don't want to restart the machine, you can add System.setProperty("hadoop.home.dir", "..."); to your MapReduce program as a temporary stopgap, as in the sketch below.
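For example, a minimal driver sketch of that stopgap (the class name is illustrative; the unpack path is the one from my machine, substitute your own):

  import org.apache.hadoop.conf.Configuration;

  public class WordCountDriver {
    public static void main(String[] args) throws Exception {
      // Temporary workaround: set hadoop.home.dir before any Hadoop class
      // triggers Shell's static initializer. The path is my local unpack
      // location; adjust it to yours.
      System.setProperty("hadoop.home.dir",
          "D:\\Hadoop\\tar\\hadoop-2.2.0\\hadoop-2.2.0");

      Configuration conf = new Configuration();
      // ... configure and submit the Job as usual ...
    }
  }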

org.apache.hadoop.util.Shell.java

  public static final String getQualifiedBinPath(String executable)
  throws IOException {
    // construct hadoop bin path to the specified executable
    String fullExeName = HADOOP_HOME_DIR + File.separator + "bin"
      + File.separator + executable;

    File exeFile = new File(fullExeName);
    if (!exeFile.exists()) {
      throw new IOException("Could not locate executable " + fullExeName
        + " in the Hadoop binaries.");
    }

    return exeFile.getCanonicalPath();
  }

  private static String HADOOP_HOME_DIR = checkHadoopHome();

  private static String checkHadoopHome() {

    // first check the Dflag hadoop.home.dir with JVM scope
    String home = System.getProperty("hadoop.home.dir");

    // fall back to the system/user-global env variable
    if (home == null) {
      home = System.getenv("HADOOP_HOME");
    }
    ...
  }

2. Now fullExeName resolves to a full path; on my machine it is D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe. Running the code again turns up another error:

Could not locate executable D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe in the Hadoop binaries.

Looking in that directory, there is no winutils.exe to be found. Download one from https://github.com/srccodes/hadoop-common-2.2.0-bin and drop it in. A quick sanity check that the lookup now succeeds is sketched below.
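This check simply mirrors the resolution order in Shell.checkHadoopHome() above; the class name is just for illustration:

  import java.io.File;

  public class WinutilsCheck {
    public static void main(String[] args) {
      // Same order as Shell.checkHadoopHome(): JVM property first,
      // then the HADOOP_HOME environment variable.
      String home = System.getProperty("hadoop.home.dir");
      if (home == null) {
        home = System.getenv("HADOOP_HOME");
      }
      if (home == null) {
        System.out.println("hadoop.home.dir / HADOOP_HOME not set");
        return;
      }
      File exe = new File(home, "bin" + File.separator + "winutils.exe");
      System.out.println(exe + " exists: " + exe.exists());
    }
  }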
3. The next problem appears:

  at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
  at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
  at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:435)


Stepping further into org.apache.hadoop.util.Shell.java:

  public static String[] getSetPermissionCommand(String perm, boolean recursive,
                                                 String file) {
    String[] baseCmd = getSetPermissionCommand(perm, recursive);
    String[] cmdWithFile = Arrays.copyOf(baseCmd, baseCmd.length + 1);
    cmdWithFile[cmdWithFile.length - 1] = file;
    return cmdWithFile;
  }

  /** Return a command to set permission */
  public static String[] getSetPermissionCommand(String perm, boolean recursive) {
    if (recursive) {
      return (WINDOWS) ? new String[] { WINUTILS, "chmod", "-R", perm }
                       : new String[] { "chmod", "-R", perm };
    } else {
      return (WINDOWS) ? new String[] { WINUTILS, "chmod", perm }
                       : new String[] { "chmod", perm };
    }
  }


The cmdWithFile array comes out as {"D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe", "chmod", "755", "xxxfile"}. Running that command on its own in cmd produced:

The program can't start because MSVCR100.dll is missing from your computer.

So download one from http://files.cnblogs.com/sirkevin/msvcr100.rar and drop it into C:\Windows\System32. Running the command in cmd again raises the next problem:

The application was unable to start correctly (0xc000007b).

Download DirectX_Repair from http://blog.csdn.net/vbcom/article/details/7245186 to fix this. Remember to restart the machine after the repair. After that, the cmd test runs cleanly.
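If you would rather script that manual test than retype it in cmd, a throwaway sketch like the following runs the same command and prints whatever winutils reports (the class name is illustrative; the path and "xxxfile" are the placeholders from the cmdWithFile array above):

  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  public class WinutilsChmodTest {
    public static void main(String[] args) throws Exception {
      // Exactly the command Shell builds in cmdWithFile; "xxxfile" is a
      // placeholder target file.
      ProcessBuilder pb = new ProcessBuilder(
          "D:\\Hadoop\\tar\\hadoop-2.2.0\\hadoop-2.2.0\\bin\\winutils.exe",
          "chmod", "755", "xxxfile");
      pb.redirectErrorStream(true);
      Process p = pb.start();
      try (BufferedReader r = new BufferedReader(
          new InputStreamReader(p.getInputStream()))) {
        String line;
        while ((line = r.readLine()) != null) {
          System.out.println(line);
        }
      }
      System.out.println("exit code: " + p.waitFor());
    }
  }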
4. At this point the end is in sight, but another problem turns up:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

Stepping into the code:

  /** Windows only method used to check if the current process has requested
   *  access rights on the given path. */
  private static native boolean access0(String path, int requestedAccess);


Clearly a DLL is missing. Remember the download from https://github.com/srccodes/hadoop-common-2.2.0-bin? It contains hadoop.dll. The cleanest fix is to replace your local Hadoop bin directory with the hadoop-common-2.2.0-bin-master/bin directory, add HADOOP_HOME/bin to PATH in the environment variables, and restart the machine.
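To confirm the native library now loads without running a full job, a quick check like this works (NativeCodeLoader is a Hadoop utility class; the wrapper class name here is just for illustration):

  import org.apache.hadoop.util.NativeCodeLoader;

  public class NativeCheck {
    public static void main(String[] args) {
      // Touching NativeCodeLoader runs its static initializer, which
      // attempts System.loadLibrary("hadoop").
      System.out.println("native hadoop loaded: "
          + NativeCodeLoader.isNativeCodeLoaded());
    }
  }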
Download links:

hadoop family, storm, spark, Linux, flume and other jars and installers, consolidated downloads (continually updated)

Things to watch:
The environment variables must be configured correctly, or it still won't run.
PATH=HADOOP_HOME/bin; if that doesn't work, switch to the absolute path.
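One way to double-check that PATH entry from inside the JVM (the class name is illustrative; note the JVM only sees the PATH it was launched with, so restart the IDE or shell after editing environment variables):

  public class PathCheck {
    public static void main(String[] args) {
      String home = System.getenv("HADOOP_HOME");
      String path = System.getenv("PATH");
      System.out.println("HADOOP_HOME = " + home);
      // Case-insensitive match, since Windows paths are case-insensitive.
      System.out.println("PATH contains HADOOP_HOME\\bin: "
          + (home != null && path != null
             && path.toLowerCase().contains((home + "\\bin").toLowerCase())));
    }
  }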

5. Finally, the correct MapReduce output appears: output99.

Summary

  • The hadoop eclipse plugin is not required; as I saw it, its role comes down to the three points below (this turns out to be a mistaken view; see http://zy19982004.iteye.com/blog/2031172 for details). study-hadoop is an ordinary project, and running it directly (without going through the Run on Hadoop elephant) debugs into MapReduce just the same.

      • Visualizing the files in hadoop.
      • Pulling in the dependency jars when you create a MapReduce Project.
      • Configuration conf = new Configuration(); already carries all the configuration information.
  • Better still, download the hadoop 2.2 source and build it yourself; that should avoid these problems entirely (not verified personally).

6. Other issues

    • Still seeing:

      Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z


      Step into org.apache.hadoop.util.NativeCodeLoader.java:

      static {
        // Try to load native hadoop library and set fallback flag appropriately
        if (LOG.isDebugEnabled()) {
          LOG.debug("Trying to load the custom-built native-hadoop library...");
        }
        try {
          System.loadLibrary("hadoop");
          LOG.debug("Loaded the native-hadoop library");
          nativeCodeLoaded = true;
        } catch (Throwable t) {
          // Ignore failure to load
          if (LOG.isDebugEnabled()) {
            LOG.debug("Failed to load native-hadoop with error: " + t);
            LOG.debug("java.library.path=" +
                System.getProperty("java.library.path"));
          }
        }
        if (!nativeCodeLoaded) {
          LOG.warn("Unable to load native-hadoop library for your platform... " +
              "using builtin-java classes where applicable");
        }
      }


      Here the error reads:

      DEBUG org.apache.hadoop.util.NativeCodeLoader - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: HADOOP_HOME\bin\hadoop.dll: Can't load AMD 64-bit .dll on a IA 32-bit platform


      The suspect was a 32-bit JDK; after switching to a 64-bit one, the problem was gone (see the bitness-check sketch at the end of this section):

      2014-03-11 19:43:08,805 DEBUG org.apache.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
      2014-03-11 19:43:08,812 DEBUG org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library


      This also clears up a common warning:

      WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

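      To tell up front whether a JVM is 32- or 64-bit, a tiny check like this helps (the class name is illustrative; os.arch reports the JVM's architecture rather than the OS's, and sun.arch.data.model is HotSpot-specific):

      public class JvmBitness {
        public static void main(String[] args) {
          // "amd64" / "x86_64" mean a 64-bit JVM, "x86" a 32-bit one.
          System.out.println("os.arch = " + System.getProperty("os.arch"));
          // HotSpot also exposes the data model directly ("32" or "64").
          System.out.println("sun.arch.data.model = "
              + System.getProperty("sun.arch.data.model"));
        }
      }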
