Spark throws java.lang.ClassNotFoundException when reading LZO-compressed files

Well, this is a problem I had honestly never paid attention to before, but today I'm writing it up anyway.

Configuration

Hadoop core-site.xml configuration:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.LzmaCodec</value>
    </property>

    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>

So the LZO codec (com.hadoop.compression.lzo.LzoCodec) is among the configured compression codecs.
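
Before touching Spark at all, a quick check I'd add here (my own addition, not in the original post): verify from spark-shell that the driver JVM can even load the codec class named above. If this throws, no amount of XML tweaking will help.

    // Throws java.lang.ClassNotFoundException when the hadoop-lzo jar is
    // missing from the driver classpath -- the root cause of the error below.
    Class.forName("com.hadoop.compression.lzo.LzoCodec")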

spark-env.sh configuration:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/:/home/cluster/apps/hadoop/share/hadoop/common/:/home/cluster/apps/hadoop/share/hadoop/common/lib/:/home/cluster/apps/hadoop/share/hadoop/hdfs/:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/:/home/cluster/apps/hadoop/share/hadoop/tools/lib/:/home/cluster/apps/spark/spark-1.4.1/lib/

Steps to reproduce

Start spark-shell and run the following code:

val lzoFile = sc.textFile("/tmp/data/lzo/part-m-00000.lzo")
lzoFile.count

The full error:

java.lang.RuntimeException: Error in configuring object 
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) 
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75) 
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) 
        at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190) 
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781) 
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885) 
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) 
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) 
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:286) 
        at org.apache.spark.rdd.RDD.collect(RDD.scala:884) 
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:105) 
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:503) 
        at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:58) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283) 
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665) 
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170) 
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193) 
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112) 
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.lang.reflect.InvocationTargetException 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106) 
        ... 45 more 
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found. 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135) 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175) 
        at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45) 
        ... 50 more 
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found 
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803) 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128) 
        ... 52 more
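
Reading the trace bottom-up helps: textFile uses TextInputFormat, whose configure() builds a Hadoop CompressionCodecFactory, and that factory tries to instantiate every class listed in io.compression.codecs — so a single unloadable codec class fails the whole job, even for files that don't use it. The lookup can be reproduced in isolation (a small sketch using the standard Hadoop API, runnable in spark-shell):

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.compress.CompressionCodecFactory

    // CompressionCodecFactory instantiates every class named in
    // io.compression.codecs; if one cannot be loaded, this constructor
    // throws the same IllegalArgumentException seen in the trace above.
    val codecFactory = new CompressionCodecFactory(sc.hadoopConfiguration)
    // Resolves the codec by file extension (prints the matched codec or null).
    println(codecFactory.getCodec(new Path("/tmp/data/lzo/part-m-00000.lzo")))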

So how do we fix it?

At first I suspected a formatting problem in the Hadoop core-site.xml, so I had a colleague help me trace through the Hadoop source code — we could safely rule Hadoop out.
Then I remembered hitting a similar problem before, which I had fixed with this spark-env.sh configuration:

export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh5.1.0/lib/native/Linux-amd64-64/*:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh5.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh5.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh5.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh5.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*

That configuration had fixed this exact problem once before, but it was so long ago that I had completely forgotten about it — which is exactly why it pays to blog every problem you run into.
So: pointing at a specific .jar works. But does Spark really require * to pick up all the jars under a directory? That is genuinely different from Hadoop: in Hadoop we list a jar directory as /xxx/yyy/lib/, while Spark needs /xxx/yyy/lib/* to load the jars in that directory, otherwise you get the error above. (The underlying reason: SPARK_CLASSPATH is handed straight to the JVM as a classpath, where a bare directory only contributes loose .class files and only the dir/* wildcard expands to jars, whereas Hadoop's launcher scripts expand directories into jar lists themselves.)
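
To see what actually ended up on the driver's classpath, a small check like this helps (my own sketch, runnable in spark-shell; java.class.path reflects the expanded SPARK_CLASSPATH):

    // Print every classpath entry mentioning "lzo"; if nothing prints,
    // the hadoop-lzo jar never reached the driver JVM.
    System.getProperty("java.class.path")
      .split(java.io.File.pathSeparator)
      .filter(_.toLowerCase.contains("lzo"))
      .foreach(println)

(These days the same jar is more commonly handed to spark-shell via its --jars or --driver-class-path options rather than by editing SPARK_CLASSPATH.)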

The fixed spark-env.sh configuration:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

With this in place, running the code above again works fine.
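
A side note that goes beyond the original problem: sc.textFile decompresses the .lzo file but reads it as a single split. The hadoop-lzo project also provides com.hadoop.mapreduce.LzoTextInputFormat, which can split indexed LZO files (those with a .lzo.index file next to the data). A sketch, assuming hadoop-lzo is on the classpath:

    import org.apache.hadoop.io.{LongWritable, Text}
    import com.hadoop.mapreduce.LzoTextInputFormat

    // Records are (byte offset, line); keep only the line text.
    val lzoLines = sc.newAPIHadoopFile(
      "/tmp/data/lzo/part-m-00000.lzo",
      classOf[LzoTextInputFormat],
      classOf[LongWritable],
      classOf[Text]
    ).map(_._2.toString)
    lzoLines.count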

But, but, but...

If I change /home/cluster/apps/hadoop/lib/native to /home/cluster/apps/hadoop/lib/native/*, like this:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

...then, damn it, it blows up with the following error:

spark.repl.class.uri=http://10.32.24.78:52753) error [Ljava.lang.StackTraceElement;@4efb0b1f
2015-09-11 17:52:02,357 ERROR [main] spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:69)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:513)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
    at $line3.$read$$iwC$$iwC.<init>(<console>:9)
    at $line3.$read$$iwC.<init>(<console>:18)
	at $line3.$read.<init>(<console>:20)
	at $line3.$read$.<init>(<console>:24)
	at $line3.$read$.<clinit>(<console>)
	at $line3.$eval$.<init>(<console>:7)
	at $line3.$eval$.<clinit>(<console>)
	at $line3.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
	at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
	at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
	at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
	at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
	at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
	at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
	at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
	at org.apache.spark.repl.Main$.main(Main.scala:31)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException
    at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:155)
    ... 56 more

At this point, all I can say is:

You city folks sure know how to have fun — I am thoroughly defeated~
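
For what it's worth, my best explanation (an inference on my part, not something confirmed above): LD_LIBRARY_PATH and SPARK_LIBRARY_PATH are directory lists for the native-library loader, not JVM classpaths, so the * wildcard means nothing there — and a shell variable assignment does not glob-expand either, so the literal string /home/cluster/apps/hadoop/lib/native/* replaces the one valid native directory. Native libraries then stop resolving, which is consistent with Snappy (Spark 1.4's default io compression codec, created by EventLoggingListener in the trace above) failing to initialize. One way to see what the driver actually received:

    // Inspect the JVM's native-library search path; with the broken config
    // a literal ".../lib/native/*" entry shows up instead of a usable directory.
    System.getProperty("java.library.path")
      .split(java.io.File.pathSeparator)
      .foreach(println)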

Respect original work, please do not repost: http://blog.csdn.net/stark_summer/article/details/48375999
