I. Spark errors

1. java.lang.UnsupportedOperationException: empty collection

17/07/17 15:34:55 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.UnsupportedOperationException: empty collection
java.lang.UnsupportedOperationException: empty collection
	at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$apply$40.apply(RDD.scala:1027)
	at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$apply$40.apply(RDD.scala:1027)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1027)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
	at sparkoffline.DayCount$.dayCount(DayCount.scala:44)
	at sparkoffline.Main$.main(Main.scala:35)
	at sparkoffline.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:558)
17/07/17 15:34:55 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.UnsupportedOperationException: empty collection)
17/07/17 15:34:55 INFO spark.SparkContext: Invoking stop() from shutdown hook

  Spark filters data out of HBase to build an RDD and then runs a computation on it. The error roughly means that the data filtered from HBase came back empty; in other words, the job called reduce (DayCount.scala:44 in the trace above) on an empty RDD, and RDD.reduce throws UnsupportedOperationException: empty collection in that case.
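
A minimal sketch of the usual guard (the object and variable names here are hypothetical, not from the original job): test isEmpty() before reducing, or use fold with a zero element, which is defined even for an empty RDD.

    import org.apache.spark.{SparkConf, SparkContext}

    object EmptyRddGuard {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("EmptyRddGuard"))
        // Stand-in for the RDD filtered out of HBase; empty on purpose here.
        val counts = sc.parallelize(Seq.empty[Long])

        // reduce throws UnsupportedOperationException on an empty RDD,
        // so check isEmpty() first...
        val total = if (counts.isEmpty()) 0L else counts.reduce(_ + _)

        // ...or use fold, which starts from the zero element and never
        // throws on empty input.
        val totalViaFold = counts.fold(0L)(_ + _)

        println(s"total=$total, totalViaFold=$totalViaFold")
        sc.stop()
      }
    }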

2. org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle

org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 12
	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:548)
	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:544)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
	at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:544)
	at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:155)
	at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:47)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:98)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745) 

org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle
Solution: this usually shows up in jobs with heavy shuffle activity. The message means a reducer could not find the map output it needs, typically because the executor that held it was lost (often killed for running out of memory); tasks keep failing and being retried in a loop until the application finally fails. The common fix is to increase executor memory, and to increase the cores per executor at the same time, so that running fewer, larger executors does not reduce task parallelism.
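
As a rough illustration (the 8g and 4-core figures below are made-up example values, not recommendations from the original post), that tuning can be set programmatically on the SparkConf:

    import org.apache.spark.{SparkConf, SparkContext}

    object TunedJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("TunedJob")
          // More heap per executor, so executors are less likely to be
          // killed and lose their shuffle map output:
          .set("spark.executor.memory", "8g")
          // More cores per executor, so fewer-but-bigger executors
          // still run the same number of tasks in parallel:
          .set("spark.executor.cores", "4")
        val sc = new SparkContext(conf)
        // ... job body ...
        sc.stop()
      }
    }

The same two settings map to the --executor-memory and --executor-cores flags of spark-submit.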

Alternatively, change the code to replace the expensive shuffle operator, for example by using reduceByKey instead of groupByKey, as in the sketch below.
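
A minimal sketch of that substitution (the data is a toy example): both versions compute per-key sums, but reduceByKey pre-aggregates on the map side before the shuffle, so far less data crosses the network.

    import org.apache.spark.{SparkConf, SparkContext}

    object ShuffleLight {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ShuffleLight"))
        val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

        // groupByKey ships every individual value across the network,
        // then sums on the reduce side:
        val viaGroup = pairs.groupByKey().mapValues(_.sum)

        // reduceByKey combines values within each partition first,
        // shrinking the shuffle to at most one record per key per partition:
        val viaReduce = pairs.reduceByKey(_ + _)

        // Both print (a,4) and (b,2).
        viaGroup.collect().foreach(println)
        viaReduce.collect().foreach(println)
        sc.stop()
      }
    }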
