Livy Java API error: org.apache.livy.shaded.kryo.kryo.KryoException: Unable to find class: com.xxx.wordcount.WordCountJavaSpark

Livy Java API

Dependencies

<dependency>
  <groupId>org.apache.livy</groupId>
  <artifactId>livy-client-http</artifactId>
  <version>0.5.0-incubating</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.2.2</version>
</dependency>

The job class

import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.livy.Job;
import org.apache.livy.JobContext;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

public class WordCountJavaSpark implements Job<Object> {
    /**
     * call() holds the job logic that runs inside the remote Spark context.
     * @param jobContext context supplied by Livy, wraps the SparkContext
     * @return word counts as a Map
     * @throws Exception
     */
    @Override
    public Object call(JobContext jobContext) throws Exception {
        JavaSparkContext sc = jobContext.sc();
        Map<String, Integer> mp = new HashMap<String, Integer>();

        // To resolve an HDFS HA path here, HADOOP_CONF_DIR must be set in Livy's livy-env.sh
        JavaRDD<String> javaRDD = sc.textFile("hdfs://myha/data/livy/zookeeper.out");
        JavaRDD<String> flatMappedRDD = javaRDD.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) throws Exception {
                return Arrays.asList(s.split(" ")).iterator();
            }
        });

        JavaPairRDD<String, Integer> mappedRDD = flatMappedRDD.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) throws Exception {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        JavaPairRDD<String, Integer> reducedPairRDD = mappedRDD.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer a, Integer b) throws Exception {
                return a + b;
            }
        });

        reducedPairRDD.collect().forEach((tuple2) -> {
            System.out.println(tuple2._1 + "<===>" + tuple2._2);
            mp.put(tuple2._1, tuple2._2);
        });

        return mp;
    }
}
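As the comment in the job notes, reading an HDFS HA nameservice path such as hdfs://myha requires Livy to find the Hadoop client configuration. A minimal sketch of the relevant lines in conf/livy-env.sh; the directory paths are assumptions for a typical install and must be adjusted:

```shell
# conf/livy-env.sh -- point Livy at the Hadoop client configs
# so hdfs://myha (an HA nameservice, not a hostname) can be resolved.
# Paths below are illustrative, adjust to your installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HOME=/opt/spark
```

Restart the Livy server after changing livy-env.sh so the new environment takes effect.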

The launcher program

import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.HashMap;
import java.util.concurrent.ExecutionException;

import org.apache.livy.LivyClient;
import org.apache.livy.LivyClientBuilder;

public class StartApp {
    private static LivyClient client = null;

    public static void main(String[] args) {

        String livyURI = "http://192.168.128.100:8998";

        // Location of the jar containing the job class
        String file = "/Users/chenzhuanglan/WorkSpace/livyjavatest/out/artifacts/livyWC/WordCountJavaSpark.jar";

        try {
            client = new LivyClientBuilder().setURI(new URI(livyURI)).build();

            System.err.printf("Uploading %s to the Spark context...\n", file);
            // Upload the Spark job jar to the server so it is added to the application classpath
            client.uploadJar(new File(file)).get();

            System.err.printf("Running WordCountJavaSpark ...\n");
            // Submit the job and block until the result comes back
            HashMap<String, Integer> map = (HashMap<String, Integer>) client.submit(new WordCountJavaSpark()).get();

            map.forEach((k, v) -> System.out.println(k + "===" + v));

        } catch (IOException | URISyntaxException | InterruptedException | ExecutionException e) {
            e.printStackTrace();
        } finally {
            if (client != null) {
                // Passing true also shuts down the remote Spark context
                client.stop(true);
            }
        }
    }
}
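For reference, the result the job hands back is an ordinary word count: the flatMap / mapToPair / reduceByKey pipeline above computes the same thing as this plain-Java sketch, which can be used to sanity-check expected output without Spark or Livy. The class name and input lines are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    // Same split / pair / reduce-by-key logic as the Spark job, on in-memory lines
    public static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                // merge() plays the role of reduceByKey: insert 1, or add 1 to the existing count
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = count(new String[]{"a b a", "b c"});
        m.forEach((k, v) -> System.out.println(k + "<===>" + v));
    }
}
```

Running this prints each word with its count, mirroring the `tuple2._1 + "<===>" + tuple2._2` output of the Spark job.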

Note, note, note!

log4j:WARN No appenders could be found for logger (org.apache.livy.shaded.apache.http.client.protocol.RequestAddCookies).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Uploading /Users/chenzhuanglan/WorkSpace/testwcjava/target/testwcjava-1.0-SNAPSHOT.jar to the Spark context...
Running WordCountJavaSpark ...
java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.livy.shaded.kryo.kryo.KryoException: Unable to find class: com.czlan.wordcount.WordCountJavaSpark
org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
org.apache.livy.shaded.kryo.kryo.Kryo.readClass(Kryo.java:656)
org.apache.livy.shaded.kryo.kryo.Kryo.readClassAndObject(Kryo.java:767)
org.apache.livy.client.common.Serializer.deserialize(Serializer.java:63)
org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:39)
org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:27)
org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:42)
org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:27)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
    at org.apache.livy.client.http.JobHandleImpl.get(JobHandleImpl.java:198)
    at org.apache.livy.client.http.JobHandleImpl.get(JobHandleImpl.java:88)
    at com.czlan.wordcount.StartApp2.main(StartApp2.java:32)

The error above occurred because

client.uploadJar(new File(file)).get(); had mistakenly been written as client.uploadFile(new File(file)).get();

uploadJar uploads a jar that is added to the Spark application's classpath.

uploadFile uploads a file to be passed to the Spark application; it is not added to the classpath, so classes inside it cannot be found by the deserializer.

Original post: https://www.cnblogs.com/czlan91/p/10353139.html

Date: 2024-10-09 23:08:16
