Setting Up a Spark Development Environment with IntelliJ IDEA and Maven

I. Install the JDK

Use JDK 1.7 or later, and set the environment variables (e.g., JAVA_HOME and PATH). The installation steps are omitted here.

II. Install Maven

I chose Maven 3.3.3; the installation steps are omitted here.

Edit the conf/settings.xml file in the Maven installation directory:

<!-- Change where the local Maven repository is stored -->
<localRepository>D:\maven-repository\repository</localRepository>
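For context, this element sits directly under the root <settings> element. A minimal settings.xml sketch might look like the following; the repository path is only an example and should point to wherever you want Maven to cache downloaded artifacts:

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <!-- Local repository: where Maven caches downloaded dependencies -->
  <localRepository>D:\maven-repository\repository</localRepository>
</settings>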

III. Install IntelliJ IDEA

The installation steps are omitted here.

IV. Create the Spark Project

1. Create a new Spark project.

2. Choose Maven and create the project from an archetype (template).

3. Fill in the project's GroupId, ArtifactId, and so on.

4. Select the locally installed Maven and its settings file.

5. Click Next.

6. Once the project is created, review the new project structure.

7. Enable auto-import so the Maven pom file is updated automatically.

8. Build the project.

If an error like the one below appears, it is caused by the JUnit version; you can delete the generated test classes and the JUnit-related dependency in pom.xml.

That is, delete these two Scala test classes:

and the JUnit dependency in pom.xml:

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
    </dependency>

9. Refresh the Maven dependencies.

10. Add the JDK and the Scala SDK libraries to the project (a pom-based alternative is sketched below).
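This step is normally done through IDEA's Project Structure dialog (screenshots omitted). If you prefer to manage the Scala SDK through Maven instead, the Scala library and compiler plugin can be declared in pom.xml. The snippet below is only a sketch; the 2.10.5 and 3.2.2 versions are assumptions chosen to match the _2.10 Spark artifacts used later.

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.10.5</version>
    </dependency>

    <build>
      <plugins>
        <plugin>
          <groupId>net.alchim31.maven</groupId>
          <artifactId>scala-maven-plugin</artifactId>
          <version>3.2.2</version>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>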

11. Add the required dependencies, including Hadoop and Spark, to pom.xml:

    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.1.1</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.1</version>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.9</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.7.1</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>

Then refresh the Maven dependencies.

12. Create a new Scala object.

The test code is:

import org.apache.spark.{SparkConf, SparkContext}

object Test {
  def main(args: Array[String]) {
    println("Hello World!")
    val sparkConf = new SparkConf().setMaster("local").setAppName("test")
    val sparkContext = new SparkContext(sparkConf)
  }
}

Run it.

If the following error is reported:

java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package
	at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
	at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:136)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:129)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:98)
	at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:110)
	at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:101)
	at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:78)
	at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
	at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:62)
	at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:61)
	at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala:74)
	at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:190)
	at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:141)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:466)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

it can be fixed by removing the servlet-api 2.5 jar from the project's dependencies.

The better approach, however, is to remove the dependency that drags it in from pom.xml, namely:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
    </dependency>
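Alternatively, if hadoop-client needs to stay in the project, the conflicting servlet classes can be excluded from it. The sketch below assumes the duplicate classes arrive through a transitive javax.servlet:servlet-api 2.5 dependency of the Hadoop artifacts; the same exclusion can be applied to hadoop-common and hadoop-hdfs if necessary:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
      <exclusions>
        <!-- Keep the signed servlet 3.x classes bundled with Spark's Jetty -->
        <exclusion>
          <groupId>javax.servlet</groupId>
          <artifactId>servlet-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>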

The final dependency section of pom.xml is:

<dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.7.1</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>1.5.2</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_2.10</artifactId>
      <version>1.5.2</version>
    </dependency>

    <dependency>
      <groupId>com.databricks</groupId>
      <artifactId>spark-avro_2.10</artifactId>
      <version>2.0.1</version>
    </dependency>

  </dependencies>
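Note that every Spark artifact above uses the _2.10 suffix, so the project's Scala SDK should be a 2.10.x release. As an optional cleanup (only a sketch, with example values), the repeated version numbers can be pulled into a properties block so the Spark artifacts, currently mixed between 1.5.1 and 1.5.2, stay consistent:

  <properties>
    <hadoop.version>2.7.1</hadoop.version>
    <spark.version>1.5.1</spark.version>
  </properties>

  <!-- Each dependency can then reference the shared version, e.g.: -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
  </dependency>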

  

If the following error is reported, it is not actually a problem; the program still runs:

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
	at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:760)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:633)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:311)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
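If you want to get rid of this message rather than just ignore it, a common workaround on Windows is to download a winutils.exe matching your Hadoop version, place it under a directory's bin folder, and point Hadoop at that directory before creating the SparkContext (or set the HADOOP_HOME environment variable to the same directory). The path below is only an example:

// Assumed layout: D:\hadoop\bin\winutils.exe — adjust the path to your machine.
System.setProperty("hadoop.home.dir", "D:\\hadoop")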

The normal output finally looks like this:

Hello World!
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/19 11:21:29 INFO SparkContext: Running Spark version 1.5.1
16/09/19 11:21:29 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
	at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:760)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:633)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:311)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
16/09/19 11:21:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/19 11:21:30 INFO SecurityManager: Changing view acls to: pc
16/09/19 11:21:30 INFO SecurityManager: Changing modify acls to: pc
16/09/19 11:21:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(pc); users with modify permissions: Set(pc)
16/09/19 11:21:30 INFO Slf4jLogger: Slf4jLogger started
16/09/19 11:21:31 INFO Remoting: Starting remoting
16/09/19 11:21:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.51.143:52500]
16/09/19 11:21:31 INFO Utils: Successfully started service 'sparkDriver' on port 52500.
16/09/19 11:21:31 INFO SparkEnv: Registering MapOutputTracker
16/09/19 11:21:31 INFO SparkEnv: Registering BlockManagerMaster
16/09/19 11:21:31 INFO DiskBlockManager: Created local directory at C:\Users\pc\AppData\Local\Temp\blockmgr-f9ea7f8c-68f9-4f9b-a31e-b87ec2e702a4
16/09/19 11:21:31 INFO MemoryStore: MemoryStore started with capacity 966.9 MB
16/09/19 11:21:31 INFO HttpFileServer: HTTP File server directory is C:\Users\pc\AppData\Local\Temp\spark-64cccfb4-46c8-4266-92c1-14cfc6aa2cb3\httpd-5993f955-0d92-4233-b366-c9a94f7122bc
16/09/19 11:21:31 INFO HttpServer: Starting HTTP Server
16/09/19 11:21:31 INFO Utils: Successfully started service 'HTTP file server' on port 52501.
16/09/19 11:21:31 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/19 11:21:31 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/19 11:21:31 INFO SparkUI: Started SparkUI at http://192.168.51.143:4040
16/09/19 11:21:31 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/09/19 11:21:31 INFO Executor: Starting executor ID driver on host localhost
16/09/19 11:21:31 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52520.
16/09/19 11:21:31 INFO NettyBlockTransferService: Server created on 52520
16/09/19 11:21:31 INFO BlockManagerMaster: Trying to register BlockManager
16/09/19 11:21:31 INFO BlockManagerMasterEndpoint: Registering block manager localhost:52520 with 966.9 MB RAM, BlockManagerId(driver, localhost, 52520)
16/09/19 11:21:31 INFO BlockManagerMaster: Registered BlockManager
16/09/19 11:21:31 INFO SparkContext: Invoking stop() from shutdown hook
16/09/19 11:21:32 INFO SparkUI: Stopped Spark web UI at http://192.168.51.143:4040
16/09/19 11:21:32 INFO DAGScheduler: Stopping DAGScheduler
16/09/19 11:21:32 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/19 11:21:32 INFO MemoryStore: MemoryStore cleared
16/09/19 11:21:32 INFO BlockManager: BlockManager stopped
16/09/19 11:21:32 INFO BlockManagerMaster: BlockManagerMaster stopped
16/09/19 11:21:32 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/09/19 11:21:32 INFO SparkContext: Successfully stopped SparkContext
16/09/19 11:21:32 INFO ShutdownHookManager: Shutdown hook called
16/09/19 11:21:32 INFO ShutdownHookManager: Deleting directory C:\Users\pc\AppData\Local\Temp\spark-64cccfb4-46c8-4266-92c1-14cfc6aa2cb3

Process finished with exit code 0

At this point, the development environment is fully set up.
