Compiling, Installing, and Running Spark

1. Environment

Mac OS X 10.10.3
Java  1.7.0_71
Spark 1.4.0
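
You can confirm the installed Java version before building:

java -version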

2. Compiling and Installing

tar -zxvf spark-1.4.0.tgz
cd spark-1.4.0
./sbt/sbt assembly

Note: if you have compiled before, you need to run ./sbt/sbt clean to clean up before recompiling.
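
If the build succeeds, the assembly jar should end up under assembly/target/scala-2.10/ (the exact file name depends on the Hadoop version the build targets, so treat the pattern below as an assumption). Listing it is a quick way to verify the build:

ls assembly/target/scala-2.10/spark-assembly-1.4.0-hadoop*.jar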

3. Running

adeMacBook-Pro:spark-1.4.0 apple$ ./bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/14 11:32:25 INFO SecurityManager: Changing view acls to: apple
15/06/14 11:32:25 INFO SecurityManager: Changing modify acls to: apple
15/06/14 11:32:25 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(apple); users with modify permissions: Set(apple)
15/06/14 11:32:25 INFO HttpServer: Starting HTTP Server
15/06/14 11:32:26 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/14 11:32:26 INFO AbstractConnector: Started SocketConnector@0.0.0.0:61566
15/06/14 11:32:26 INFO Utils: Successfully started service 'HTTP class server' on port 61566.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_71)
Type in expressions to have them evaluated.
Type :help for more information.
15/06/14 11:32:31 INFO SparkContext: Running Spark version 1.4.0
15/06/14 11:32:31 INFO SecurityManager: Changing view acls to: apple
15/06/14 11:32:31 INFO SecurityManager: Changing modify acls to: apple
15/06/14 11:32:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(apple); users with modify permissions: Set(apple)
15/06/14 11:32:31 INFO Slf4jLogger: Slf4jLogger started
15/06/14 11:32:31 INFO Remoting: Starting remoting
15/06/14 11:32:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.106:61567]
15/06/14 11:32:32 INFO Utils: Successfully started service 'sparkDriver' on port 61567.
15/06/14 11:32:32 INFO SparkEnv: Registering MapOutputTracker
15/06/14 11:32:32 INFO SparkEnv: Registering BlockManagerMaster
15/06/14 11:32:32 INFO DiskBlockManager: Created local directory at /private/var/folders/s3/llfgz_mx47572r5b4pbk7xm80000gp/T/spark-cf6feb6b-1464-4d54-89f3-8d97bf15205f/blockmgr-b8410cda-aa29-4069-9406-d6155512cd53
15/06/14 11:32:32 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/14 11:32:32 INFO HttpFileServer: HTTP File server directory is /private/var/folders/s3/llfgz_mx47572r5b4pbk7xm80000gp/T/spark-cf6feb6b-1464-4d54-89f3-8d97bf15205f/httpd-a1838f08-2ccd-42d2-9419-6e91cb6fdfad
15/06/14 11:32:32 INFO HttpServer: Starting HTTP Server
15/06/14 11:32:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/14 11:32:32 INFO AbstractConnector: Started SocketConnector@0.0.0.0:61568
15/06/14 11:32:32 INFO Utils: Successfully started service 'HTTP file server' on port 61568.
15/06/14 11:32:32 INFO SparkEnv: Registering OutputCommitCoordinator
15/06/14 11:32:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/14 11:32:32 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/14 11:32:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/06/14 11:32:32 INFO SparkUI: Started SparkUI at http://192.168.1.106:4040
15/06/14 11:32:32 INFO Executor: Starting executor ID driver on host localhost
15/06/14 11:32:32 INFO Executor: Using REPL class URI: http://192.168.1.106:61566
15/06/14 11:32:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 61569.
15/06/14 11:32:32 INFO NettyBlockTransferService: Server created on 61569
15/06/14 11:32:32 INFO BlockManagerMaster: Trying to register BlockManager
15/06/14 11:32:32 INFO BlockManagerMasterEndpoint: Registering block manager localhost:61569 with 265.4 MB RAM, BlockManagerId(driver, localhost, 61569)
15/06/14 11:32:32 INFO BlockManagerMaster: Registered BlockManager
15/06/14 11:32:33 INFO SparkILoop: Created spark context..
Spark context available as sc.
15/06/14 11:32:33 INFO SparkILoop: Created sql context..
SQL context available as sqlContext.

scala> 
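
With the shell up, a minimal smoke test at the prompt confirms the SparkContext sc is usable; summing 1 through 100 should come back as 5050, and the REPL output should look roughly like the following (interleaved with INFO logging):

scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050

You can also open the web UI reported in the log above, http://192.168.1.106:4040, to watch the job run.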

References:

https://spark.apache.org/docs/latest/
