Configuring IPython Notebook to Run Python Spark Programs

1.1. Installing Anaconda

The official Anaconda site is https://www.anaconda.com; download the installer for your platform.

1.1.1. Download Anaconda

$ cd /opt/local/src/
$ wget -c https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh

1.1.2. Install Anaconda

# -b runs the installer in batch (non-interactive) mode; -p sets the install prefix
$ bash Anaconda3-5.2.0-Linux-x86_64.sh -p /opt/local/anaconda -b

1.1.3. Configure the Anaconda environment variables

  • Configure the environment variables
$ tail -n 8 ~/.bashrc

# Anaconda3
export ANACONDA_PATH=/opt/local/anaconda
export PATH=$ANACONDA_PATH/bin:$PATH

# PySpark
export PYSPARK_DRIVER_PYTHON=$ANACONDA_PATH/bin/ipython
export PYSPARK_PYTHON=$ANACONDA_PATH/bin/python
  • Apply the environment variables
$ source ~/.bashrc
  • Verify
$ python --version
Python 3.6.5 :: Anaconda, Inc.

1.2. Using PySpark in IPython Notebook

1.2.1. Create a working directory

$ mkdir ~/ipynotebook
$ cd ~/ipynotebook

1.2.2. Run PySpark from IPython Notebook

  • Start IPython Notebook
$ PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark
[TerminalIPythonApp] WARNING | Subcommand `ipython notebook` is deprecated and will be removed in future versions.
[TerminalIPythonApp] WARNING | You likely want to use `jupyter notebook` in the future
[I 14:21:56.030 NotebookApp] JupyterLab beta preview extension loaded from /opt/local/anaconda/lib/python3.6/site-packages/jupyterlab
[I 14:21:56.030 NotebookApp] JupyterLab application directory is /opt/local/anaconda/share/jupyter/lab
[I 14:21:56.037 NotebookApp] Serving notebooks from local directory: /home/hadoop/ipynotebook
[I 14:21:56.037 NotebookApp] 0 active kernels
[I 14:21:56.037 NotebookApp] The Jupyter Notebook is running at:
[I 14:21:56.037 NotebookApp] http://localhost:8888/?token=5b68718fdabe4488decf07703a3bd76bf46d5dc733a6617d
[I 14:21:56.037 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 14:21:56.040 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=5b68718fdabe4488decf07703a3bd76bf46d5dc733a6617d
[I 14:21:56.683 NotebookApp] Accepting one-time-token-authenticated connection from 127.0.0.1

The default browser opens http://localhost:8888 automatically.

  • Write a program in IPython Notebook
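As a quick smoke test, a new notebook can run the minimal sketch below. pyspark pre-creates the SparkContext as sc, so no further setup is needed; the outputs shown are what one would expect in local mode.

In [1]: sc.master                                 # master the shell was started with
Out[1]: 'local[*]'

In [2]: rdd = sc.parallelize(range(100))          # distribute 0..99 across local cores
In [3]: rdd.filter(lambda x: x % 2 == 0).count()  # count the even numbers
Out[3]: 50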

1.2.3. Run PySpark on Hadoop YARN from IPython Notebook

  • Start IPython Notebook
$ PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" HADOOP_CONF_DIR=/opt/local/hadoop/etc/hadoop MASTER=yarn-client pyspark
[TerminalIPythonApp] WARNING | Subcommand `ipython notebook` is deprecated and will be removed in future versions.
[TerminalIPythonApp] WARNING | You likely want to use `jupyter notebook` in the future
[I 14:50:48.149 NotebookApp] JupyterLab beta preview extension loaded from /opt/local/anaconda/lib/python3.6/site-packages/jupyterlab
[I 14:50:48.149 NotebookApp] JupyterLab application directory is /opt/local/anaconda/share/jupyter/lab
[I 14:50:48.157 NotebookApp] Serving notebooks from local directory: /home/hadoop/ipynotebook
[I 14:50:48.157 NotebookApp] 0 active kernels
[I 14:50:48.157 NotebookApp] The Jupyter Notebook is running at:
[I 14:50:48.157 NotebookApp] http://localhost:8888/?token=8fe2c599dc39a23104dd6a058a0e05de3d9e88cfeda71b45
[I 14:50:48.157 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 14:50:48.161 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=8fe2c599dc39a23104dd6a058a0e05de3d9e88cfeda71b45
  • Write a program in IPython Notebook
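A minimal sketch of a notebook cell for this mode. The HDFS path below is hypothetical and assumes a text file was already uploaded there (e.g. with hdfs dfs -put); because HADOOP_CONF_DIR is set, a bare path resolves against the cluster's default filesystem.

In [1]: sc.master                                     # should report 'yarn' (or 'yarn-client' on older releases)
In [2]: lines = sc.textFile("/user/hadoop/test.txt")  # hypothetical HDFS path
In [3]: lines.count()                                 # triggers a job, visible under the YARN application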

  • Check the application in YARN
$ yarn application -list
18/06/24 14:53:06 INFO client.RMProxy: Connecting to ResourceManager at node/192.168.20.10:8032
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):1
                Application-Id      Application-Name        Application-Type          User       Queue               State         Final-State         Progress                        Tracking-URL
application_1529805293111_0001          PySparkShell                   SPARK        hadoop     default             RUNNING           UNDEFINED              10%                    http://node:4040

1.2.4. Run PySpark on Spark Standalone from IPython Notebook

  • Start Spark Standalone
$ /opt/local/spark/sbin/start-master.sh

$ /opt/local/spark/sbin/start-slaves.sh

$ jps
13249 Jps
13027 Master
13188 Worker
  • Start IPython Notebook
$ PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=spark://node:7077 pyspark --num-executors 1 --total-executor-cores 1 --executor-memory 512m
[TerminalIPythonApp] WARNING | Subcommand `ipython notebook` is deprecated and will be removed in future versions.
[TerminalIPythonApp] WARNING | You likely want to use `jupyter notebook` in the future
[I 15:11:59.211 NotebookApp] JupyterLab beta preview extension loaded from /opt/local/anaconda/lib/python3.6/site-packages/jupyterlab
[I 15:11:59.212 NotebookApp] JupyterLab application directory is /opt/local/anaconda/share/jupyter/lab
[I 15:11:59.230 NotebookApp] Serving notebooks from local directory: /home/hadoop/ipynotebook
[I 15:11:59.230 NotebookApp] 0 active kernels
[I 15:11:59.230 NotebookApp] The Jupyter Notebook is running at:
[I 15:11:59.230 NotebookApp] http://localhost:8888/?token=1972eb523fea28d541985df7ed2ce55cc2bfada7e31eb9ea
[I 15:11:59.230 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:11:59.233 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=1972eb523fea28d541985df7ed2ce55cc2bfada7e31eb9ea
[I 15:12:02.594 NotebookApp] Accepting one-time-token-authenticated connection from 127.0.0.1
  • Write a program in IPython Notebook
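Again a minimal sketch; the cluster resources match the flags above (one executor, one core, 512 MB), which is plenty for a toy aggregation.

In [1]: sc.master
Out[1]: 'spark://node:7077'

In [2]: pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
In [3]: pairs.reduceByKey(lambda x, y: x + y).collect()   # element order may vary
Out[3]: [('a', 4), ('b', 2)]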

  • View the Spark Standalone web UI (by default on port 8080 of the master, e.g. http://node:8080)

1.3. Summary

Before starting IPython Notebook, first change into its working directory, e.g. ~/ipynotebook; adjust the path to your actual setup.

1.3.1. Start IPython Notebook in local mode

PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark
# or, equivalently:
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark --master local[*]

1.3.2. Start IPython Notebook on Hadoop YARN

PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" HADOOP_CONF_DIR=/opt/local/hadoop/etc/hadoop MASTER=yarn-client pyspark
# or, equivalently (the --master yarn --deploy-mode client form is preferred on Spark 2.x):
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" HADOOP_CONF_DIR=/opt/local/hadoop/etc/hadoop pyspark --master yarn --deploy-mode client

1.3.3. Start IPython Notebook on Spark Standalone

PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=spark://node:7077 pyspark --num-executors 1 --total-executor-cores 1 --executor-memory 512m 

Original article: http://blog.51cto.com/balich/2132206

editplus3是一款不错的编辑器,他可以编译,运行java,php等各种程序,现把他运行Python程序的方法贴出来,首先得安装python,然后打开editplug3,工具——配置用户工具——组名称随便写个后点添加选应用程序,菜单文本:python命令:C:\Python31\python.exe(你自己实际安装python的目录)参数:选择向下的箭头--“文件路径”初始目录:“文件目录”捕获输出:开启确定即可,然后运行程序的时候只需点工具--python就开始运行了 editplus3运