Spark (50): Monitoring Spark Executor JVMs with JVisualVM

Introduction

On Windows, JVisualVM ships with the JDK and lives at ${JAVA_HOME}/bin/jvisualvm.exe. It supports two ways of connecting to a JVM (local or remote): jstatd and JMX.

jstatd (Java Virtual Machine jstat Daemon): a daemon that exposes CPU, memory, thread, and similar metrics of JVMs on a remote server.

JMX (Java Management Extensions) is a framework for adding management capabilities to applications, devices, and systems. It works across heterogeneous operating systems, hardware architectures, and network transport protocols, making it possible to build flexible, seamlessly integrated applications for system, network, and service management.

Note: my attempts with jstatd were unsuccessful, so to avoid misleading anyone I will not cover it here.

JMX monitoring

The standard configuration:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=<ip>
-Dcom.sun.management.jmxremote.port=<port>
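
As a quick sanity check outside Spark, any JVM can be started with these flags and then attached to from JVisualVM. The following is only a sketch; MyApp, the ip 10.0.0.1, and port 9010 are placeholder values:

java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Djava.rmi.server.hostname=10.0.0.1 \
     -Dcom.sun.management.jmxremote.port=9010 \
     MyApp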

Adding the JMX configuration:

To monitor executors in Spark, JMX must be configured before the application starts. There are three ways to do this:

1) Set those parameters in spark-defaults.conf (see the sketch after this list)

2) Set them in spark-env.sh, in the Java options of the master and workers (also sketched below)

3) Pass them on the command line when submitting with spark-submit
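
For reference, here is a minimal sketch of options 1) and 2). The property and variable names are the standard Spark ones; the values simply mirror the flags used in the spark-submit command below and are not verified in this setup:

# spark-defaults.conf (option 1): applies to every application submitted with these defaults
spark.executor.extraJavaOptions  -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=0 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

# spark-env.sh (option 2): Java options for the standalone master and worker daemons
SPARK_MASTER_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=0 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
SPARK_WORKER_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=0 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"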

Here, option 3) is used, passing the configuration at spark-submit time:

spark-submit --class myTest.KafkaWordCount --master yarn --deploy-mode cluster \
  --conf "spark.executor.extraJavaOptions=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=0 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" \
  --verbose --executor-memory 1G --total-executor-cores 6 \
  /hadoop/spark/app/spark/20151223/testSpark.jar *.*.*.*:* test3 wordcount 4 kafkawordcount3 checkpoint4

Notes:

1) Do not pin a specific ip and port: at runtime Spark may place several container processes on the same node, and if they all tried to bind the same JMX port, the application submitted via spark-submit would fail to start.

2) Because no explicit ip and port are specified (jmxremote.port=0), a free port is assigned automatically when each executor starts.

3) The three configuration methods above may differ in monitoring scope: spark-submit affects only the one application, while spark-env.sh likely applies to every executor on the node [not verified]. Keep this in mind.

Finding the JMX-assigned port

Use yarn applicationattempt -list <applicationId> to find the application attempt id:

[root@CDH-143 bin]# yarn applicationattempt -list application_1559203334026_0015
19/06/01 17:57:18 INFO client.RMProxy: Connecting to ResourceManager at CDH-143/10.dx.dx.143:8032
Total number of application attempts :1
         ApplicationAttempt-Id                 State                        AM-Container-Id                            Tracking-URL
appattempt_1559203334026_0015_000001                 RUNNING    container_1559203334026_0015_01_000001  http://CDH-143:8088/proxy/application_1559203334026_0015/
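
If this lookup needs to be scripted, the attempt id can be pulled out of the command output; a sketch, assuming the id format shown above:

yarn applicationattempt -list application_1559203334026_0015 2>/dev/null | grep -oE 'appattempt_[0-9_]+'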

Use yarn container -list <applicationAttemptId> to get the list of container ids:

[root@CDH-143 bin]# yarn container -list appattempt_1559203334026_0015_000001
19/06/01 17:57:52 INFO client.RMProxy: Connecting to ResourceManager at CDH-143/10.dx.dx.143:8032
Total number of containers :16
                  Container-Id            Start Time             Finish Time                   State                    Host                                LOG-URL
container_1559203334026_0015_01_000012  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000012/dx
container_1559203334026_0015_01_000013  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000013/dx
container_1559203334026_0015_01_000010  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000010/dx
container_1559203334026_0015_01_000011  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000011/dx
container_1559203334026_0015_01_000016  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000016/dx
container_1559203334026_0015_01_000014  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000014/dx
container_1559203334026_0015_01_000015  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-146:8041    http://CDH-146:8042/node/containerlogs/container_1559203334026_0015_01_000015/dx
container_1559203334026_0015_01_000004  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000004/dx
container_1559203334026_0015_01_000005  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000005/dx
container_1559203334026_0015_01_000002  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000002/dx
container_1559203334026_0015_01_000003  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000003/dx
container_1559203334026_0015_01_000008  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000008/dx
container_1559203334026_0015_01_000009  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000009/dx
container_1559203334026_0015_01_000006  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000006/dx
container_1559203334026_0015_01_000007  Sat Jun 01 13:27:52 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000007/dx
container_1559203334026_0015_01_000001  Sat Jun 01 13:27:38 +0800 2019                   N/A                 RUNNING            CDH-142:8041    http://CDH-142:8042/node/containerlogs/container_1559203334026_0015_01_000001/dx

On the node where the target executor runs, use the following command to find the running process and its pid:

[root@CDH-146 ~]# ps -axu | grep container_1559203334026_0015_01_000013
yarn      8844  0.0  0.0 113144  1496 ?        S    13:27   0:00 bash /data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/default_container_executor.sh
yarn      8857  0.0  0.0 113280  1520 ?        Ss   13:27   0:00 /bin/bash -c /usr/java/jdk1.8.0_171-amd64/bin/java -server -Xmx6144m '-Dcom.sun.management.jmxremote' '-Dcom.sun.management.jmxremote.port=0' '-Dcom.sun.management.jmxremote.authenticate=false' '-Dcom.sun.management.jmxremote.ssl=false' -Djava.io.tmpdir=/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/tmp '-Dspark.network.timeout=10000000' '-Dspark.driver.port=47564' '-Dspark.port.maxRetries=32' -Dspark.yarn.app.container.log.dir=/data6/yarn/container-logs/application_1559203334026_0015/container_1559203334026_0015_01_000013 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@CDH-143:47564 --executor-id 12 --hostname CDH-146 --cores 2 --app-id application_1559203334026_0015 --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/__app__.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/streaming-dx-perf-3.0.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/dx-common-3.0.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/spark-sql-kafka-0-10_2.11-2.4.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/spark-avro_2.11-3.2.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/shc-core-1.1.2-2.2-s_2.11-SNAPSHOT.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/rocksdbjni-5.17.2.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/kafka-clients-0.10.0.1.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/elasticsearch-spark-20_2.11-6.4.1.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/dx_Spark_State_Store_Plugin-1.0-SNAPSHOT.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/bijection-core_2.11-0.9.5.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/bijection-avro_2.11-0.9.5.jar 1>/data6/yarn/container-logs/application_1559203334026_0015/container_1559203334026_0015_01_000013/stdout 2>/data6/yarn/container-logs/application_1559203334026_0015/container_1559203334026_0015_01_000013/stderr
yarn      9000  143  3.3 8736712 4379648 ?     Sl   13:27  24:35 /usr/java/jdk1.8.0_171-amd64/bin/java -server -Xmx6144m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=0 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.io.tmpdir=/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/tmp -Dspark.network.timeout=10000000 -Dspark.driver.port=47564 -Dspark.port.maxRetries=32 -Dspark.yarn.app.container.log.dir=/data6/yarn/container-logs/application_1559203334026_0015/container_1559203334026_0015_01_000013 -XX:OnOutOfMemoryError=kill %p org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@CDH-143:47564 --executor-id 12 --hostname CDH-146 --cores 2 --app-id application_1559203334026_0015 --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/__app__.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/dx-domain-perf-3.0.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/dx-common-3.0.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/spark-sql-kafka-0-10_2.11-2.4.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/spark-avro_2.11-3.2.0.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/shc-core-1.1.2-2.2-s_2.11-SNAPSHOT.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/rocksdbjni-5.17.2.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/kafka-clients-0.10.0.1.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/elasticsearch-spark-20_2.11-6.4.1.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/dx_Spark_State_Store_Plugin-1.0-SNAPSHOT.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/bijection-core_2.11-0.9.5.jar --user-class-path file:/data6/yarn/nm/usercache/dx/appcache/application_1559203334026_0015/container_1559203334026_0015_01_000013/bijection-avro_2.11-0.9.5.jar
root     25939  0.0  0.0 112780   956 pts/1    S+   13:45   0:00 grep --color=auto container_1559203334026_0015_01_000013
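
Among the matches, the java process (pid 9000) is the executor itself; the others are its bash wrappers and the grep command. A sketch for extracting the pid directly, filtering on the short command name so the wrappers are excluded:

ps -eo pid,comm,args | awk '$2 == "java" && /container_1559203334026_0015_01_000013/ {print $1}'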

Then use that pid (9000 here) to find the JMX port:

[root@CDH-146 ~]# sudo netstat -antp | grep 9000
tcp        0      0 10.dx.dx.146:9000      0.0.0.0:*               LISTEN      2642/python2.7
tcp6       0      0 :::48169                :::*                    LISTEN      9000/java
tcp6       0      0 :::37692                :::*                    LISTEN      9000/java
tcp6       0      0 10.dx.dx.146:52710     :::*                    LISTEN      9000/java
tcp6       0      0 10.dx.dx.146:55535     10.dx.dx.142:38397     ESTABLISHED 9000/java
tcp6   64088      0 10.dx.dx.146:45410     10.206.186.35:9092      ESTABLISHED 9000/java
tcp6       0      0 10.dx.dx.146:60259     10.dx.dx.143:47564     ESTABLISHED 9000/java           

From this output, the JMX port is likely 48169 or 37692 (the 10.dx.dx.146:9000 listener belongs to a different process, python2.7, that merely matched the grep on "9000"). Trying each candidate in turn quickly connects to the corresponding Spark executor.
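
To narrow the candidates without guessing, the LISTEN sockets owned by that pid can be filtered out of the same netstat output; a sketch:

sudo netstat -antp | awk '$6 == "LISTEN" && $NF ~ /^9000\//'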

Adding the monitor in JVisualVM

On the local Windows machine, find the JDK directory, locate ${JAVA_HOME}/bin/jvisualvm.exe, and run it. After it starts, right-click "Remote" and add a JMX connection.

Fill in the ip of the node hosting the executor, together with the JMX port found above (host:port).

Monitoring can then be started.

Original post: https://www.cnblogs.com/yy3b2007com/p/10960588.html
