Dream Spark ------ Spark on YARN: the YARN configuration

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>sdb-ali-hangzhou-dp1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>sdb-ali-hangzhou-dp1:21188</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <!-- The key must match the aux-service name declared above (mapreduce_shuffle) -->
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
<!-- This setting uploads the generated log files to HDFS but deletes the local copies, so they no longer show up in the YARN web UI; for that reason it is not enabled here -->
<!--<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/user/yarnlogs</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>-1</value>
</property>
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>-1</value>
</property>-->
<!-- NodeManager logs are kept for 7 days (604800 seconds) before being deleted -->
<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>604800</value>
</property>
<!--<property>
<name>yarn.application.classpath</name>
<value>/data/kefu3/application/easemobbigdata_jar/libs/*,$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>-->
<!-- The following is the YARN HA configuration; it is not currently in use -->
<!-- Site specific YARN configuration properties -->
<!--<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
 </property>
 <property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>nn1,nn2</value>
 </property>
 <property>
  <name>yarn.resourcemanager.hostname.nn1</name>
  <value>sdb-ali-hangzhou-dp1</value>
 </property>
 <property>
  <name>yarn.resourcemanager.hostname.nn2</name>
  <value>sdb-ali-hangzhou-dp2</value>
 </property>
 <property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
 </property>
 <property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
 </property>
 <property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>sdb-ali-hangzhou-dp1:2181,sdb-ali-hangzhou-dp2:2181</value>
  <description>For multiple zk services, separate them with comma</description>
 </property>
 <property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-ha</value>
 </property>
 <property>
  <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
  <value>true</value>
  <description>Enable automatic failover; By default, it is enabled only when HA is enabled.</description>
 </property>
 <property>
    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
    <value>/yarn-leader-election</value>
  <description>Optional setting. The default value is /yarn-leader-election</description>
 </property>
 <property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
 </property>
 <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
 <property>
  <name>yarn.resourcemanager.address.nn1</name>
  <value>sdb-ali-hangzhou-dp1:21132</value>
 </property>
 <property>
  <name>yarn.resourcemanager.address.nn2</name>
  <value>sdb-ali-hangzhou-dp2:21132</value>
 </property>
 <property>
  <name>yarn.resourcemanager.scheduler.address.nn1</name>
  <value>sdb-ali-hangzhou-dp1:21130</value>
 </property>
 <property>
  <name>yarn.resourcemanager.scheduler.address.nn2</name>
  <value>sdb-ali-hangzhou-dp2:21130</value>
 </property>
 <property>
  <name>yarn.resourcemanager.resource-tracker.address.nn1</name>
  <value>sdb-ali-hangzhou-dp1:21131</value>
 </property>
 <property>
  <name>yarn.resourcemanager.resource-tracker.address.nn2</name>
  <value>sdb-ali-hangzhou-dp2:21131</value>
 </property>
 <property>
  <name>yarn.resourcemanager.webapp.address.nn1</name>
  <value>sdb-ali-hangzhou-dp1:21188</value>
 </property>
 <property>
  <name>yarn.resourcemanager.webapp.address.nn2</name>
  <value>sdb-ali-hangzhou-dp2:21188</value>
 </property>
 <property>
 <name>yarn.nodemanager.resource.memory-mb</name>
 <value>10240</value>
 </property>
 <property>
 <name>yarn.scheduler.minimum-allocation-mb</name>
 <value>2048</value>
 </property>
 <property>
 <name>yarn.scheduler.maximum-allocation-mb</name>
 <value>10240</value>
 </property>
 <property>
 <name>yarn.app.mapreduce.am.resource.mb</name>
 <value>4096</value>
 </property>
 <property>
 <name>yarn.app.mapreduce.am.command-opts</name>
 <value>-Xmx1024m</value>
 </property>-->
</configuration>
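To sanity-check a configuration like the one above, it can help to read it back programmatically. The sketch below is a minimal example (not part of the original setup): it parses a Hadoop-style XML config with the standard-library `xml.etree.ElementTree` and pulls out a few of the active properties. The inline sample string mirrors the uncommented settings above.

```python
import xml.etree.ElementTree as ET

# Sample mirroring the active (uncommented) properties in the config above.
YARN_SITE = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>sdb-ali-hangzhou-dp1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>sdb-ali-hangzhou-dp1:21188</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
"""

def load_properties(xml_text):
    """Return {name: value} for every <property> in a Hadoop-style config."""
    root = ET.fromstring(xml_text)
    return {
        prop.findtext("name"): prop.findtext("value")
        for prop in root.iter("property")
    }

props = load_properties(YARN_SITE)
print(props["yarn.resourcemanager.webapp.address"])  # sdb-ali-hangzhou-dp1:21188
print(int(props["yarn.nodemanager.log.retain-seconds"]) // 86400, "days")  # 7 days
```

The same `load_properties` helper works on the real file via `open("yarn-site.xml").read()`, which makes it easy to confirm, for example, that the log-retention value really is 7 days.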

  
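The memory values in the commented-out block above (`yarn.nodemanager.resource.memory-mb` = 10240, `yarn.scheduler.minimum-allocation-mb` = 2048, `yarn.scheduler.maximum-allocation-mb` = 10240) imply some simple container arithmetic. As a rough sketch under those values: YARN rounds every container request up to a multiple of the scheduler minimum and caps it at the maximum allocation.

```python
# Container arithmetic for the memory values in the commented-out block above.
NM_MEMORY_MB = 10240   # yarn.nodemanager.resource.memory-mb
MIN_ALLOC_MB = 2048    # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB = 10240   # yarn.scheduler.maximum-allocation-mb

def container_size(request_mb):
    """Round a request up to a multiple of the scheduler minimum,
    capped at the maximum allocation (YARN's allocation behavior)."""
    rounded = -(-request_mb // MIN_ALLOC_MB) * MIN_ALLOC_MB  # ceiling division
    return min(rounded, MAX_ALLOC_MB)

# A 3 GB request is rounded up to 4 GB:
print(container_size(3072))               # 4096
# At the minimum container size, one NodeManager can host at most:
print(NM_MEMORY_MB // MIN_ALLOC_MB)       # 5 containers
```

For Spark on YARN this matters when choosing `--executor-memory`: the executor heap plus the YARN memory overhead must fit within the maximum allocation, or the ResourceManager will reject the container request.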

Date: 2025-01-05 10:43:23
