Initialize CentOS 7

vi /etc/sysconfig/network-scripts/ifcfg-ens33

DEVICE=ens33
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.3.131
GATEWAY=192.168.3.2
NETMASK=255.255.255.0
DNS1=192.168.3.2

hostnamectl set-hostname node-03

Configure environment variables

export JAVA_HOME=/root/apps/jdk1.8.0_202
export HADOOP_HOME=/root/apps/hadoop-2.8.1
export HADOOP_CONF_DIR=/root/apps/hadoop-2.8.1/etc/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Submit a job to the Spark cluster

/root/apps/spark-2.2.3-bin-hadoop2.7/bin/spark-submit --master spark://node-01:7077 --class org.apache.spark.examples.SparkPi /root/apps/spark-2.2.3-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.3.jar 100

HDFS configuration

cat /root/apps/hadoop-2.8.1/etc/hadoop/core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node-01:9000</value>
</property>

cat /root/apps/hadoop-2.8.1/etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/root/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/root/dfs/data</value>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>64m</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

Format the NameNode

hadoop namenode -format

Start HDFS

/root/apps/hadoop-2.8.1/sbin/start-dfs.sh

Test HDFS

hadoop fs -mkdir -p /wordcount/input
hadoop fs -copyFromLocal /home/bduser/data/testData/testWc.txt /wordcount/input/testWc1.txt
hadoop fs -rm -r /wordcount

YARN configuration

cat /root/apps/hadoop-2.8.1/etc/hadoop/yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node-01</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>

Start YARN

/root/apps/hadoop-2.8.1/sbin/start-yarn.sh

Test YARN by opening the ResourceManager web UI (command-line checks are sketched at the end of this section):

http://node-01:8088/cluster

Run the Pi-estimation demo on YARN from the virtual machine

/root/apps/spark-2.2.3-bin-hadoop2.7/bin/spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi --driver-memory 1024m --executor-memory 1024m --total-executor-cores 2 --queue default /root/apps/spark-2.2.3-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.3.jar 100
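
To confirm the daemons actually came up after start-dfs.sh and start-yarn.sh, the checks below are a minimal sketch; they assume the node roles used above (node-01 as NameNode/ResourceManager, node-02/node-03 as workers) and that the PATH exported earlier is in effect.

# On node-01: expect NameNode, SecondaryNameNode and ResourceManager among the JVM processes
jps

# On node-02 / node-03: expect DataNode and NodeManager
jps

# Cluster-wide view: live DataNodes and registered NodeManagers
hdfs dfsadmin -report
yarn node -list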
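
As an optional end-to-end smoke test of HDFS plus YARN before submitting Spark jobs, the bundled MapReduce examples jar can run a word count over the /wordcount/input directory created in the HDFS test above. This is a sketch under the assumption that the Hadoop 2.8.1 tarball was unpacked to /root/apps as shown and that testWc1.txt is still in place (skip the hadoop fs -rm -r step, or re-upload the file first).

# Run the word-count example on YARN; the output directory must not exist yet
hadoop jar /root/apps/hadoop-2.8.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar wordcount /wordcount/input /wordcount/output

# Inspect the result
hadoop fs -cat /wordcount/output/part-r-00000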
Original article: https://www.cnblogs.com/cerofang/p/11881094.html