Deploying Spark on Hadoop CDH5
Date: 2016-05-29 23:24  Source: linux.it.net.cn  Author: IT
Spark is an open-source cluster computing system based on in-memory computation, designed to make data analysis faster. Spark is an open-source cluster computing environment similar to Hadoop, but there are some useful differences between the two that make Spark superior for certain workloads: Spark keeps distributed datasets in memory, so in addition to providing interactive queries it can also optimize iterative workloads. Although Spark was created to support iterative jobs on distributed datasets, in practice it is a complement to Hadoop and can run in parallel on the Hadoop file system.
CDH5 Spark Installation
1 Spark packages
spark-core: the core Spark package
spark-worker: scripts for managing the spark-worker service
spark-master: scripts for managing the spark-master service
spark-python: the Python client for Spark
2 Spark runtime dependencies
CDH5
JDK
3 Installing Spark
apt-get install spark-core spark-master spark-worker spark-python
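On RHEL/CentOS systems with the CDH5 yum repository configured, the equivalent should be the following (a sketch, assuming the same package names are published in the yum repository):
yum install spark-core spark-master spark-worker spark-python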
4 Configuring and Running Spark (Standalone Mode)
1 Configuring Spark (/etc/spark/conf/spark-env.sh)
SPARK_MASTER_IP, to bind the master to a different IP address or hostname
SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
SPARK_WORKER_CORES, to set the number of cores to use on this machine
SPARK_WORKER_MEMORY, to set how much memory to use (for example 1000MB, 2GB)
SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT
SPARK_WORKER_INSTANCES, to set the number of worker processes per node
SPARK_WORKER_DIR, to set the working directory of worker processes
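A minimal spark-env.sh sketch with example values (the hostname master01 and the core/memory settings below are placeholders, not values prescribed by this article):
# /etc/spark/conf/spark-env.sh -- example values only
export SPARK_MASTER_IP=master01           # hypothetical master hostname
export SPARK_MASTER_PORT=7077             # default standalone master port
export SPARK_MASTER_WEBUI_PORT=18080      # master web UI port
export SPARK_WORKER_CORES=2               # cores this worker offers
export SPARK_WORKER_MEMORY=2g             # memory this worker offers
export SPARK_WORKER_INSTANCES=1           # worker processes on this node
export SPARK_WORKER_DIR=/var/run/spark/work   # working directory of worker processes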
2 Starting, Stopping, and Running Spark
service spark-master start
service spark-worker start
There is also a web UI at <master_host>:18080.
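To stop the daemons, the matching service commands can be used (assuming the same init scripts as above):
service spark-master stop
service spark-worker stop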
5 Running Spark Applications
1 A Spark application can run in one of three modes:
Standalone mode: the default mode.
YARN client mode: the application is submitted to YARN and the Spark driver runs in the Spark client process.
YARN cluster mode: the application is submitted to YARN and the Spark driver runs inside the ApplicationMaster.
2 Running SparkPi in Standalone Mode
source /etc/spark/conf/spark-env.sh
CLASSPATH=$CLASSPATH:/your/additional/classpath
$SPARK_HOME/bin/spark-class [<spark-config-options>] \
org.apache.spark.examples.SparkPi \
spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT 10
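For example, with the placeholders filled in (node1 is a hypothetical master hostname, 7077 the default master port):
$SPARK_HOME/bin/spark-class org.apache.spark.examples.SparkPi spark://node1:7077 10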
Spark configuration options: http://spark.apache.org/docs/0.9.0/configuration.html
3 Running SparkPi in YARN Client Mode
In both YARN client and YARN cluster mode, first upload the Spark assembly JAR to HDFS and then set the SPARK_JAR environment variable.
source /etc/spark/conf/spark-env.sh
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put $SPARK_HOME/assembly/lib/spark-assembly_*.jar /user/spark/share/lib/spark-assembly.jar
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
source /etc/spark/conf/spark-env.sh
SPARK_CLASSPATH=/your/additional/classpath
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
$SPARK_HOME/bin/spark-class [<spark-config-options>] \
org.apache.spark.examples.SparkPi yarn-client 10
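Putting it together with example values (namenode:8020 is a hypothetical NameNode address; substitute your own):
source /etc/spark/conf/spark-env.sh
export SPARK_JAR=hdfs://namenode:8020/user/spark/share/lib/spark-assembly.jar   # hypothetical NameNode address
$SPARK_HOME/bin/spark-class org.apache.spark.examples.SparkPi yarn-client 10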
4 Running SparkPi in YARN Cluster Mode
source /etc/spark/conf/spark-env.sh
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
APP_JAR=$SPARK_HOME/examples/lib/spark-examples_<version>.jar
$SPARK_HOME/bin/spark-class org.apache.spark.deploy.yarn.Client \
--jar $APP_JAR \
--class org.apache.spark.examples.SparkPi \
--args yarn-standalone \
--args 10
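In YARN cluster mode the SparkPi output goes to the ApplicationMaster logs rather than the console; with YARN log aggregation enabled it can be retrieved afterwards with:
yarn logs -applicationId <application_id>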