@hadoopMan 2017-05-18

Spark Installation and Deployment



If you want to learn big data frameworks such as Spark, Hadoop, and Kafka, join QQ group 459898801; if it is full, join the second group, 224209501. Follow-up articles will be published there over time.

1. Setting up the Spark environment

Create four directories and open up their permissions:

    sudo mkdir /opt/modules
    sudo mkdir /opt/softwares
    sudo mkdir /opt/tools
    sudo mkdir /opt/datas
    sudo chmod 777 -R /opt/

1. Install JDK 1.7

First uninstall the bundled JDK:

    rpm -qa | grep java
    sudo rpm -e --nodeps <each bundled Java package listed above>

Then install JDK 1.7 and add it to the environment:

    export JAVA_HOME=/opt/modules/jdk1.7.0_67
    export PATH=$PATH:$JAVA_HOME/bin

2. Install Maven (for building Spark)

Add Maven to the environment:

    export MAVEN_HOME=/usr/local/apache-maven-3.0.5
    export PATH=$PATH:$MAVEN_HOME/bin

3. Install Scala

    export SCALA_HOME=/opt/modules/scala-2.10.4
    export PATH=$PATH:$SCALA_HOME/bin
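The extraction steps for the JDK, Maven, and Scala archives are not shown above. A minimal sketch, assuming the tarballs have already been downloaded into /opt/softwares (the archive file names are illustrative; use whatever you actually downloaded):

    # unpack into /opt/modules (Maven goes to /usr/local to match MAVEN_HOME above)
    tar -zxf /opt/softwares/jdk-7u67-linux-x64.tar.gz -C /opt/modules/
    sudo tar -zxf /opt/softwares/apache-maven-3.0.5-bin.tar.gz -C /usr/local/
    tar -zxf /opt/softwares/scala-2.10.4.tgz -C /opt/modules/

    # after adding the export lines to ~/.bashrc (or /etc/profile), re-source it
    # and verify each tool is picked up from the new locations
    source ~/.bashrc
    java -version
    mvn -version
    scala -version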

4. Configure Maven mirrors

Before building, configure mirrors and DNS to speed up downloads and therefore the build. Open /opt/compileHadoop/apache-maven-3.0.5/conf/settings.xml with Notepad++ (connected to the machine over SFTP) and add:

    <mirror>
        <id>nexus-spring</id>
        <mirrorOf>cdh.repo</mirrorOf>
        <name>spring</name>
        <url>http://repo.spring.io/repo/</url>
    </mirror>
    <mirror>
        <id>nexus-spring2</id>
        <mirrorOf>cdh.releases.repo</mirrorOf>
        <name>spring2</name>
        <url>http://repo.spring.io/repo/</url>
    </mirror>

5. Configure DNS resolvers

    sudo vi /etc/resolv.conf
    # add the following lines:
    nameserver 8.8.8.8
    nameserver 8.8.4.4
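To confirm that name resolution works before kicking off the long build, a quick sanity check (the host name is just the mirror configured above; any external host will do):

    # should resolve and get replies once DNS is configured
    ping -c 2 repo.spring.io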

6. Build Spark

To speed up the build, hard-code the following values in make-distribution.sh so the script does not have to resolve them through Maven on every run:

    VERSION=1.3.0
    SPARK_HADOOP_VERSION=2.6.0-cdh5.4.0
    SPARK_HIVE=1
    #VERSION=$("$MVN" help:evaluate -Dexpression=project.version 2>/dev/null | grep -v "INFO" | tail -n 1)
    #SPARK_HADOOP_VERSION=$("$MVN" help:evaluate -Dexpression=hadoop.version $@ 2>/dev/null\
    #    | grep -v "INFO"\
    #    | tail -n 1)
    #SPARK_HIVE=$("$MVN" help:evaluate -Dexpression=project.activeProfiles -pl sql/hive $@ 2>/dev/null\
    #    | grep -v "INFO"\
    #    | fgrep --count "<id>hive</id>";\
    #    # Reset exit status to 0, otherwise the script stops here if the last grep finds nothing\
    #    # because we use "set -o pipefail"
    #    echo -n)

Run the build command:

    ./make-distribution.sh --tgz -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0-cdh5.4.0 -Phive-0.13.1 -Phive-thriftserver

Removing the following from the Maven invocation inside the script also speeds up repeated builds, since a failed build is not cleaned out and restarted from scratch each time:

    -DskipTests clean package

(Screenshot: Spark build completed successfully)
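After a successful build, make-distribution.sh leaves a .tgz in the source root. A sketch of unpacking it into /opt/modules, assuming the tarball carries the same name as the directory used later in this article:

    # the generated tarball follows the pattern spark-<version>-bin-<hadoop-version>.tgz
    tar -zxf spark-1.3.0-bin-2.6.0-cdh5.4.0.tgz -C /opt/modules/
    cd /opt/modules/spark-1.3.0-bin-2.6.0-cdh5.4.0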

2. Install Hadoop 2.6

1. Set JAVA_HOME

In each of hadoop-env.sh, mapred-env.sh, and yarn-env.sh, add the following:

    export JAVA_HOME=/opt/modules/jdk1.7.0_67

2. Configure core-site.xml

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.6.0-cdh5.4.0/data/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark.learn.com:8020</value>
    </property>

3. Configure hdfs-site.xml

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

4. Configure mapred-site.xml

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

5. Configure yarn-site.xml

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>spark.learn.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

6. Configure slaves

    spark.learn.com    # hostname of the NodeManager and DataNode

7. Format the NameNode

    bin/hdfs namenode -format
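The article does not show starting the Hadoop daemons, but HDFS (and YARN, for the Spark-on-YARN section later) must be running before Spark can read hdfs:// paths or submit jobs to the cluster. A minimal sketch using the standard Hadoop scripts, run from the Hadoop installation directory:

    # start HDFS (NameNode + DataNode) and YARN (ResourceManager + NodeManager)
    sbin/start-dfs.sh
    sbin/start-yarn.sh

    # jps should now list NameNode, DataNode, ResourceManager and NodeManager
    jps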

3. Installing and deploying Spark in its different modes

1. Local mode

Basic syntax tests in local mode.

1. Launch the shell directly:

    bin/spark-shell

(Screenshot: spark-shell started successfully)

2. Spark web UI:

    http://spark.learn.com:4040

(Screenshot: Spark web UI)

3. Read README.md:

    val textFile = sc.textFile("README.md")

(Screenshot: README.md loaded)

4. count:

    textFile.count()

(Screenshot: count result)

5. first:

    textFile.first()

(Screenshot: first result)

6. filter:

    val linesWithSpark = textFile.filter(line => line.contains("Spark"))

(Screenshot: filter result)

    textFile.filter(line => line.contains("Spark")).count()

(Screenshot: filtered count)

More complex computations:

    textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)

(Screenshot: longest line length)

    import java.lang.Math
    textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))

(Screenshot: same result using java.lang.Math)

    val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)

(Screenshot: word count RDD)

    wordCounts.collect()

(Screenshot: collected results)

Caching:

    scala> linesWithSpark.cache()

(Screenshot: cache)

    scala> linesWithSpark.count()

(Screenshot: count after caching)

2. Standalone mode

(Figure: Spark cluster mode architecture)

1. Configure spark-env.sh:

    HADOOP_CONF_DIR=/opt/modules/hadoop-2.6.0-cdh5.4.0/etc/hadoop
    JAVA_HOME=/opt/modules/jdk1.7.0_67
    SCALA_HOME=/opt/modules/scala-2.10.4
    SPARK_MASTER_IP=spark.learn.com
    SPARK_MASTER_PORT=7077
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_WORKER_CORES=1
    SPARK_WORKER_MEMORY=1000m
    SPARK_WORKER_PORT=7078
    SPARK_WORKER_WEBUI_PORT=8081
    SPARK_WORKER_INSTANCES=1

2. Configure spark-defaults.conf:

    spark.master spark://spark.learn.com:7077

3. Configure slaves:

    spark.learn.com

4. Start Spark:

    sbin/start-master.sh
    sbin/start-slaves.sh

(Screenshot: master and worker started successfully)
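A quick way to confirm the standalone daemons are up (the web UI port comes from the spark-env.sh settings above):

    # the Master and Worker JVMs should both appear
    jps

    # the master web UI configured above listens on port 8080;
    # open http://spark.learn.com:8080 in a browser, or check from the shell:
    curl -s http://spark.learn.com:8080 | head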

5. Command-line tests

To read files from the local filesystem instead of HDFS, comment out the following line in spark-env.sh:

    HADOOP_CONF_DIR=/opt/modules/hadoop-2.6.0-cdh5.4.0/etc/hadoop

Read a file from HDFS:

    val textFile = sc.textFile("hdfs://spark.learn.com:8020/user/hadoop/spark")

(Screenshot: reading from HDFS succeeded)
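The HDFS path read above has to exist and contain data beforehand. A minimal sketch of creating it and uploading a test file, assuming Spark's README.md is used as input and the commands are run from the Hadoop installation directory:

    # create the directory in HDFS and upload a sample text file
    bin/hdfs dfs -mkdir -p /user/hadoop/spark
    bin/hdfs dfs -put /opt/modules/spark-1.3.0-bin-2.6.0-cdh5.4.0/README.md /user/hadoop/spark/
    bin/hdfs dfs -ls /user/hadoop/spark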

    val wordcount = textFile.flatMap(x=>x.split(" ")).map(x=>(x,1)).reduceByKey((a,b)=>a+b).collect()

(Screenshot: word count output)

    val wordcount = textFile.flatMap(x=>x.split(" ")).map(x=>(x,1)).reduceByKey((a,b)=>a+b).sortByKey(true).collect()

(Screenshot: output sorted by key)

    val wordcount = textFile.flatMap(x=>x.split(" ")).map(x=>(x,1)).reduceByKey((a,b)=>a+b).map(x=>(x._2,x._1)).sortByKey(false).collect()

(Screenshot: output sorted by value)

    sc.textFile("hdfs://spark.learn.com:8020/user/cyhp/spark/wc.input").flatMap(_.split(" ")).map((_,1)).reduceByKey(_ + _).collect

3. Spark on YARN

View the spark-submit options:

    [hadoop@spark spark-1.3.0-bin-2.6.0-cdh5.4.0]$ bin/spark-submit --help

    Spark assembly has been built with Hive, including Datanucleus jars on classpath
    Usage: spark-submit [options] <app jar | python file> [app arguments]
    Usage: spark-submit --kill [submission ID] --master [spark://...]
    Usage: spark-submit --status [submission ID] --master [spark://...]

    Options:
      --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
      --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                                  on one of the worker machines inside the cluster ("cluster")
                                  (Default: client).
      --class CLASS_NAME          Your application's main class (for Java / Scala apps).
      --name NAME                 A name of your application.
      --jars JARS                 Comma-separated list of local jars to include on the driver
                                  and executor classpaths.
      --packages                  Comma-separated list of maven coordinates of jars to include
                                  on the driver and executor classpaths. Will search the local
                                  maven repo, then maven central and any additional remote
                                  repositories given by --repositories. The format for the
                                  coordinates should be groupId:artifactId:version.
      --repositories              Comma-separated list of additional remote repositories to
                                  search for the maven coordinates given with --packages.
      --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                                  on the PYTHONPATH for Python apps.
      --files FILES               Comma-separated list of files to be placed in the working
                                  directory of each executor.
      --conf PROP=VALUE           Arbitrary Spark configuration property.
      --properties-file FILE      Path to a file from which to load extra properties. If not
                                  specified, this will look for conf/spark-defaults.conf.
      --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
      --driver-java-options       Extra Java options to pass to the driver.
      --driver-library-path       Extra library path entries to pass to the driver.
      --driver-class-path         Extra class path entries to pass to the driver. Note that
                                  jars added with --jars are automatically included in the
                                  classpath.
      --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
      --proxy-user NAME           User to impersonate when submitting the application.
      --help, -h                  Show this help message and exit
      --verbose, -v               Print additional debug output
      --version,                  Print the version of current Spark

     Spark standalone with cluster deploy mode only:
      --driver-cores NUM          Cores for driver (Default: 1).
      --supervise                 If given, restarts the driver on failure.
      --kill SUBMISSION_ID        If given, kills the driver specified.
      --status SUBMISSION_ID      If given, requests the status of the driver specified.

     Spark standalone and Mesos only:
      --total-executor-cores NUM  Total cores for all executors.

     YARN-only:
      --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                                  (Default: 1).
      --executor-cores NUM        Number of cores per executor (Default: 1).
      --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
      --num-executors NUM         Number of executors to launch (Default: 2).
      --archives ARCHIVES         Comma separated list of archives to be extracted into the
                                  working directory of each executor.
1. Submit in local mode

(Figure: client mode)

    bin/spark-submit \
        --class org.apache.spark.examples.SparkPi \
        lib/spark-examples-1.3.0-hadoop2.6.0-cdh5.4.0.jar \
        10

(Screenshot: local submit output)

2. Submit in standalone mode

(Figure: cluster mode)

    bin/spark-submit \
        --deploy-mode cluster \
        --class org.apache.spark.examples.SparkPi \
        lib/spark-examples-1.3.0-hadoop2.6.0-cdh5.4.0.jar \
        10

(Screenshot: submit in standalone cluster mode)

3. Submit to YARN

YARN cluster mode:

(Figure: YARN cluster mode)

YARN client mode:

(Figure: YARN client mode)

    bin/spark-submit \
        --master yarn \
        --class org.apache.spark.examples.SparkPi \
        lib/spark-examples-1.3.0-hadoop2.6.0-cdh5.4.0.jar \
        10

(Screenshot: output of the submission to YARN)
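The command above runs in YARN client mode (the default deploy mode). For the YARN cluster mode shown in the first figure, where the driver runs inside the ApplicationMaster, a sketch of the equivalent submission on Spark 1.3:

    # the yarn-cluster master URL ships the driver to the cluster instead of running it locally
    bin/spark-submit \
        --master yarn-cluster \
        --class org.apache.spark.examples.SparkPi \
        lib/spark-examples-1.3.0-hadoop2.6.0-cdh5.4.0.jar \
        10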

4. Spark monitoring UI

1. Start the Spark history server

Run the following command to start the Spark history server:

    ./sbin/start-history-server.sh

It can then be viewed at:

    http://<server-url>:18080

2. History server configuration options

1. Environment variables

The history server can be configured with the following environment variables:

Environment Variable      Meaning
SPARK_DAEMON_MEMORY       Memory to allocate to the history server (default: 512m).
SPARK_DAEMON_JAVA_OPTS    JVM options for the history server (default: none).
SPARK_PUBLIC_DNS          The public address for the history server. If this is not set, links to application history may use the internal address of the server, resulting in broken links (default: none).
SPARK_HISTORY_OPTS        spark.history.* configuration options for the history server (default: none).

2. SPARK_HISTORY_OPTS properties

The following spark.history.* properties can be set through SPARK_HISTORY_OPTS:

spark.history.provider (default: org.apache.spark.deploy.history.FsHistoryProvider)
    Name of the class implementing the application history backend. Currently there is only one implementation, provided by Spark, which looks for application logs stored in the file system.

spark.history.fs.logDirectory (default: file:/tmp/spark-events)
    Directory that contains application event logs to be loaded by the history server.

spark.history.fs.updateInterval (default: 10)
    The period, in seconds, at which information displayed by this history server is updated. Each update checks for any changes made to the event logs in persisted storage.

spark.history.retainedApplications (default: 50)
    The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed.

spark.history.ui.port (default: 18080)
    The port to which the history server's web interface binds.

spark.history.kerberos.enabled (default: false)
    Indicates whether the history server should use kerberos to login. This is useful if the history server is accessing HDFS files on a secure Hadoop cluster. If this is true, it uses the configs spark.history.kerberos.principal and spark.history.kerberos.keytab.

spark.history.kerberos.principal (default: none)
    Kerberos principal name for the History Server.

spark.history.kerberos.keytab (default: none)
    Location of the kerberos keytab file for the History Server.

spark.history.ui.acls.enable (default: false)
    Specifies whether acls should be checked to authorize users viewing the applications. If enabled, access control checks are made regardless of what the individual application had set for spark.ui.acls.enable when the application was run. The application owner will always have authorization to view their own application and any users specified via spark.ui.view.acls when the application was run will also have authorization to view that application. If disabled, no access control checks are made.

3. Marking an application as finished

Note that the history server only shows completed Spark applications. One way to mark an application as finished is to call sc.stop() explicitly.

4. Settings in spark-defaults.conf

Property Name              Default                     Meaning
spark.eventLog.compress    false                       Whether to compress logged events.
spark.eventLog.dir         file:///tmp/spark-events    Base directory in which Spark events are logged.
spark.eventLog.enabled     false                       Whether to log Spark events, useful for reconstructing the web UI after the application has finished.

3. Concrete configuration

1. In spark-env.sh:

    SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://spark.learn.com:8020/user/hadoop/spark/history"

2. In spark-defaults.conf:

    spark.eventLog.enabled true
    spark.eventLog.dir hdfs://spark.learn.com:8020/user/hadoop/spark/history
    spark.eventLog.compress true
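The HDFS directory pointed to by spark.history.fs.logDirectory and spark.eventLog.dir is not created automatically, and it generally has to exist before the history server or an event-logging application starts. A sketch of creating it first, run from the Hadoop installation directory:

    # create the shared event-log directory in HDFS before starting the history server
    bin/hdfs dfs -mkdir -p /user/hadoop/spark/history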

4. Test

Start the relevant services:

    sbin/start-master.sh
    sbin/start-slaves.sh
    sbin/start-history-server.sh
    bin/spark-shell

Run a Spark application:

    val textFile = sc.textFile("hdfs://spark.learn.com:8020/user/hadoop/spark/input/")
    textFile.count
    sc.stop

(Screenshot: application ran successfully and appears in the history server)
