
Download spark-2.2.0-bin-hadoop2.7











#Download spark 2.2.0 bin hadoop2.7 update#

Modify the permissions:

sudo chmod a+w -R spark

Modify the owning user and user group:

sudo chown hadoop:hadoop -R hadoop

First, add the installation path to the system path. Open /etc/profile:

sudo vim /etc/profile

Add at the bottom:

export SPARK_HOME=/home/hadoop/spark

Then update the system configuration through:

source /etc/profile
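The profile edit above can be sketched against a scratch file, so it runs anywhere; in the real setup you append the same lines to /etc/profile with sudo and re-source it. The PATH line is an assumption (not in the original steps), added so the spark binaries would resolve:

```shell
# Sketch of the /etc/profile edit above, played against a scratch file;
# in the real setup, append to /etc/profile and `source` it.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export SPARK_HOME=/home/hadoop/spark
export PATH=$SPARK_HOME/bin:$PATH   # assumption: also put spark on the PATH
EOF
. "$profile"          # same effect as `source /etc/profile` in a login shell
echo "$SPARK_HOME"    # prints /home/hadoop/spark
```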

#Download spark 2.2.0 bin hadoop2.7 install#

Decompress the JDK package:

sudo tar -zxvf jdk-8u191-linux-x64.tar.gz

Add the following to the end of the document (note that you need to change the jdk directory to your own):

export JAVA_HOME=/home/hadoop/java/jdk1.8.0_191
export CLASSPATH=.:$

Format the master node (only the master node needs to be formatted; note: it can only be formatted once at a time. If it needs to be formatted a second time, delete the dfs and tmp directories first, and then recreate them):

hdfs namenode -format

If the output status code is 0, the format is complete; if it is 1, there is an error. After the startup is completed, you can view the running processes through the jps command; you may need to wait for a period of time.

6.2 Installation

Official download address of Scala:

Decompress the compressed file directly to the /home/hadoop directory:

sudo tar -xvf scala-2.12.12.tgz -C /home/hadoop

Change the name of the directory to scala:

sudo mv scala-2.12.12 scala

6.3 Configuration

Then we need to add Scala's home property to /etc/profile:

sudo vim /etc/profile

Add at the end:

export SCALA_HOME=/home/hadoop/scala

Enter scala in the console; if the Scala prompt appears, Scala is configured successfully.

7 Install Spark

7.1 Download

Download address of the official website:

7.2 Installation

Decompress the compressed file directly to the /home/hadoop directory:

sudo tar -xvf spark-3.0.1-bin-hadoop2.7.tar -C /home/hadoop

Change the name of the directory to spark:

sudo mv spark-3.0.1-bin-hadoop2.7 spark
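The status-code check described above can be sketched as follows. Here `true` is only a stand-in for `hdfs namenode -format`, which needs a configured Hadoop install to run:

```shell
# Exit-status check for the format step; `true` stands in for
# `hdfs namenode -format`, which needs a real Hadoop install.
format_cmd=true                       # really: hdfs namenode -format
if $format_cmd; then
  format_msg="format completed (status 0)"
else
  format_msg="format error (status 1)"
fi
echo "$format_msg"
```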


This article records the whole trial process. The versions used in this configuration are VMware 16 + Hadoop 2.7.7 + Java 8u191 + Ubuntu 20.04.1 + spark-3.0.1-bin-hadoop2.7 + Scala 2.12.12. First we build a pseudo-distributed Hadoop, and then change it into a cluster; if you want to build a Hadoop cluster from the beginning, you can refer to links 1 and 3. If you only want the pseudo-distributed setup, you can stop at step four.

Decompress the JDK and put it in the directory /home/hadoop/java.
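The JDK step can be sketched with a dummy tarball standing in for jdk-8u191-linux-x64.tar.gz, so it runs without the real download; a scratch directory stands in for /home/hadoop/java:

```shell
# Unpack-into-directory pattern from the JDK step above; a dummy tarball and
# a scratch directory stand in for the real download and /home/hadoop/java.
set -e
scratch=$(mktemp -d)
mkdir -p "$scratch/jdk1.8.0_191/bin"                 # fake JDK tree
tar -czf "$scratch/jdk.tar.gz" -C "$scratch" jdk1.8.0_191
java_dir="$scratch/java"                             # stands in for /home/hadoop/java
mkdir -p "$java_dir"
tar -zxf "$scratch/jdk.tar.gz" -C "$java_dir"        # really: sudo tar -zxvf jdk-8u191-linux-x64.tar.gz
test -d "$java_dir/jdk1.8.0_191/bin" && echo "jdk unpacked"
```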











