Installing and Configuring Hadoop 2.6 and HBase 1.0 (Single-Node)
Date: 2016-02-23 16:06  Source: linux.it.net.cn  Author: IT
Environment
OS: Ubuntu 14.04
Hadoop version: 2.6.0
HBase version: 1.0
JDK version: 1.8
Downloads: all of the above are available from the Apache site.
Setting up the JDK is not covered here; we start with the Hadoop configuration.
Hadoop installation
1. Install location: /opt
2. Create the hadoop group:
sudo addgroup hadoop
3. Create the hadoop user:
sudo adduser --ingroup hadoop hadoop
4. Grant the hadoop user sudo privileges:
sudo vim /etc/sudoers
Add the following under the line root ALL=(ALL:ALL) ALL:
hadoop ALL=(ALL:ALL) ALL
5. Install SSH:
sudo apt-get install ssh openssh-server
6. Set up passwordless SSH login:
su - hadoop
ssh-keygen -t rsa -P ""
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
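If the login test below still prompts for a password, permissions on the key files are a common cause; a quick fix, assuming the default key locations:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys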
Test it:
ssh localhost
7. Extract Hadoop:
tar -zxvf hadoop-2.6.0.tar.gz
sudo mv hadoop-2.6.0 /opt/hadoop
sudo chmod -R 775 /opt/hadoop
sudo chown -R hadoop:hadoop /opt/hadoop
8. Configure environment variables:
sudo vim ~/.bashrc
Append the following at the end:
#HADOOP VARIABLES START
export JAVA_HOME=/opt/jdk1.8.0
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
#HADOOP VARIABLES END
export HBASE_HOME=/opt/hbase
export PATH=$PATH:$HBASE_HOME/bin
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${HADOOP_HOME}/share/hadoop/common/lib:${HBASE_HOME}/lib
Apply the changes:
source ~/.bashrc
9. Edit hadoop-env.sh and set JAVA_HOME to the JDK install directory, here /opt/jdk1.8.0.
10. Edit core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
</configuration>
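To confirm that Hadoop picks this file up, you can query the value back; hdfs getconf is part of the standard Hadoop 2.x command-line tools:
hdfs getconf -confKey fs.defaultFS
This should print hdfs://localhost:9000.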
11. Rename mapred-site.xml.template to mapred-site.xml and edit it:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
12. Edit yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
13. Edit hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
14. Edit masters and slaves. The masters file may not exist; create it yourself if needed. In each file, insert:
localhost
15. Create the data directories:
cd /opt/hadoop
mkdir tmp dfs dfs/name dfs/data
16. Format HDFS:
hdfs namenode -format
17. Start Hadoop:
start-dfs.sh
start-yarn.sh
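Before moving on to HBase, it is worth checking that the daemons came up. A minimal smoke test; the examples jar name is assumed from the stock 2.6.0 tarball layout:
jps
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -ls /
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5
jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager; the pi job should finish with an estimate of pi.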
HBase installation
Installing HBase is comparatively simple; it mostly amounts to integrating it with Hadoop.
1. Extract:
tar -zxvf hbase-1.0.0-bin.tar.gz
sudo mv hbase-1.0.0 /opt/hbase
cd /opt
sudo chmod -R 775 hbase
sudo chown -R hadoop:hadoop hbase
2. Edit the environment settings:
sudo vim /opt/hbase/conf/hbase-env.sh
Set JAVA_HOME to the JDK install directory, here /opt/jdk1.8.0 (a sketch of the relevant lines follows the XML below).
3. Edit hbase-site.xml and add:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
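For reference, the hbase-env.sh edits from step 2 might look like the following. HBASE_MANAGES_ZK is a standard hbase-env.sh switch; it defaults to true, and it is what launches the HQuorumPeer process listed in step 6 below:
export JAVA_HOME=/opt/jdk1.8.0
# Let HBase manage its own ZooKeeper (the default); this starts HQuorumPeer.
export HBASE_MANAGES_ZK=true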
4. Start HBase:
start-hbase.sh
5. Enter the HBase shell:
hbase shell
6. Check the processes. Running jps should now show nine processes in total:
3616 NodeManager
3008 NameNode
6945 HQuorumPeer
7010 HMaster
3302 SecondaryNameNode
3128 DataNode
7128 HRegionServer
3496 ResourceManager
7209 Jps
The process IDs will differ on your machine.
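As a quick functional check inside the HBase shell, create and drop a throwaway table; the table and column-family names here are arbitrary examples:
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'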
At this point both Hadoop and HBase are installed. Note that this is a single-node, or pseudo-distributed, setup. I hope it helps.