1. HBase relies on HDFS for its underlying data storage
2. HBase relies on MapReduce for data computation
3. HBase relies on ZooKeeper for service coordination
4. HBase is written in Java, so installation requires a JDK
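Since the JDK is a hard prerequisite, it is worth confirming its version before installing anything. The helper below is a hypothetical sketch (not part of any Hadoop/HBase tooling) that pulls the major number out of a classic `1.x.y_zz`-style `java -version` string:

```shell
# Hypothetical helper: extract the "x" from a classic "1.x.y_zz" Java
# version string (e.g. 1.8.0_73 -> 8).
jdk_major() {
  sed -n 's/.*"1\.\([0-9][0-9]*\)\..*/\1/p'
}

# On a real machine, feed it `java -version` (which prints to stderr):
#   java -version 2>&1 | head -n 1 | jdk_major
echo 'java version "1.8.0_73"' | jdk_major   # prints 8
```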
Open the official reference guide: http://hbase.apache.org/1.2/book.html
Our Hadoop version here is 2.7.5, so we choose HBase 1.2.6.
Reference: http://www.cnblogs.com/qingyunzong/p/8619184.html
Reference: http://www.cnblogs.com/qingyunzong/p/8634335.html
Download the HBase package hbase-1.2.6-bin.tar.gz from the official site; here is a mirror download address: http://mirrors.hust.edu.cn/apache/hbase/
[hadoop@hadoop1 ~]$ ls
apps data hbase-1.2.6-bin.tar.gz hello.txt log zookeeper.out
[hadoop@hadoop1 ~]$ tar -zxvf hbase-1.2.6-bin.tar.gz -C apps/
The configuration files live in the conf folder of the unpacked distribution.
[hadoop@hadoop1 conf]$ vi hbase-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73
export HBASE_MANAGES_ZK=false   # use the external ZooKeeper cluster, not the one bundled with HBase
[hadoop@hadoop1 conf]$ vi hbase-site.xml
<configuration>
    <property>
        <!-- Path where HBase stores its data on HDFS -->
        <name>hbase.rootdir</name>
        <value>hdfs://myha01/hbase126</value>
    </property>
    <property>
        <!-- Run HBase in distributed mode -->
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <!-- ZooKeeper quorum; separate multiple addresses with "," -->
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
    </property>
</configuration>
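A misspelled property name fails silently, so a quick sanity check is worthwhile. The helper below is an illustrative sketch (not part of HBase) that merely counts how many of the three required property names appear; it does not validate the XML:

```shell
# Hypothetical sanity check: count occurrences of the three required
# hbase-site.xml property names on stdin. Expect 3 for the config above.
count_props() {
  grep -c -E 'hbase\.(rootdir|cluster\.distributed|zookeeper\.quorum)'
}

# Real usage:  count_props < hbase-site.xml
printf '<name>hbase.rootdir</name>\n<name>hbase.cluster.distributed</name>\n<name>hbase.zookeeper.quorum</name>\n' | count_props   # prints 3
```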
[hadoop@hadoop1 conf]$ vi regionservers
hadoop1
hadoop2
hadoop3
hadoop4
This file does not exist by default, so create it yourself:
[hadoop@hadoop1 conf]$ vi backup-masters
hadoop4
The most important step: copy Hadoop's hdfs-site.xml and core-site.xml into hbase-1.2.6/conf. Since hbase.rootdir points at the HA nameservice myha01, HBase needs these files to resolve it.
[hadoop@hadoop1 conf]$ cd ~/apps/hadoop-2.7.5/etc/hadoop/
[hadoop@hadoop1 hadoop]$ cp core-site.xml hdfs-site.xml ~/apps/hbase-1.2.6/conf/
Before distributing, delete the docs folder in the HBase directory (it is large and not needed at runtime):
[hadoop@hadoop1 hbase-1.2.6]$ rm -rf docs/
Then distribute to the other nodes:
[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop4:$PWD
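The three scp commands can also be written as one loop. The sketch below uses `echo` as a dry run so the commands are only printed; drop the `echo` to actually copy:

```shell
# Loop form of the distribution step; `echo` makes this a dry run.
for host in hadoop2 hadoop3 hadoop4; do
  echo scp -r hbase-1.2.6/ "$host:$PWD"
done
```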
HBase is stricter about clock synchronization than HDFS, so be sure to synchronize the clocks before starting the cluster; the nodes should differ by no more than 30 s.
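The 30 s bound can be checked by comparing epoch timestamps across nodes. The `within_30s` helper below is made up for illustration; on a real cluster you would obtain the remote timestamp with `ssh hadoop2 date +%s`:

```shell
# Hypothetical helper: succeed iff two epoch timestamps are within 30 seconds.
within_30s() {
  d=$(( $1 - $2 ))
  [ "${d#-}" -le 30 ]   # strip the sign, then compare the absolute skew
}

# Real usage on the cluster (compare a remote node against this one):
#   within_30s "$(ssh hadoop2 date +%s)" "$(date +%s)" || echo "clocks differ too much"
within_30s 1000 1020 && echo "in sync"   # prints "in sync" (skew is 20 s)
```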
Configure the environment variables on every server:
[hadoop@hadoop1 apps]$ vi ~/.bashrc
#HBase
export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin
Make the environment variables take effect immediately:
[hadoop@hadoop1 apps]$ source ~/.bashrc
Start the services strictly in the following order.
Run the following command on every ZooKeeper node:
[hadoop@hadoop1 apps]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 apps]$
If you need to run MapReduce jobs, also start the YARN cluster; otherwise it is not required. Start the HDFS cluster:
[hadoop@hadoop1 apps]$ start-dfs.sh
Starting namenodes on [hadoop1 hadoop2]
hadoop2: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop2.out
hadoop1: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop4: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop4.out
hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop1.out
Starting journal nodes [hadoop1 hadoop2 hadoop3]
hadoop3: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop3.out
hadoop2: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop2.out
hadoop1: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop1.out
Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
hadoop2: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop2.out
hadoop1: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop1.out
[hadoop@hadoop1 apps]$
After startup, check the NameNode states:
[hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn2
active
[hadoop@hadoop1 apps]$
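This check can be scripted: the cluster is healthy if at least one of the two reported states is `active`. The `any_active` helper below is an illustrative sketch, not a Hadoop command:

```shell
# Hypothetical helper: succeed iff any argument equals "active".
any_active() {
  for state in "$@"; do
    [ "$state" = "active" ] && return 0
  done
  return 1
}

# Real usage on the cluster:
#   any_active "$(hdfs haadmin -getServiceState nn1)" "$(hdfs haadmin -getServiceState nn2)"
any_active standby active && echo "one NameNode is active"
```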
With the ZooKeeper and HDFS clusters running normally, start the HBase cluster with start-hbase.sh. Whichever node you run this command on becomes the active master.
[hadoop@hadoop1 conf]$ start-hbase.sh
starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop1.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop3: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop3.out
hadoop4: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop4.out
hadoop2: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop2.out
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop1: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop1.out
hadoop4: starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop4.out
[hadoop@hadoop1 conf]$
From the startup log you can see:
(1) The master starts first on the node where the command was executed.
(2) Then a regionserver is started on each of hadoop1, hadoop2, hadoop3 and hadoop4.
(3) Finally, another master process is started on the backup node configured in backup-masters.
Both the active master and the backup master run an HMaster process, and every slave node runs an HRegionServer process. Given the configuration above, that is what jps should show on each node (the original screenshots are omitted).
The master web UIs on hadoop1 and hadoop4 (screenshots omitted) show that hadoop4 is the backup master.
Kill the HBase master process on hadoop1 and watch whether the backup master takes over:
[hadoop@hadoop1 conf]$ jps
4960 HMaster
2960 QuorumPeerMain
3169 NameNode
3699 DFSZKFailoverController
3285 DataNode
5098 HRegionServer
5471 Jps
3487 JournalNode
[hadoop@hadoop1 conf]$ kill -9 4960
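Instead of reading the PID off the jps listing by eye, it can be extracted with awk. The `hmaster_pid` helper below is a hypothetical convenience, shown here against a canned jps-style listing:

```shell
# Hypothetical helper: print the PID of the HMaster line in jps-style output.
hmaster_pid() {
  awk '$2 == "HMaster" {print $1}'
}

# Real usage on the master node:  kill -9 "$(jps | hmaster_pid)"
printf '4960 HMaster\n5098 HRegionServer\n' | hmaster_pid   # prints 4960
```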
The hadoop1 web UI is no longer reachable, and hadoop4 has become the active master.
Start an HMaster process individually:
[hadoop@hadoop3 conf]$ jps
3360 Jps
2833 JournalNode
2633 QuorumPeerMain
3179 HRegionServer
2732 DataNode
[hadoop@hadoop3 conf]$ hbase-daemon.sh start master
starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop3.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
[hadoop@hadoop3 conf]$ jps
2833 JournalNode
3510 Jps
3432 HMaster
2633 QuorumPeerMain
3179 HRegionServer
2732 DataNode
[hadoop@hadoop3 conf]$
Start an HRegionServer process individually:
[hadoop@hadoop3 conf]$ hbase-daemon.sh start regionserver