HBase Learning Path (Part 2): Installing an HBase Cluster

 HK123COM 2019-02-14


Prerequisites

1. HBase relies on HDFS for its underlying data storage.

2. HBase relies on MapReduce for data computation.

3. HBase relies on ZooKeeper for service coordination.

4. HBase is written in Java, so installation requires a JDK.
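
Before moving on, it may help to confirm each prerequisite is in place. A minimal sanity check, assuming the relevant binaries are already on the installing user's PATH:

```shell
# Confirm the JDK (HBase 1.2 runs on Java 7 or 8)
java -version

# Confirm the Hadoop installation (this guide uses 2.7.5)
hadoop version

# Confirm ZooKeeper is installed and see this node's role
zkServer.sh status
```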

Choosing Versions

Open the official version notes in the reference guide: http://hbase./1.2/book.html

Choosing a JDK

Choosing a Hadoop version

Here our Hadoop version is 2.7.5, and the HBase version chosen is 1.2.6.

Installation

1. Install ZooKeeper

See http://www.cnblogs.com/qingyunzong/p/8619184.html

2. Install Hadoop

See http://www.cnblogs.com/qingyunzong/p/8634335.html

3. Download the installation package

Download the HBase package hbase-1.2.6-bin.tar.gz from the official site; one mirror download address is: http://mirrors./apache/hbase/

4. Upload to the server and extract into the target directory

[hadoop@hadoop1 ~]$ ls
apps  data  hbase-1.2.6-bin.tar.gz  hello.txt  log  zookeeper.out
[hadoop@hadoop1 ~]$ tar -zxvf hbase-1.2.6-bin.tar.gz -C apps/

5. Edit the configuration files

The configuration files are in the conf folder of the extracted package.

(1) Edit hbase-env.sh

[hadoop@hadoop1 conf]$ vi hbase-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73
export HBASE_MANAGES_ZK=false    # use the external ZooKeeper ensemble, not the one bundled with HBase

(2) Edit hbase-site.xml

[hadoop@hadoop1 conf]$ vi hbase-site.xml
<configuration>

        <property>
                <!-- Path under which HBase stores its data on HDFS -->
                <name>hbase.rootdir</name>
                <value>hdfs://myha01/hbase126</value>
        </property>
        <property>
                <!-- Run HBase in fully distributed mode -->
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <!-- ZooKeeper quorum addresses, separated by commas -->
                <name>hbase.zookeeper.quorum</name>
                <value>hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
        </property>

</configuration>

(3) Edit regionservers

[hadoop@hadoop1 conf]$ vi regionservers 
hadoop1
hadoop2
hadoop3
hadoop4

(4) Create backup-masters

This file does not exist by default, so create it yourself:

[hadoop@hadoop1 conf]$ vi backup-masters
hadoop4

(5) Copy in hdfs-site.xml and core-site.xml

This is a crucial step: copy Hadoop's hdfs-site.xml and core-site.xml into hbase-1.2.6/conf.

[hadoop@hadoop1 conf]$ cd ~/apps/hadoop-2.7.5/etc/hadoop/
[hadoop@hadoop1 hadoop]$ cp core-site.xml hdfs-site.xml ~/apps/hbase-1.2.6/conf/

6. Distribute the HBase installation to the other nodes

Before distributing, first delete the docs folder under the HBase directory:

[hadoop@hadoop1 hbase-1.2.6]$ rm -rf docs/

Then distribute it:

[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop4:$PWD

7. Synchronize clocks

HBase is stricter about clock synchronization than HDFS, so be sure to synchronize the clocks before starting the cluster; the skew between nodes must not exceed 30 seconds.
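
A common way to enforce this is to point every node at the same NTP server and then spot-check the skew. The sketch below is illustrative: the ntpdate step is commented out because the server address depends on your network, and on a real cluster the remote clock would be read over SSH.

```shell
# On every node, first sync against a common time source, e.g.:
#   sudo ntpdate <your-ntp-server>

# Then verify that the skew between two nodes is under 30 seconds.
local_epoch=$(date +%s)
# On a real cluster, fetch the remote clock with: remote_epoch=$(ssh hadoop2 date +%s)
remote_epoch=$local_epoch   # placeholder so this check runs standalone
skew=$(( local_epoch - remote_epoch ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))
if [ "$skew" -lt 30 ]; then
  echo "clock skew OK (${skew}s)"
else
  echo "clock skew too large (${skew}s)"
fi
```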

8. Configure environment variables

Do this on every server:

[hadoop@hadoop1 apps]$ vi ~/.bashrc 
#HBase
export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin

Make the change take effect immediately:

[hadoop@hadoop1 apps]$ source ~/.bashrc 
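
A quick way to confirm the PATH change took effect:

```shell
# Should print the HBase version banner for this install (1.2.6)
hbase version
```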

Starting the HBase Cluster

Start the services strictly in the following order.

1. Start the ZooKeeper ensemble

Run the following on every ZooKeeper node:

[hadoop@hadoop1 apps]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 apps]$ 

2. Start the HDFS cluster (and YARN, if needed)

Start the YARN cluster only if you need to run MapReduce jobs; otherwise it is not required.

[hadoop@hadoop1 apps]$ start-dfs.sh
Starting namenodes on [hadoop1 hadoop2]
hadoop2: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop2.out
hadoop1: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop4: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop4.out
hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop1.out
Starting journal nodes [hadoop1 hadoop2 hadoop3]
hadoop3: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop3.out
hadoop2: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop2.out
hadoop1: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop1.out
Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
hadoop2: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop2.out
hadoop1: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop1.out
[hadoop@hadoop1 apps]$ 

After startup, check the NameNode states:

[hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn2
active
[hadoop@hadoop1 apps]$ 

3. Start HBase

With the ZooKeeper and HDFS clusters confirmed healthy, start the HBase cluster with start-hbase.sh. Whichever node you run this command on becomes the active master.

[hadoop@hadoop1 conf]$ start-hbase.sh
starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop1.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop3: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop3.out
hadoop4: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop4.out
hadoop2: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop2.out
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop1: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop1.out
hadoop4: starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop4.out
[hadoop@hadoop1 conf]$ 

The startup log shows that:

(1) The master starts first, on the node where the command was executed.

(2) Regionservers then start on hadoop1, hadoop2, hadoop3, and hadoop4.

(3) Finally, another master process starts on the backup node configured in backup-masters.

Verifying the Startup

1. Check that the processes on each node started correctly

The active master node and the backup master node should each run an HMaster process.

Each worker node should run an HRegionServer process.

With this configuration, those are the processes each node should be running.
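
To check every node in one pass (host names as in this guide; passwordless SSH between the nodes is assumed, as the earlier scp distribution step already relied on it):

```shell
# List the HBase daemons running on each node
for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  echo "== $host =="
  ssh "$host" jps | egrep 'HMaster|HRegionServer'
done
```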

2. Check the web UIs

hadoop1 (active master UI)

hadoop4 (backup master UI)

The page shows that hadoop4 is the backup master.

3. Verify high availability

Kill the HBase master process on hadoop1 and watch whether the backup master takes over:

[hadoop@hadoop1 conf]$ jps
4960 HMaster
2960 QuorumPeerMain
3169 NameNode
3699 DFSZKFailoverController
3285 DataNode
5098 HRegionServer
5471 Jps
3487 JournalNode
[hadoop@hadoop1 conf]$ kill -9 4960

The hadoop1 web UI is no longer reachable.

hadoop4 has become the active master.
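
The failover can also be confirmed from the command line. In HBase 1.x the master web UI listens on port 16010 by default, so a rough probe (host names from this guide) looks like:

```shell
# hadoop1's master UI should now refuse connections
curl -s --max-time 5 http://hadoop1:16010/master-status >/dev/null || echo "hadoop1 master UI is down"

# hadoop4's UI should answer, now as the active master
curl -s --max-time 5 -o /dev/null -w "hadoop4 HTTP %{http_code}\n" http://hadoop4:16010/master-status
```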

4. If a node's expected process did not start, start it manually

Start an HMaster process:

[hadoop@hadoop3 conf]$ jps
3360 Jps
2833 JournalNode
2633 QuorumPeerMain
3179 HRegionServer
2732 DataNode
[hadoop@hadoop3 conf]$ hbase-daemon.sh start master
starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop3.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
[hadoop@hadoop3 conf]$ jps
2833 JournalNode
3510 Jps
3432 HMaster
2633 QuorumPeerMain
3179 HRegionServer
2732 DataNode
[hadoop@hadoop3 conf]$ 

 

Start an HRegionServer process:

[hadoop@hadoop3 conf]$ hbase-daemon.sh start regionserver 
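
With all daemons up, a short smoke test from the HBase shell verifies that the cluster can serve writes and reads end to end (the table name 'smoke' here is arbitrary):

```shell
# Create a table, write one cell, read it back, then drop the table
echo "create 'smoke', 'cf'
put 'smoke', 'row1', 'cf:c1', 'hello'
scan 'smoke'
disable 'smoke'
drop 'smoke'" | hbase shell
```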

 
