
Saturday, July 26, 2014

Hadoop 2.2.0 + HBase 0.98.0 install on CentOS 6.5 64bit




Install and run hadoop-2.2.0 and hbase-0.98.0 on a CentOS 6.5 minimal installation.
Download the installation files to /usr/local/src and install them to
/usr/local/java
/usr/local/hadoop
/usr/local/hbase
respectively.
In the instructions below, unless a directory is specified otherwise, assume everything is installed under /usr/local.
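
For reference, a minimal sketch of the layout this guide ends up with (everything here is run as root; the symlinks are created in the corresponding sections below):

  mkdir -p /usr/local/src
  # after the following sections, /usr/local contains symlinks such as:
  #   /usr/local/java   -> /usr/local/jdk1.7.0_45
  #   /usr/local/hadoop -> /usr/local/hadoop-2.2.0
  #   /usr/local/hbase  -> /usr/local/hbase-0.98.0-hadoop2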

  1. CentOS install

    1. Prerequisites
      1. Download the OS CD image file and burn it to a CD
        1. http://www.centos.org/download/
      2. Because of the many network restrictions inside the company, you need a proxy that allows outbound access for yum, maven builds, wget downloads, etc.
      3. Prepare IPs and hostnames
    2. Install the minimal version
      1. Installing the Desktop version pulls in many unnecessary packages and can waste resources.
    3. network setting
      1. vi /etc/sysconfig/network-scripts/ifcfg-eth0
        1. Set IPADDR, NETMASK, GATEWAY, etc. to the values you prepared.
          ONBOOT=yes
          NM_CONTROLLED=no
          BOOTPROTO=static
          IPADDR=xxx.xxx.81.164
          NETMASK=255.255.255.0
          GATEWAY=xxx.xxx.81.1
          ARPCHECK=no
      2. vi /etc/resolv.conf
        1. DNS (nameserver) settings
          nameserver 168.126.63.1
          nameserver 168.126.63.2
      3. vi /etc/hosts
        1. Register the hostnames here. (Otherwise you have to access the nodes by IP.)

          xxx.xxx.81.155 host005.com
          xxx.xxx.81.156 host006.com
          xxx.xxx.81.163 host013.com
          xxx.xxx.81.164 host014.com
      4. service network restart
        1. network restart
    4. proxy setting
      1. proxy setting
        1. vi /etc/profile
          export http_proxy=http://proxyhost:proxyport
          export https_proxy=https://proxyhost:proxyport
          export ftp_proxy=ftp://proxyhost:proxyport
        2. source /etc/profile
      2. yum proxy setting
        1. vi /etc/yum.conf
          proxy=http://proxyhost:proxyport
        2. service network restart
        3. yum clean all
        4. yum update
      3. maven proxy setting (assuming maven is already installed)
        1. Copy settings.xml to ~/.m2/settings.xml
          1. cp $MAVEN_HOME/conf/settings.xml ~/.m2/
        2. vi ~/.m2/settings.xml 
          <settings>
          ...
          <proxies>
          <proxy>
          <active>true</active>
          <protocol>http</protocol>
          <host>proxy host</host>
          <port>proxy port</port>
          <nonProxyHosts>local.net|some.host.com</nonProxyHosts>
          </proxy>
          </proxies>
          ...
          </settings>
          Replace the proxy host and port placeholders above with your prepared proxy server.
    5. Since this will be a cluster install, setting up ssh keys saves time (ssh access without typing a password), and it is also a required setting for running the hadoop cluster.
      1. Generate an authorized key on each node (PC, machine, etc.)
        ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
      2. Open ~/.ssh/authorized_keys on each node and share its contents with every other node.

        [node#1] vim ~/.ssh/authorized_keys
        ssh-rsa AAABOPI8AEJROIFAJPSDG09J[50459I-34JPADJFGOJI54ARGASD== root@myhost1
        [node#2] vim ~/.ssh/authorized_keys
        ssh-rsa SWEOIHTEWIIASDF-34JPADJFGOJI54ARGA46ERAASDFE4TAADSSA== root@myhost2
        [node#3] vim ~/.ssh/authorized_keys
        ssh-rsa E5YWERHTSDFGSAFSD50459I-34E4YTAETEARTJPADJFGOJI54ARA== root@myhost3
        [node#4] vim ~/.ssh/authorized_keys
        ssh-rsa EW5YSDGRE6TYSDGSDRGSDFRG5Y50459I-34JPADJFGOJI54ARGAA== root@myhost4
      3. Copy the combined keys to every node (a scripted alternative is sketched in step 6 at the end of this section)

        [node#1] vim ~/.ssh/authorized_keys
        ssh-rsa AAABOPI8AEJROIFAJPSDG09J[50459I-34JPADJFGOJI54ARGASD== root@myhost1
        ssh-rsa SWEOIHTEWIIASDF-34JPADJFGOJI54ARGA46ERAASDFE4TAADSSA== root@myhost2
        ssh-rsa E5YWERHTSDFGSAFSD50459I-34E4YTAETEARTJPADJFGOJI54ARA== root@myhost3
        ssh-rsa EW5YSDGRE6TYSDGSDRGSDFRG5Y50459I-34JPADJFGOJI54ARGAA== root@myhost4
        [node#2] vim ~/.ssh/authorized_keys
        ssh-rsa AAABOPI8AEJROIFAJPSDG09J[50459I-34JPADJFGOJI54ARGASD== root@myhost1
        ssh-rsa SWEOIHTEWIIASDF-34JPADJFGOJI54ARGA46ERAASDFE4TAADSSA== root@myhost2
        ssh-rsa E5YWERHTSDFGSAFSD50459I-34E4YTAETEARTJPADJFGOJI54ARA== root@myhost3
        ssh-rsa EW5YSDGRE6TYSDGSDRGSDFRG5Y50459I-34JPADJFGOJI54ARGAA== root@myhost4
        [node#3] vim ~/.ssh/authorized_keys
        ssh-rsa AAABOPI8AEJROIFAJPSDG09J[50459I-34JPADJFGOJI54ARGASD== root@myhost1
        ssh-rsa SWEOIHTEWIIASDF-34JPADJFGOJI54ARGA46ERAASDFE4TAADSSA== root@myhost2
        ssh-rsa E5YWERHTSDFGSAFSD50459I-34E4YTAETEARTJPADJFGOJI54ARA== root@myhost3
        ssh-rsa EW5YSDGRE6TYSDGSDRGSDFRG5Y50459I-34JPADJFGOJI54ARGAA== root@myhost4
        [node#4] vim ~/.ssh/authorized_keys
        ssh-rsa AAABOPI8AEJROIFAJPSDG09J[50459I-34JPADJFGOJI54ARGASD== root@myhost1
        ssh-rsa SWEOIHTEWIIASDF-34JPADJFGOJI54ARGA46ERAASDFE4TAADSSA== root@myhost2
        ssh-rsa E5YWERHTSDFGSAFSD50459I-34E4YTAETEARTJPADJFGOJI54ARA== root@myhost3
        ssh-rsa EW5YSDGRE6TYSDGSDRGSDFRG5Y50459I-34JPADJFGOJI54ARGAA== root@myhost4
      4. From the master node, try ssh to each slave once so the hosts get registered in ~/.ssh/known_hosts.
        1. ssh myhost2
          1. yes
        2. ssh myhost3
          1. yes
        3. ssh myhost4
          1. yes
      5. Done when you can ssh from the master node to each slave without any prompt.
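      6. A scripted alternative to sharing the keys by hand (a sketch only; it pushes just the master's public key to each slave, which is all the hadoop start scripts need, and it uses the example hostnames above):

        # run on the master node
        for h in myhost2 myhost3 myhost4; do
          cat ~/.ssh/id_rsa.pub | ssh root@$h 'cat >> ~/.ssh/authorized_keys'
        done
        # then confirm passwordless login works
        for h in myhost2 myhost3 myhost4; do ssh root@$h hostname; done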
    6. Turn iptables off for now (the next step shows how to keep it off across reboots); for a real service you would need to open the required ports individually.
      1. service iptables stop
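      2. On CentOS 6 the stop above only lasts until the next reboot; a small sketch for keeping the firewall off at boot as well:

        chkconfig iptables off     # do not start the iptables service at boot
        service iptables status    # should report that iptables is not running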
  2. java install

    1. java download and decompress
      1. cd /usr/local/src
      2. http://www.oracle.com/technetwork/java/javase/downloads/index.html
      3. tar -zxvf jdk-7u45-linux-x64.tar.gz
    2. java setting
      1. Install java under /usr/local/java
        1. mv jdk1.7.0_45 /usr/local
        2. cd /usr/local
        3. ln -s /usr/local/jdk1.7.0_45 java
      2. Setting it up with update-alternatives --install
        1. usage : alternatives --install <link> <name> <path> <priority>
          update-alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_45/bin/java 1
          update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_45/bin/javac 1
          update-alternatives --install /usr/bin/javaws javaws /usr/local/jdk1.7.0_45/bin/javaws 1
          update-alternatives --config java
          update-alternatives --config javac
          update-alternatives --config javaws
      3. Setting it up the conventional way, via environment variables
        1. Set JAVA_HOME and PATH in /etc/profile
          [node#1]$ vim /etc/profile
          export JAVA_HOME=/usr/local/java
          PATH=$JAVA_HOME/bin:$PATH

          [node#1]$ source /etc/profile
    3. Done when java -version prints something like the following.
      java version "1.7.0_45"
      Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
      Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
  3. hadoop install

    1. download, decompress, move
      1. http://apache.mirror.cdnetworks.com/hadoop/common/
      2. As of March 27, 2014, 2.2.0 is the latest stable version
      3. wget http://apache.mirror.cdnetworks.com/hadoop/common/stable2/hadoop-2.2.0.tar.gz
      4. cd /usr/local/src
      5. tar -zxvf hadoop-2.2.0.tar.gz
      6. mv hadoop-2.2.0 /usr/local/
      7. cd /usr/local
      8. ln -s /usr/local/hadoop-2.2.0/ hadoop
    2. hadoop setting
      1. Add the following to /etc/profile (vim /etc/profile), then run source /etc/profile
        export JAVA_HOME=/usr/local/java
        export HADOOP_HOME=/usr/local/hadoop
        export HADOOP_MAPRED_HOME=$HADOOP_HOME
        export HADOOP_COMMON_HOME=$HADOOP_HOME
        export HADOOP_HDFS_HOME=$HADOOP_HOME
        export YARN_HOME=$HADOOP_HOME
        export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
        export YARN_CONF_DIR=$HADOOP_CONF_DIR
        export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
        export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
        PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
      2. hadoop-env.sh 
        1. vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
          export JAVA_HOME=/usr/local/java
      3. core-site.xml
        1. vim /usr/local/hadoop/etc/hadoop/core-site.xml
          <configuration>
          <property>
          <name>fs.default.name</name>
          <value>hdfs://host013.com:9000</value>
          </property>
          </configuration>
      4. yarn-site.xml
        1. vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
          <configuration>
          <!-- Site specific YARN configuration properties -->
          <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
          </property>
          <property>
          <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
          </property>
          <property>
          <description>Classpath for typical applications.</description>
          <name>yarn.application.classpath</name>
          <value>
          $HADOOP_CONF_DIR,
          $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
          $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
          $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
          $YARN_HOME/*,$YARN_HOME/lib/*
          </value>
          </property>
          </configuration>


      5. mapred-site.xml
        1. mapred-site.xml ships as mapred-site.xml.template, so rename it first
          mv /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
        2. vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
          <configuration>
          <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
          </property>
          </configuration>
      6. hdfs-site.xml
        1. vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
          <configuration>
          <property>
          <name>dfs.replication</name>
          <value>3</value>
          </property>
          <property>
          <name>dfs.name.dir</name>
          <value>/home/data/hdfs/namenode</value>
          </property>
          <property>
          <name>dfs.data.dir</name>
          <value>/home/data/hdfs/datanode</value>
          </property>
          <property>
          <name>fs.hdfs.impl</name>
          <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
          <description>The FileSystem for hdfs: uris.</description>
          </property>
          </configuration>
      7. slaves
        1. vim /usr/local/hadoop/etc/hadoop/slaves
          host005.com
          host006.com
          host014.com
    3. Create the hadoop data storage
      1. Create the paths configured for dfs.name.dir and dfs.data.dir in hdfs-site.xml above.
        mkdir -p /home/data/hdfs/namenode
        mkdir -p /home/data/hdfs/datanode
      2. dfs.name.dir is used on the master node
      3. dfs.data.dir is used on the slave nodes (a one-shot sketch for all nodes follows)
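      4. A sketch for creating the directories on all nodes at once from the master (relies on the passwordless ssh set up earlier; creating both paths on every node is harmless):

        for h in host013.com host005.com host006.com host014.com; do
          ssh $h 'mkdir -p /home/data/hdfs/namenode /home/data/hdfs/datanode'
        done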
    4. cluster setting - copy the whole configured directory to all nodes.
      scp -r /usr/local/hadoop-2.2.0 host005.com:/usr/local/
      scp -r /usr/local/hadoop-2.2.0 host006.com:/usr/local/
      scp -r /usr/local/hadoop-2.2.0 host014.com:/usr/local/

      1. On each slave node, confirm the /usr/local/hadoop-2.2.0 directory is there, then create the symlink.
        cd /usr/local
        ln -s /usr/local/hadoop-2.2.0/ hadoop
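      2. The same copy and symlink can be done in one loop from the master (a sketch using the slave hostnames above):

        for h in host005.com host006.com host014.com; do
          scp -r /usr/local/hadoop-2.2.0 $h:/usr/local/
          ssh $h 'ln -s /usr/local/hadoop-2.2.0 /usr/local/hadoop'
        done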
    5. hdfs format
      1. Run on the namenode (master) only
        [node#1]$ hadoop namenode -format
        DEPRECATED: Use of this script to execute hdfs command is deprecated.
        Instead use the hdfs command for it.
        14/03/27 13:44:46 INFO namenode.NameNode: STARTUP_MSG:
        /************************************************************
        STARTUP_MSG: Starting NameNode
        STARTUP_MSG: host = host013.com/xxx.xxx.81.163
        STARTUP_MSG: args = [-format]
        STARTUP_MSG: version = 2.2.0
        STARTUP_MSG: classpath = blablabla....
        STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
        STARTUP_MSG: java = 1.7.0_45
        ************************************************************/
        14/03/27 13:44:46 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
        ...
        ...
        14/03/27 13:44:47 INFO util.ExitUtil: Exiting with status 0
        14/03/27 13:44:47 INFO namenode.NameNode: SHUTDOWN_MSG:
        /************************************************************
        SHUTDOWN_MSG: Shutting down NameNode at host013.com/xxx.xxx.81.163
        ************************************************************/
        [node#1]$ _
    6. hadoop run
      1. The start scripts are in HADOOP_HOME/sbin.
        [node#1]$ start-all.sh

        1. start-all.sh wraps start-dfs.sh and start-yarn.sh (yarn is used from hadoop 2 onward); they can also be run separately, as shown next.
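      2. The two layers can also be started and stopped separately (start-all.sh simply calls these in turn):

        [node#1]$ start-dfs.sh     # NameNode, SecondaryNameNode, DataNodes
        [node#1]$ start-yarn.sh    # ResourceManager, NodeManagers
        [node#1]$ stop-yarn.sh     # to shut down
        [node#1]$ stop-dfs.sh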
    7. Verification
      1. Check with jps
      2. hadoop dfsadmin -report
        [root@host013 hadoop]# hadoop dfsadmin -report
        DEPRECATED: Use of this script to execute hdfs command is deprecated.
        Instead use the hdfs command for it.
        14/04/02 23:43:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        Configured Capacity: 5707246288896 (5.19 TB)
        Present Capacity: 5416717225984 (4.93 TB)
        DFS Remaining: 5416716046336 (4.93 TB)
        DFS Used: 1179648 (1.13 MB)
        DFS Used%: 0.00%
        Under replicated blocks: 0
        Blocks with corrupt replicas: 0
        Missing blocks: 0
        -------------------------------------------------
        Datanodes available: 3 (3 total, 0 dead)
        Live datanodes:
        Name: xxx.xxx.81.156:50010 (host006.com)
        Hostname: host006.com
        Decommission Status : Normal
        Configured Capacity: 1902415429632 (1.73 TB)
        DFS Used: 393216 (384 KB)
        Non DFS Used: 96843001856 (90.19 GB)
        DFS Remaining: 1805572034560 (1.64 TB)
        DFS Used%: 0.00%
        DFS Remaining%: 94.91%
        Last contact: Wed Apr 02 23:43:02 KST 2014

        Name: xxx.xxx.81.164:50010 (host014.com)
        Hostname: host014.com
        Decommission Status : Normal
        Configured Capacity: 1902415429632 (1.73 TB)
        DFS Used: 393216 (384 KB)
        Non DFS Used: 96843018240 (90.19 GB)
        DFS Remaining: 1805572018176 (1.64 TB)
        DFS Used%: 0.00%
        DFS Remaining%: 94.91%
        Last contact: Wed Apr 02 23:43:02 KST 2014

        Name: xxx.xxx.81.155:50010 (host005.com)
        Hostname: host005.com
        Decommission Status : Normal
        Configured Capacity: 1902415429632 (1.73 TB)
        DFS Used: 393216 (384 KB)
        Non DFS Used: 96843042816 (90.19 GB)
        DFS Remaining: 1805571993600 (1.64 TB)
        DFS Used%: 0.00%
        DFS Remaining%: 94.91%
        Last contact: Wed Apr 02 23:43:02 KST 2014
      3. Web check
        1. In a browser, check hostip:8088, hostip:50070, etc.
        2. Use the netstat command to see which ports are listening
          [root@host013 hadoop]# netstat -ntpl|grep LISTEN
          tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 6157/java
          tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1219/sshd
          tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1295/master
          tcp 0 0 xxx.xxx.81.163:9000 0.0.0.0:* LISTEN 6157/java
          tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 6340/java
          tcp 0 0 :::3888 :::* LISTEN 9715/java
          tcp 0 0 :::22 :::* LISTEN 1219/sshd
          tcp 0 0 :::8088 :::* LISTEN 6502/java
          tcp 0 0 ::1:25 :::* LISTEN 1295/master
          tcp 0 0 :::8030 :::* LISTEN 6502/java
          tcp 0 0 :::8031 :::* LISTEN 6502/java
          tcp 0 0 ::ffff:xxx.xxx.81.163:60000 :::* LISTEN 10787/java
          tcp 0 0 :::8032 :::* LISTEN 6502/java
          tcp 0 0 :::8033 :::* LISTEN 6502/java
          tcp 0 0 :::2181 :::* LISTEN 9715/java
          tcp 0 0 :::60010 :::* LISTEN 10787/java
          tcp 0 0 :::45994 :::* LISTEN 9715/java
  4. zookeeper install

    1. download, decompress
      [node#1]$ tar -zxvf zookeeper-3.4.5.tar.gz
      [node#1]$ mv zookeeper-3.4.5 /usr/local/
      [node#1]$ cd /usr/local/
      [node#1]$ ln -s /usr/local/zookeeper-3.4.5/ zookeeper
    2. zookeeper setting
      1. zoo.cfg 
        [node#1]$ cd /usr/local/zookeeper
        [node#1]$ mv conf/zoo_sample.cfg conf/zoo.cfg
        [node#1]$ vim conf/zoo.cfg
        dataDir=/home/data/zookeeper/data ...
        server.1=host013.com:2888:3888
        server.2=host005.com:2888:3888
        server.3=host006.com:2888:3888
        server.4=host014.com:2888:3888

        1. dataDir is where zookeeper's data will be stored.
        2. An important change: in the server.x entries, x must be a number.
        3. You must also go to the dataDir path and create a file named myid containing that node's number x, for example:
          [root@host013 zookeeper]# echo 1 > /home/data/zookeeper/data/myid

          1. This must be set on every node; a per-node sketch follows.
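        4. A sketch for writing the matching myid on each node; the numbers follow the server.x entries in zoo.cfg above and it relies on the passwordless ssh set up earlier:

          ssh host013.com 'mkdir -p /home/data/zookeeper/data && echo 1 > /home/data/zookeeper/data/myid'
          ssh host005.com 'mkdir -p /home/data/zookeeper/data && echo 2 > /home/data/zookeeper/data/myid'
          ssh host006.com 'mkdir -p /home/data/zookeeper/data && echo 3 > /home/data/zookeeper/data/myid'
          ssh host014.com 'mkdir -p /home/data/zookeeper/data && echo 4 > /home/data/zookeeper/data/myid'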
    3. cluster setting - copy the whole configured directory to all nodes.
      scp -r /usr/local/zookeeper-3.4.5 host005.com:/usr/local/
      scp -r /usr/local/zookeeper-3.4.5 host006.com:/usr/local/
      scp -r /usr/local/zookeeper-3.4.5 host014.com:/usr/local/

      1. On each node, confirm the /usr/local/zookeeper-3.4.5 directory is there, then create the symlink.
        cd /usr/local
        ln -s /usr/local/zookeeper-3.4.5/ zookeeper
    4. zookeeper run
      1. zookeeper must be started on every node; unlike hadoop or hbase, starting it only on the master does not work correctly. (Or write a script to do it.)
        [node#x]$ bin/zkServer.sh start
    5. Verification
      1. Done when jps shows a "QuorumPeerMain" process.
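      2. Each node can also report its role in the quorum with the bundled status command (expect Mode: leader on exactly one node and Mode: follower on the rest):

        [node#x]$ /usr/local/zookeeper/bin/zkServer.sh status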
  5. HBase install

    1. download, decompress
      [node#1]$ tar -zxvf hbase-0.98.0-hadoop2-bin.tar.gz
      [node#1]$ mv hbase-0.98.0-hadoop2 /usr/local/

      [node#1]$ cd /usr/local/
      [node#1]$ ln -s /usr/local/hbase-0.98.0-hadoop2/ hbase

      1. Up to version 0.94 you could simply download and install hbase, but from 0.98 onward the binaries differ by hadoop version.
      2. The troubleshooting section includes a guide for installing hbase 0.98.0 from a source compile. (The hadoop-common.jar shipped in the downloaded lib was compiled on 32-bit, so on a 64-bit OS you have to rebuild it and swap the library.)
    2. HBase setting
      1. Edit /etc/profile, then apply it with source /etc/profile
        export HBASE_HOME=/usr/local/hbase
        PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
      2. hbase-env.sh
        1. vim /usr/local/hbase/conf/hbase-env.sh
          export JAVA_HOME=/usr/local/java
          export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop
          export HBASE_MANAGES_ZK=false
      3. regionservers
        1. vim /usr/local/hbase/conf/regionservers
          host005.com
          host006.com
          host014.com
        2. If you did not edit the /etc/hosts file, you must list IP addresses here instead.
      4. hbase-site.xml
        1. vim /usr/local/hbase/conf/hbase-site.xml
          <configuration>
          <property>
          <name>hbase.rootdir</name>
          <value>hdfs://host013.com:9000/hbase</value>
          </property>
          <property>
          <name>hbase.zookeeper.quorum</name>
          <value>host013.com,host005.com,host006.com,host014.com</value>
          </property>
          <property>
          <name>hbase.zookeeper.property.dataDir</name>
          <value>/home/data/zookeeper/data</value>
          </property>
          <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
          </property>
          <property>
          <name>hbase.dynamic.jars.dir</name>
          <value>/usr/local/hbase/lib</value>
          </property>
          </configuration>
    3. cluster setting - copy the whole configured directory to all nodes (regionservers).
      scp -r /usr/local/hbase-0.98.0-hadoop2 host005.com:/usr/local/
      scp -r /usr/local/hbase-0.98.0-hadoop2 host006.com:/usr/local/
      scp -r /usr/local/hbase-0.98.0-hadoop2 host014.com:/usr/local/

      1. On each node, confirm the /usr/local/hbase-0.98.0-hadoop2 directory is there, then create the symlink.
        cd /usr/local
        ln -s /usr/local/hbase-0.98.0-hadoop2/ hbase
    4. HBase run
      1. Start from the master
        [node#1]$ /usr/local/hbase/bin/start-hbase.sh
      2. Make sure the zookeeper processes are up before starting; a quick check is sketched below.
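        1. A quick check of all nodes from the master (a sketch; it uses the full jps path because a non-interactive ssh shell does not source /etc/profile):

          for h in host013.com host005.com host006.com host014.com; do
            echo -n "$h: "
            ssh $h '/usr/local/java/bin/jps | grep QuorumPeerMain || echo "zookeeper NOT running"'
          done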
    5. Verification
      1. Check for "HMaster" with jps
      2. Web check
        1. In a browser, masterHostIP:60010
      3. Run the hbase shell
        1. create 'testtable','columnfamily'
        2. list
        3. scan 'testtable'
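        4. A few more shell commands to confirm writes and reads end to end (a sketch; 'testtable' and 'columnfamily' are just the example names created above):
          put 'testtable', 'row1', 'columnfamily:q1', 'value1'
          get 'testtable', 'row1'
          disable 'testtable'
          drop 'testtable'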

