First steps

Once you have followed the steps in the Installation section to install the operator and its dependencies, you can deploy an HBase cluster and the services it depends on. Afterwards you can verify that it works by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).

Setup

ZooKeeper

To deploy a ZooKeeper cluster, create a file called zk.yaml with the following contents:

---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  image:
    productVersion: 3.8.0
    stackableVersion: 23.4.0
  servers:
    roleGroups:
      default:
        replicas: 1

We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper. Create another file called znode.yaml:

---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-znode
spec:
  clusterRef:
    name: simple-zk

Apply both of these files:

kubectl apply -f zk.yaml
kubectl apply -f znode.yaml
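
The operator also creates a discovery ConfigMap with the same name as the ZNode (simple-znode); this is what the HDFS and HBase cluster definitions below reference via zookeeperConfigMapName. If you are curious you can inspect it (the exact keys it contains depend on the operator version):

kubectl describe configmap simple-znode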

The state of the ZooKeeper cluster can be tracked with kubectl:

kubectl rollout status --watch statefulset/simple-zk-server-default --timeout=300s
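
You can also list the ZooKeeper pods directly. This sketch assumes the operator applies the standard app.kubernetes.io/instance label to the pods it creates:

kubectl get pods -l app.kubernetes.io/instance=simple-zk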

HDFS

An HDFS cluster has three components: the NameNode, the DataNode and the JournalNode. Create a file named hdfs.yaml defining two NameNodes, one DataNode and one JournalNode:

---
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple-hdfs
spec:
  image:
    productVersion: 3.3.4
    stackableVersion: 23.4.0
  clusterConfig:
    dfsReplication: 1
    zookeeperConfigMapName: simple-znode
  nameNodes:
    roleGroups:
      default:
        replicas: 2
  dataNodes:
    roleGroups:
      default:
        replicas: 1
  journalNodes:
    roleGroups:
      default:
        replicas: 1

Where:

  • metadata.name contains the name of the HDFS cluster

  • spec.image specifies the Docker image provided by Stackable: productVersion selects the Apache Hadoop version and stackableVersion selects the Stackable release of the image

Please note that it is not enough to specify only the version of Hadoop which you want to roll out; it has to be paired with a stackableVersion as shown. This Stackable version is the version of the underlying container image which is used to execute the processes. For a list of available versions please check our image registry. It should generally be safe to simply use the latest image version that is available.

Create the actual HDFS cluster by applying the file:

kubectl apply -f hdfs.yaml

Track the progress with kubectl as this step may take a few minutes:

kubectl rollout status --watch statefulset/simple-hdfs-datanode-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hdfs-namenode-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hdfs-journalnode-default --timeout=300s
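
As an optional sanity check you can ask HDFS itself for its view of the cluster. This is a sketch assuming the main container in the NameNode pod is named namenode and that the hdfs CLI sits at bin/hdfs in the working directory of the Stackable Hadoop image:

kubectl exec -n default -c namenode simple-hdfs-namenode-default-0 -- \
bin/hdfs dfsadmin -report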

HBase

You can now create the HBase cluster. Create a file called hbase.yaml containing the following:

---
apiVersion: hbase.stackable.tech/v1alpha1
kind: HbaseCluster
metadata:
  name: simple-hbase
spec:
  image:
    productVersion: 2.4.12
    stackableVersion: 23.4.0
  clusterConfig:
    hdfsConfigMapName: simple-hdfs
    zookeeperConfigMapName: simple-znode
  masters:
    roleGroups:
      default:
        replicas: 1
  regionServers:
    roleGroups:
      default:
        config:
          resources:
            cpu:
              min: 300m
              max: "3"
            memory:
              limit: 3Gi
        replicas: 1
  restServers:
    roleGroups:
      default:
        replicas: 1

Verify that it works

To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table. You will also use Phoenix to create, populate and query a second new table, before listing all tables in HBase, including the internal ones that Phoenix creates. These actions will be carried out from one of the HBase components, the REST server.

First, check the cluster version with this command:

kubectl exec -n default simple-hbase-restserver-default-0 -- \
curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/version/cluster"

This will return the version that was specified in the HBase cluster definition:

{"Version":"2.4.12"}

The cluster status can be checked and formatted like this:

kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/status/cluster" | json_pp

which will display cluster metadata that looks like this (only the first region is included for the sake of readability):

{
   "DeadNodes" : [],
   "LiveNodes" : [
      {
         "Region" : [
            {
               "currentCompactedKVs" : 0,
               "memStoreSizeMB" : 0,
               "name" : "U1lTVEVNLkNBVEFMT0csLDE2NjExNjA0NDM2NjcuYmYwMzA1YmM4ZjFmOGIwZWMwYjhmMGNjMWI5N2RmMmUu",
               "readRequestsCount" : 104,
               "rootIndexSizeKB" : 1,
               "storefileIndexSizeKB" : 1,
               "storefileSizeMB" : 1,
               "storefiles" : 1,
               "stores" : 1,
               "totalCompactingKVs" : 0,
               "totalStaticBloomSizeKB" : 0,
               "totalStaticIndexSizeKB" : 1,
               "writeRequestsCount" : 360
            },
            ...
         ],
         "heapSizeMB" : 351,
         "maxHeapSizeMB" : 11978,
         "name" : "simple-hbase-regionserver-default-0.simple-hbase-regionserver-default.default.svc.cluster.local:16020",
         "requests" : 395,
         "startCode" : 1661156787704
      }
   ],
   "averageLoad" : 43,
   "regions" : 43,
   "requests" : 1716
}
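
If you only need a single field from that document you can filter instead of pretty-printing, for example to extract the region count. This assumes jq is installed on the machine running kubectl:

kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/status/cluster" | jq '.regions'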

You can now create a table like this:

kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XPUT -H "Accept: text/xml" -H "Content-Type: text/xml" \
"http://simple-hbase-restserver-default:8080/users/schema" \
-d '<TableSchema name="users"><ColumnSchema name="cf" /></TableSchema>'

This creates a table called users with a single column family called cf. Its creation can be verified by listing all tables:

kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/" | json_pp
{
   "table" : [
      {
         "name" : "users"
      }
   ]
}
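
The REST API can also write and read data. The following sketch inserts a single cell into the users table and reads the row back; the row key, column and value are illustrative, and the HBase REST API expects them base64-encoded in the request body ("row1", "cf:name" and "alice" below):

kubectl exec -n default simple-hbase-restserver-default-0 -- \
curl -s -XPUT -H "Content-Type: application/json" \
"http://simple-hbase-restserver-default:8080/users/row1" \
-d '{"Row":[{"key":"cm93MQ==","Cell":[{"column":"Y2Y6bmFtZQ==","$":"YWxpY2U="}]}]}'

kubectl exec -n default simple-hbase-restserver-default-0 -- \
curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/users/row1"

The GET returns the same base64-encoded key, column and value.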

An alternative way to interact with HBase is to use the Phoenix library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory). Use the Python utility psql.py (found in /stackable/phoenix/bin) to create, populate and query a table called WEB_STAT:

kubectl exec -n default simple-hbase-restserver-default-0 -- \
/stackable/phoenix/bin/psql.py \
/stackable/phoenix/examples/WEB_STAT.sql \
/stackable/phoenix/examples/WEB_STAT.csv \
/stackable/phoenix/examples/WEB_STAT_QUERIES.sql

The queries in the final file, WEB_STAT_QUERIES.sql, will display some grouped data like this:

HO                    TOTAL_ACTIVE_VISITORS
-- ----------------------------------------
EU                                      150
NA                                        1
Time: 0.017 sec(s)
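
psql.py executes any file ending in .sql, so you can also run an ad-hoc query by writing one into the pod first. The statement below assumes the standard Phoenix example schema for WEB_STAT, which includes HOST and ACTIVE_VISITOR columns, and that /tmp is writable in the container:

kubectl exec -n default simple-hbase-restserver-default-0 -- sh -c \
'echo "SELECT HOST, SUM(ACTIVE_VISITOR) FROM WEB_STAT GROUP BY HOST;" > /tmp/query.sql && /stackable/phoenix/bin/psql.py /tmp/query.sql'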

Check the tables again with:

kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/" | json_pp

This time the list includes not just users (created above with the REST API) and WEB_STAT, but several other tables too:

{
   "table" : [
      {
         "name" : "SYSTEM.CATALOG"
      },
      {
         "name" : "SYSTEM.CHILD_LINK"
      },
      {
         "name" : "SYSTEM.FUNCTION"
      },
      {
         "name" : "SYSTEM.LOG"
      },
      {
         "name" : "SYSTEM.MUTEX"
      },
      {
         "name" : "SYSTEM.SEQUENCE"
      },
      {
         "name" : "SYSTEM.STATS"
      },
      {
         "name" : "SYSTEM.TASK"
      },
      {
         "name" : "WEB_STAT"
      },
      {
         "name" : "users"
      }
   ]
}

This is because Phoenix requires these SYSTEM.* tables for its own internal mapping mechanism, and they are created the first time Phoenix is used on the cluster.

What’s next

Look at the Usage guide to find out more about configuring your HBase cluster.