First steps
Now that the operator is installed, it is time to deploy a ZooKeeper cluster and connect to it.
Deploy ZooKeeper
The ZooKeeper cluster is deployed with a very simple resource definition.
Create a file called zookeeper.yaml:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  clusterConfig:
    listenerClass: external-unstable
  image:
    productVersion: 3.9.2
  servers:
    roleGroups:
      default:
        replicas: 3
and apply it:
kubectl apply -f zookeeper.yaml
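If you want to check that Kubernetes accepted the resource, you can list the ZookeeperCluster objects. This is an optional sketch; it assumes the plural resource name registered by the operator's CRD:
# List the ZookeeperCluster resources in the current namespace
kubectl get zookeeperclusters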
The operator creates a ZooKeeper cluster with three replicas. Use kubectl to observe the status of the cluster:
kubectl rollout status --watch --timeout=5m statefulset/simple-zk-server-default
The operator deploys readiness probes to make sure the replicas are ready and have established a quorum.
Only then is the StatefulSet actually marked as Ready.
You see:
partitioned roll out complete: 3 new pods have been updated...
The ZooKeeper cluster is now ready.
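If you would like to double-check, you can also list the pods of the cluster. The label selector below is an assumption based on the common app.kubernetes.io/instance label; if it does not match in your setup, simply run kubectl get pods without the selector:
# List the pods belonging to the simple-zk cluster (label selector is an assumption)
kubectl get pods -l app.kubernetes.io/instance=simple-zk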
Deploy a ZNode
ZooKeeper manages its data in a hierarchical node system.
You can look at the nodes using the zkCli tool.
It is included inside the Stackable ZooKeeper container, and you can invoke it using kubectl run:
kubectl run my-pod \
  --stdin --tty --quiet --restart=Never \
  --image docker.stackable.tech/stackable/zookeeper:3.9.2-stackable0.0.0-dev -- \
  bin/zkCli.sh -server simple-zk-server-default:2282 ls / > /dev/null && \
kubectl logs my-pod && \
kubectl delete pods my-pod
You might wonder why the logs are used instead of the output from kubectl run.
This is because kubectl run sometimes loses lines of the output, a known issue.
Among the log output you see the current list of nodes in the root directory /:
[zookeeper]
The zookeeper node contains ZooKeeper configuration data.
It is good practice to use separate nodes for different applications that use ZooKeeper, and the Stackable Operator uses ZNodes for this.
ZNodes are created with manifest files of the kind ZookeeperZnode.
Create a file called znode.yaml with the following contents:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-znode
spec:
  clusterRef:
    name: simple-zk
And apply it:
kubectl apply -f znode.yaml
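Just like with the cluster, you can verify that the resource was created; this again assumes the plural resource name from the operator's CRD:
# List the ZookeeperZnode resources in the current namespace
kubectl get zookeeperznodes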
Use the same command as before to list the nodes:
kubectl run my-pod \
  --stdin --tty --quiet --restart=Never \
  --image docker.stackable.tech/stackable/zookeeper:3.9.2-stackable0.0.0-dev -- \
  bin/zkCli.sh -server simple-zk-server-default:2282 ls / > /dev/null && \
kubectl logs my-pod && \
kubectl delete pods my-pod
and the ZNode has appeared in the output:
[znode-4e0a6098-057a-42cc-926e-276ea6305e09, zookeeper]
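You can also point zkCli directly at the new ZNode by appending its path to the server address. The following is just a variation of the command above, not an additional required step; replace the UUID suffix with the one from your own output:
kubectl run my-pod \
  --stdin --tty --quiet --restart=Never \
  --image docker.stackable.tech/stackable/zookeeper:3.9.2-stackable0.0.0-dev -- \
  bin/zkCli.sh -server simple-zk-server-default:2282/znode-4e0a6098-057a-42cc-926e-276ea6305e09 ls / > /dev/null && \
kubectl logs my-pod && \
kubectl delete pods my-pod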
The discovery ConfigMap
The operator creates a ConfigMap with connection information that has the same name as the ZNode - in this case simple-znode.
Have a look at it using:
kubectl describe configmap simple-znode
You see an output similar to this:
ZOOKEEPER:
----
simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2282,simple-zk-server-default-1.simple-zk-server-default.default.svc.cluster.local:2282/znode-2a9d12be-bfee-49dc-9030-2cb3c3dd80d3
ZOOKEEPER_CHROOT:
----
/znode-2a9d12be-bfee-49dc-9030-2cb3c3dd80d3
ZOOKEEPER_HOSTS:
----
simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2282,simple-zk-server-default-1.simple-zk-server-default.default.svc.cluster.local:2282
The ZOOKEEPER entry contains a ZooKeeper connection string that you can use to connect to this specific ZNode.
The ZOOKEEPER_CHROOT and ZOOKEEPER_HOSTS entries contain the node name and the hosts list respectively.
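If you only need a single value, for example in a script, you can read it directly from the ConfigMap with a JSONPath query; the key names are the ones shown in the output above:
# Print only the connection string, including the chroot path
kubectl get configmap simple-znode -o jsonpath='{.data.ZOOKEEPER}'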
You can mount these three entries into a pod and use them to connect to ZooKeeper at this specific ZNode and read and write data below it.
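As a rough sketch of how this can look, the following hypothetical Pod exposes all three entries as environment variables via envFrom; the pod name and the client image are placeholders, not part of the Stackable platform:
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: zk-client  # hypothetical example name
spec:
  containers:
    - name: client
      image: my-zookeeper-client:latest  # placeholder, replace with your application image
      envFrom:
        - configMapRef:
            name: simple-znode  # provides ZOOKEEPER, ZOOKEEPER_CHROOT and ZOOKEEPER_HOSTS
EOF
The application inside the container can then read the ZOOKEEPER environment variable and use it as its connection string.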
Great! This step concludes the Getting started guide. You have installed the ZooKeeper Operator and its dependencies and set up your first ZooKeeper cluster as well as your first ZNode.
What’s next
Have a look at the Usage guide to learn more about configuration options for your ZooKeeper cluster, such as setting up encryption or authentication. You can also take a look at the ZNodes page to learn more about ZNodes.