Apache ZooKeeper organizes all data into a hierarchical system of ZNodes. Compared to a traditional (POSIX-like) file system, ZNodes act as both files (they can hold data) and directories (they can contain other ZNodes).

To isolate different clients using the same ZooKeeper cluster, each client application should be assigned its own root ZNode, which it can then organize as it sees fit. The root ZNode acts like a namespace for that client and prevents clashes between different clients.
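For example, two products sharing one ensemble would each own a separate subtree (the paths below are purely illustrative):

```
/znode-kafka     <- root ZNode assigned to a Kafka instance
/znode-hdfs     <- root ZNode assigned to an HDFS instance
```

Each client connects with its root ZNode as a chroot, so it only ever sees its own subtree.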

The Stackable Operator for Apache ZooKeeper manages ZNodes using the ZookeeperZnode resource.

The operator connects directly to ZooKeeper to manage the ZNodes inside the ZooKeeper ensemble, so network access to the ZooKeeper Pods is necessary. If your Kubernetes cluster restricts network access, you need to configure a NetworkPolicy that allows the operator to connect to ZooKeeper.
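Such a NetworkPolicy could look roughly like the following sketch. The label selectors, namespace, and port are assumptions; adapt them to the labels and client port actually used in your cluster.

```yaml
# Hypothetical NetworkPolicy: allow ingress to the ZooKeeper Pods
# from the operator. Selectors and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-zookeeper-operator
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: zookeeper  # the ZooKeeper server Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector: {}  # narrow this to the operator's namespace
          podSelector:
            matchLabels:
              app.kubernetes.io/name: zookeeper-operator
      ports:
        - protocol: TCP
          port: 2181  # ZooKeeper client port (adjust if TLS is used)
```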

Configuring ZNodes

ZNodes are configured with the ZookeeperZnode CustomResource. When a ZookeeperZnode resource is created, the operator creates the corresponding tree in ZooKeeper; when the resource is deleted from Kubernetes, the data in ZooKeeper is deleted as well.

The operator automatically deletes the ZNode from the ZooKeeper cluster if the Kubernetes ZookeeperZnode object is deleted. Recreating the ZookeeperZnode object will not restore access to the data.

Here is an example of a ZookeeperZnode:

apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: example-znode        (1)
spec:
  clusterRef:
    name: zookeeper-cluster  (2)
    namespace: my-namespace  (3)
1 The name of the ZNode in ZooKeeper. It is the same as the name of the Kubernetes resource.
2 Reference to the ZookeeperCluster object where the ZNode should be created.
3 The namespace of the ZookeeperCluster. Can be omitted and will default to the namespace of the ZNode object.

When a ZNode is created, the operator creates the required tree in ZooKeeper as well as a discovery ConfigMap with a discovery profile for this ZNode. Other operators use this discovery ConfigMap to configure clients with access to the ZNode.
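The discovery ConfigMap might look roughly like the sketch below. The key name, hostnames, port, and generated ZNode path are assumptions for illustration; consult the discovery documentation for the exact contents produced by your operator version.

```yaml
# Hypothetical discovery ConfigMap for the example-znode above.
# The connection string carries the generated ZNode path as a chroot,
# so clients are automatically confined to their subtree.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-znode
  namespace: my-namespace
data:
  ZOOKEEPER: zookeeper-cluster-server-default-0.my-namespace.svc.cluster.local:2181/znode-example
```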

The operator does not manage the contents of the ZNode.

Creating a ZNode per dependant

To ensure that a product that uses ZooKeeper runs smoothly, make sure that each Stacklet or product instance operates on its own ZNode. For example, a Kafka and a Hadoop cluster should not share the same ZNode, and no two Kafka instances should share the same ZNode either.
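One ZookeeperZnode per product instance can be declared against the same ZookeeperCluster, for example (resource names are illustrative):

```yaml
# Two separate ZNodes on the same ensemble, one per product instance.
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: kafka-znode          # used only by the Kafka Stacklet
spec:
  clusterRef:
    name: zookeeper-cluster
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: hdfs-znode           # used only by the HDFS Stacklet
spec:
  clusterRef:
    name: zookeeper-cluster
```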

Have a look at the Isolating clients with ZNodes guide for hands-on instructions on how to set up multiple ZNodes for different Stacklets.

Split responsibilities for ZooKeeper and ZNodes

One reason for configuring ZNodes with separate resources, instead of specifying them inside the ZookeeperCluster itself, is to allow different people in an organization to manage them independently.

The ZookeeperCluster might be under the responsibility of a cluster administrator, and access control might prevent anyone from creating or modifying the ZookeeperCluster.

ZNodes, however, are product-specific and need to be managed by product teams that do not have cluster-wide administration rights.
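This split can be expressed with standard Kubernetes RBAC. The sketch below grants a product team full control over ZookeeperZnode objects in their namespace while leaving the ZookeeperCluster untouched; the role name and namespace are assumptions.

```yaml
# Hypothetical Role: product team may manage ZNodes in their namespace,
# but has no permissions on the ZookeeperCluster itself.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: znode-editor
  namespace: my-namespace
rules:
  - apiGroups: ["zookeeper.stackable.tech"]
    resources: ["zookeeperznodes"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Bind the Role to the product team with a RoleBinding; the ZookeeperCluster remains manageable only by whoever the cluster administrator authorizes.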

What’s next

Have a look at the usage guide for ZNodes: Isolating clients with ZNodes or the CRD reference for the ZookeeperZnode CustomResource.