Apache ZooKeeper organizes all data into a hierarchical system of ZNodes. Compared to a traditional (POSIX-like) file system, a ZNode acts as both a file (it can have data associated with it) and a folder (it can contain other ZNodes).
In order to isolate different clients using the same ZooKeeper cluster, each client application should be assigned a unique root ZNode, which it can then organize as it sees fit. This can be thought of like a namespace for that client, and prevents clashes between different clients.
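In plain ZooKeeper terms, such a per-client root is typically addressed with a chroot suffix on the connection string, so every path the client creates or reads is resolved relative to that ZNode. A minimal sketch (the host name and ZNode path below are illustrative, not values generated by the operator):

[source,text]
----
# Without a chroot: the client sees the whole tree
zookeeper-cluster.my-namespace.svc.cluster.local:2181

# With a chroot: the client is confined to its own root ZNode
zookeeper-cluster.my-namespace.svc.cluster.local:2181/clients/app-a
----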
The Stackable Operator for Apache ZooKeeper manages ZNodes using the ZookeeperZnode resource.
[source,yaml]
----
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: example-znode # <1>
spec:
  clusterRef:
    name: zookeeper-cluster # <2>
    namespace: my-namespace # <3>
----
<1> The name of the ZNode in ZooKeeper. It is the same as the name of the Kubernetes resource.
<2> Reference to the ZookeeperCluster in which the ZNode is created.
<3> The namespace of the ZookeeperCluster.
IMPORTANT: It is the responsibility of the user to ensure that ZNodes are not shared between products. For example, a Kafka and a Hadoop cluster should not share the same ZNode.
When a ZookeeperZnode is created, the operator creates the required tree in ZooKeeper, as well as a discovery ConfigMap with a discovery profile for this ZNode. Other operators use this discovery ConfigMap to configure clients with access to the ZNode.
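The exact keys in the discovery ConfigMap depend on the operator version in use; the sketch below only illustrates the general shape for the example resource above (the resource name, key name, and generated ZNode path are assumptions):

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-znode        # assumed to match the ZookeeperZnode resource name
  namespace: my-namespace
data:
  # Illustrative: a connection string for the cluster with the generated
  # ZNode path appended as a chroot, so clients only see their own subtree
  ZOOKEEPER: zookeeper-cluster.my-namespace.svc.cluster.local:2181/znode-…
----

Clients configured from this ConfigMap never need to know the generated ZNode path; they simply use the connection string as given.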
The operator does not manage the contents of the ZNode.
The operator automatically deletes the ZNode from the ZooKeeper cluster if the Kubernetes resource is deleted.
One reason for using separate resources to configure ZNodes, instead of specifying them inside the ZookeeperCluster itself, is to allow different people in an organization to manage them independently.
The ZookeeperCluster might be under the responsibility of a cluster administrator, and access control might prevent anyone from creating or modifying the ZookeeperCluster.
ZNodes, however, are product-specific and need to be managed by product teams that do not have cluster-wide administration rights.
Have a look at the usage guide for ZNodes: Isolating clients with ZNodes