First steps
With the operators from the Installation section installed, you can now deploy a Kafka cluster and the required dependencies. Afterwards you can verify that it works by producing test data into a topic and consuming it again.
Setup
Two things need to be installed to create a Kafka cluster:

- A ZooKeeper instance for internal use by Kafka
- The Kafka cluster itself
Create them in this order by applying the corresponding manifest files. The operators you just installed then create the resources according to the manifests.
ZooKeeper
Create a file named zookeeper.yaml with the following content:

---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  image:
    productVersion: 3.9.2
  servers:
    roleGroups:
      default:
        replicas: 1
and apply it:
kubectl apply -f zookeeper.yaml
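Optionally, you can wait for the ZooKeeper server to become ready before moving on. This is a small convenience sketch that assumes the operator creates a StatefulSet named simple-zk-server-default (the <cluster>-<role>-<rolegroup> pattern, matching the output shown further below):

kubectl rollout status statefulset/simple-zk-server-default --timeout=300s  # name assumes the default role group shown later in this guide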
Create a file named kafka-znode.yaml with the following content:

---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-kafka-znode
spec:
  clusterRef:
    name: simple-zk
and apply it:
kubectl apply -f kafka-znode.yaml
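The ZNode is what the Kafka cluster references later via zookeeperConfigMapName. If you want to confirm that a discovery ConfigMap was created for it (assuming it carries the same name as the ZookeeperZnode, which is how kafka.yaml below references it), you can run:

kubectl get configmap simple-kafka-znode  # assumes the discovery ConfigMap is named after the ZNode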
Kafka
Create a file named kafka.yaml with the following contents:

---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.7.1
  clusterConfig:
    tls:
      serverSecretClass: null
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    config:
      bootstrapListenerClass: external-unstable # This exposes your Stacklet outside of Kubernetes. Remove this property if this is not desired
      brokerListenerClass: external-unstable # This exposes your Stacklet outside of Kubernetes. Remove this property if this is not desired
    roleGroups:
      default:
        replicas: 3
and apply it:
kubectl apply --server-side -f kafka.yaml
This creates the actual Kafka instance.
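The brokers take a moment to start. As with ZooKeeper, you can optionally wait for the broker StatefulSet to finish rolling out; this assumes it is named simple-kafka-broker-default, as shown in the next step:

kubectl rollout status statefulset/simple-kafka-broker-default --timeout=300s  # name assumes the default role group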
Verify that it works
Next you produce data into a topic and read it back via kcat. Depending on your platform you may need to replace kcat in the commands below with kafkacat.
First, make sure that all the Pods in the StatefulSets are ready:
kubectl get statefulset
The output should show all pods ready:
NAME                          READY   AGE
simple-kafka-broker-default   3/3     5m
simple-zk-server-default      1/1     7m
Then, create a port-forward for the Kafka Broker:
kubectl port-forward svc/simple-kafka-broker-default-bootstrap 9092 2>&1 >/dev/null &
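Before producing any data, you can optionally check that the connection works by listing the cluster metadata through the port-forward; kcat's -L flag prints the brokers and topics it can see, and no topic has to exist yet:

kcat -b localhost:9092 -L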
Create a file containing some data:
echo "some test data" > data
Write that data:
kcat -b localhost:9092 -t test-data-topic -P data
Read that data:
kcat -b localhost:9092 -t test-data-topic -C -e > read-data.out
Check the content:
grep "some test data" read-data.out
And clean up:
rm data
rm read-data.out
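The port-forward started earlier is still running in the background. Assuming it is the only job you backgrounded in this shell session, you can stop it with:

kill %1  # assumes the port-forward is background job number 1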
You successfully created a Kafka cluster and produced and consumed data.
What’s next
Have a look at the Usage guide page to find out more about the features of the Kafka Operator.