ListenerClass

A ListenerClass defines a category of listeners and how to expose them in your specific Kubernetes environment. Think of it as a policy that says "when an application asks for 'external-stable' networking, here’s how we provide it in this cluster".

Common Examples

Cloud Environment (GKE, EKS, AKS and others)

In managed cloud environments, you typically want to use LoadBalancer Services, since nodes are short-lived:

---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: external-stable
spec:
  serviceType: LoadBalancer

On-Premise Environment

In on-premise clusters with stable, long-lived nodes, a NodePort Service is often preferred; such clusters also frequently lack the infrastructure needed for LoadBalancer Services:

---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: external-stable
spec:
  serviceType: NodePort

Internal-Only Services / Additional Service Annotations

Sometimes additional annotations must be added to a Service, for example to restrict a LoadBalancer to internal traffic. How this is accomplished depends on the cloud provider; on GKE it requires the networking.gke.io/load-balancer-type annotation:

---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: internal
spec:
  serviceType: LoadBalancer
  serviceAnnotations:
    networking.gke.io/load-balancer-type: Internal

Default ListenerClasses

The Stackable Data Platform expects these three ListenerClasses to exist:

cluster-internal

Used for internal cluster communication (e.g., ZooKeeper nodes talking to each other)

external-unstable

Used for external access where clients discover addresses dynamically and no stable address is required (e.g., individual Kafka brokers)

external-stable

Used for external access where clients need predictable addresses (e.g., Kafka bootstrap servers, Web UIs)
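
Products request one of these classes by name. As a sketch of how that looks (the Listener spec fields shown here are illustrative assumptions, not verified against a specific operator version), a standalone Listener requesting the external-stable class might be written as:

---
apiVersion: listeners.stackable.tech/v1alpha1
kind: Listener
metadata:
  name: kafka-bootstrap
spec:
  className: external-stable  # must match an existing ListenerClass
  ports:
    - name: kafka
      port: 9092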

Presets

To help users get started, the Stackable Listener Operator ships different ListenerClass presets for different environments. These are configured using the preset Helm value, with stable-nodes being the default.

Installation Commands

For cloud environments:

helm install listener-operator oci://oci.stackable.tech/sdp-charts/listener-operator --set preset=ephemeral-nodes

For clusters with stable nodes:

helm install listener-operator oci://oci.stackable.tech/sdp-charts/listener-operator --set preset=stable-nodes

To define your own ListenerClasses:

helm install listener-operator oci://oci.stackable.tech/sdp-charts/listener-operator --set preset=none
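
With preset=none, you must create the three expected ListenerClasses yourself. As one possible definition, this mirrors what the stable-nodes preset would create:

---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: cluster-internal
spec:
  serviceType: ClusterIP
---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: external-unstable
spec:
  serviceType: NodePort
---
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: external-stable
spec:
  serviceType: NodePort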

What Each Preset Creates

Both stable-nodes and ephemeral-nodes create the same three ListenerClasses that Stackable operators expect, but with different service types:

ListenerClass Name   stable-nodes   ephemeral-nodes
cluster-internal     ClusterIP      ClusterIP
external-unstable    NodePort       NodePort
external-stable      NodePort       LoadBalancer

Why the Difference?

  • stable-nodes: Uses NodePort for external access and pins pods to specific nodes for address stability.

    This creates a dependency on specific nodes. If a pinned node becomes unavailable, the pod cannot start on other nodes until you either restore the node or manually delete the PVC to allow rescheduling.

    To recover from node failures:
    1. kubectl delete pvc <listener-pvc-name> - Allows the pod to reschedule (address may change)
    2. Or restore/replace the failed node with the same identity

    This preset does not require any particular networking setup, but is best suited for environments with reliable, long-lived nodes.

  • ephemeral-nodes: Uses LoadBalancer for stable external access, allowing pods to move freely between nodes, but requires LoadBalancer infrastructure.

Managed cloud environments generally provide an integrated LoadBalancer controller. For on-premise environments, an external implementation such as MetalLB can be used.

K3s' built-in ServiceLB (Klipper) is not recommended, because it does not allow multiple Services to bind the same port. If you use ServiceLB, use the stable-nodes preset instead.
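
For on-premise clusters, a minimal MetalLB layer-2 setup might look like the following. This is a hedged sketch: the address range is a placeholder and must be adjusted to a range that is free on your network.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250  # example range, adjust to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool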

Creating Custom ListenerClasses

If these presets are inadequate, you can create custom ListenerClasses. The key is understanding your environment’s requirements.

Choosing the Right Service Type

ClusterIP

  • Use for: Internal cluster communication only

  • Access: Only from within the Kubernetes cluster

  • Address: Cluster-internal IP address

NodePort

  • Use for: External access (from outside the Kubernetes cluster) in environments with stable nodes

  • Access: From outside the cluster via <NodeIP>:<NodePort>

  • Behavior: Pins pods to specific nodes for address stability

NodePort services may expose your applications to the internet if your Kubernetes nodes have public IP addresses. Ensure you understand your cluster’s network topology and have appropriate firewall rules in place.

When using NodePort, service addresses depend on specific nodes: pods bound to NodePort listeners are pinned to one node for address stability. If that node becomes unavailable, the service may be unreachable until the pod can be rescheduled, which can change the service address. If this behavior is undesirable, consider using LoadBalancer instead.

LoadBalancer

  • Use for: External access in environments without stable nodes or other reasons for a LoadBalancer

  • Access: From outside the cluster via dedicated load balancer

  • Behavior: Allows pods to move freely between nodes

  • Requirements: Kubernetes cluster must have a LoadBalancer controller

  • Cost: Cloud providers typically charge for load balancer usage

Advanced Configuration

Custom Load Balancer Classes

Kubernetes supports using multiple different load balancer types in the same cluster by configuring a unique load-balancer class for each provider.

The Stackable Listener Operator supports custom classes via the ListenerClass.spec.loadBalancerClass field.

loadBalancerClass is only respected when using the LoadBalancer service type; otherwise the field is ignored.

apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: my-custom-lb
spec:
  serviceType: LoadBalancer
  loadBalancerClass: "example.com/my-loadbalancer"

Disabling NodePort Allocation

By default, LoadBalancer services also create NodePorts.

This can be disabled using the ListenerClass.spec.loadBalancerAllocateNodePorts field.

loadBalancerAllocateNodePorts is only respected when using the LoadBalancer service type; otherwise the field is ignored.

apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: lb-no-nodeports
spec:
  serviceType: LoadBalancer
  loadBalancerAllocateNodePorts: false

Address Types

Control whether IP addresses or hostnames are provided in the Listener status and filesystem:

IP

Returns IP addresses (most widely compatible, but less predictable, especially for ClusterIP services)

Hostname

Returns DNS hostnames (requires proper DNS setup)

HostnameConservative

(default) Uses hostnames for LoadBalancer/ClusterIP, IPs for NodePort

LoadBalancer and ClusterIP services typically have reliable DNS names, but node hostnames may not be resolvable by external clients, so NodePort services get IP addresses instead.

If the preferred address type is not supported in a given environment, another type is used instead.

apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: hostname-preferred
spec:
  serviceType: LoadBalancer
  preferredAddressType: Hostname

Adding Service Annotations

Many cloud providers require specific annotations for advanced features:

apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: aws-internal-nlb
spec:
  serviceType: LoadBalancer
  serviceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Frequently Asked Questions

Why aren’t ListenerClasses namespace-scoped?

ListenerClasses are intentionally cluster-scoped to encourage separation of concerns between platform administrators (who understand infrastructure) and application developers (who choose policies). While this limits flexibility for application-specific customizations, it promotes networking standardization across the cluster.

If you need more granular control, consider creating additional ListenerClasses or using the none preset for full customization.

My pods won’t start after a node failure - what do I do?

If you’re using the stable-nodes preset (or custom NodePort ListenerClasses), pods may get stuck when their pinned node becomes unavailable.

Quick fix:

# Find the stuck PVC
kubectl get pvc | grep listener-

# Delete it to allow rescheduling (address may change)
kubectl delete pvc <listener-pvc-name>

For more details on why this happens and prevention strategies, see the preset details section.