Resources

Storage for data volumes

You can mount volumes where data is stored by specifying PersistentVolumeClaims for each individual role group.

By default, each Pod has one volume mount with 10Gi capacity and storage type Disk:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            data:
              count: 1
              capacity: 10Gi
              hdfsStorageType: Disk

These defaults can be overridden individually:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            data:
              capacity: 128Gi

In the above example, all DataNodes in the default group store data (the location of dfs.datanode.data.dir) on a 128Gi volume.

Multiple storage volumes

DataNodes can have multiple disks attached to increase both storage size and speed. The disks can be of different types, e.g. HDDs or SSDs.

You can configure multiple PersistentVolumeClaims (PVCs) for the datanodes as follows:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            my-disks:
              count: 3
              capacity: 12Ti
              hdfsStorageType: Disk
            my-ssds:
              count: 2
              capacity: 5Ti
              storageClass: premium-ssd
              hdfsStorageType: SSD
            # The default "data" PVC is still created.
            # If this is not desired then the count must be set to 0.
            data:
              count: 0

This creates the following PVCs:

  1. my-disks-hdfs-datanode-default-0 (12Ti)

  2. my-disks-1-hdfs-datanode-default-0 (12Ti)

  3. my-disks-2-hdfs-datanode-default-0 (12Ti)

  4. my-ssds-hdfs-datanode-default-0 (5Ti)

  5. my-ssds-1-hdfs-datanode-default-0 (5Ti)

By defining and referencing a dedicated StorageClass, you can configure HDFS to use local disks attached to the Kubernetes nodes.
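What such a StorageClass looks like depends on your cluster. As a minimal sketch, assuming statically provisioned local PersistentVolumes (the name local-hdfs and the no-provisioner setup are illustrative, not operator defaults), it could be defined like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdfs                          # illustrative name
provisioner: kubernetes.io/no-provisioner   # static local PersistentVolumes, no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer     # bind only once the consuming Pod is scheduled

The role group then references it via storageClass:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            data:
              capacity: 10Gi
              storageClass: local-hdfs      # PVCs for this role group use the local disks

With WaitForFirstConsumer, each PersistentVolumeClaim is only bound to a local PersistentVolume on the node where the DataNode Pod is scheduled.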

You might need to re-create the StatefulSet to apply the new PVC configuration, because Kubernetes does not allow the volumeClaimTemplates of an existing StatefulSet to be changed. You can delete the StatefulSet with kubectl delete statefulsets --cascade=orphan <statefulset>; the hdfs-operator then recreates the StatefulSet automatically.

Resource Requests

Stackable operators handle resource requests in a slightly different manner than Kubernetes. Resource requests are defined on the role or role group level. On the role level this means that by default, all workers will use the same resource requests and limits. This can be further specified on the role group level (which takes priority over the role level) to apply different resources.

This is an example of how to specify CPU and memory resources using the Stackable Custom Resources:

---
apiVersion: example.stackable.tech/v1alpha1
kind: ExampleCluster
metadata:
  name: example
spec:
  workers: # role-level
    config:
      resources:
        cpu:
          min: 300m
          max: 600m
        memory:
          limit: 3Gi
    roleGroups: # role-group-level
      resources-from-role: # role-group 1
        replicas: 1
      resources-from-role-group: # role-group 2
        replicas: 1
        config:
          resources:
            cpu:
              min: 400m
              max: 800m
            memory:
              limit: 4Gi

In this case, the role group resources-from-role will inherit the resources specified on the role level, resulting in a maximum of 3Gi memory and 600m CPU resources.

The role group resources-from-role-group has a maximum of 4Gi memory and 800m CPU resources (which overrides the role CPU resources).

For Java products, the heap memory actually used is lower than the specified memory limit, because other processes in the container also require memory to run. Currently, 80% of the specified memory limit is passed to the JVM.
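For example (illustrative values, applying the 80% factor described above), a 2Gi memory limit leaves the JVM with roughly 1.6Gi (about 1638Mi) of heap:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          memory:
            limit: 2Gi  # roughly 80% of this, about 1638Mi, is available to the JVM as heap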

For memory, only a limit can be specified, which will be set as both the memory request and the memory limit in the container. This guarantees the container the full amount of memory during Kubernetes scheduling.
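As an illustration (not literal operator output), a memory limit of 512Mi therefore results in container resources equivalent to:

resources:
  requests:
    memory: 512Mi  # the request is set to the same value as the limit
  limits:
    memory: 512Mi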

A minimal HA setup consisting of 3 journalnodes, 2 namenodes and 2 datanodes has the following resource requirements:

  • 2950m CPU request

  • 8300m CPU limit

  • 6528Mi memory request and limit

  • 27648Mi persistent storage

These values are based on the following resource defaults used by the operator:

spec:
  journalNodes:
    config:
      resources:
        cpu:
          min: 100m
          max: 400m
        memory:
          limit: 512Mi
        storage:
          data:
            capacity: 1Gi
  nameNodes:
    config:
      resources:
        cpu:
          min: 250m
          max: 1000m
        memory:
          limit: 1024Mi
        storage:
          data:
            capacity: 2Gi
  dataNodes:
    config:
      resources:
        cpu:
          min: 100m
          max: 400m
        memory:
          limit: 512Mi
        storage:
          data:
            capacity: 10Gi
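As a cross-check, the persistent storage figure above follows directly from these defaults: 3 × 1Gi (journalNodes) + 2 × 2Gi (nameNodes) + 2 × 10Gi (dataNodes) = 27Gi = 27648Mi.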