Resource requests
Stackable operators handle resource requests slightly differently than plain Kubernetes. Resource requests are defined at the role or role-group level. At the role level this means that, by default, all workers use the same resource requests and limits. These can be refined at the role-group level (which takes priority over the role level) to apply different resources.
Here is an example of how to specify CPU and memory resources using the Stackable custom resources:
```yaml
---
apiVersion: example.stackable.tech/v1alpha1
kind: ExampleCluster
metadata:
  name: example
spec:
  workers: # role-level
    config:
      resources:
        cpu:
          min: 300m
          max: 600m
        memory:
          limit: 3Gi
    roleGroups: # role-group-level
      resources-from-role: # role-group 1
        replicas: 1
      resources-from-role-group: # role-group 2
        replicas: 1
        config:
          resources:
            cpu:
              min: 400m
              max: 800m
            memory:
              limit: 4Gi
```
In this case, the role group `resources-from-role` inherits the resources specified at the role level, resulting in a maximum of 3Gi memory and 600m CPU. The role group `resources-from-role-group` has a maximum of 4Gi memory and 800m CPU (which overrides the role-level CPU resources).
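The precedence rule above can be sketched as a simple merge: role-group settings win over role settings, section by section. This is an illustrative sketch of the described behavior, not the operator's actual implementation.

```python
# Sketch of role / role-group resource precedence: values defined on a
# role group override the role-level defaults; anything the role group
# leaves out is inherited from the role.

def merge_resources(role: dict, role_group: dict) -> dict:
    """Return the effective resources for a role group."""
    merged = {}
    for section in set(role) | set(role_group):
        # role-group entries override role entries within each section
        merged[section] = {**role.get(section, {}), **role_group.get(section, {})}
    return merged

role = {"cpu": {"min": "300m", "max": "600m"}, "memory": {"limit": "3Gi"}}

# resources-from-role defines no resources, so it inherits everything:
print(merge_resources(role, {}))

# resources-from-role-group overrides both CPU and memory:
group = {"cpu": {"min": "400m", "max": "800m"}, "memory": {"limit": "4Gi"}}
print(merge_resources(role, group))
```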
For Java products, the actual usable heap memory is lower than the specified memory limit, because other processes in the container also require memory to run. Currently, 80% of the specified memory limit is passed to the JVM.
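The 80% rule translates into a simple calculation. The function below is a hypothetical illustration of the arithmetic (the operator's exact unit handling is not shown here):

```python
# Illustration of the "80% of the memory limit goes to the JVM heap" rule.
# Input is the container memory limit in MiB; output is a -Xmx flag.

def jvm_heap_from_limit(limit_mib: int, heap_factor: float = 0.8) -> str:
    """Derive a JVM max-heap flag from a container memory limit in MiB."""
    return f"-Xmx{int(limit_mib * heap_factor)}m"

print(jvm_heap_from_limit(1024))  # a 1Gi limit yields -Xmx819m
```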
For memory, only a limit can be specified; it is set as both the memory request and the memory limit in the container. This guarantees the container the full amount of memory during Kubernetes scheduling.
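For a role group configured with the values from the example above, the generated container resources would look along these lines (a sketch using standard Kubernetes Pod spec fields; the exact generated manifest may differ):

```yaml
# Sketch of the container resources derived from cpu.min/cpu.max and
# memory.limit; note that the memory request equals the memory limit.
resources:
  requests:
    cpu: 400m      # from cpu.min
    memory: 4Gi    # request is set equal to the limit
  limits:
    cpu: 800m      # from cpu.max
    memory: 4Gi    # from memory.limit
```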
If no resources are configured explicitly, the HBase operator uses the following defaults.
A minimal HA setup consisting of 2 masters, 2 regionservers, and 2 restservers has the following resource requirements:
- 2700m CPU request
- 7800m CPU limit
- 5888Mi memory request and limit
Corresponding to the values above, the operator uses the following resource defaults:
```yaml
spec:
  masters:
    config:
      resources:
        cpu:
          min: 250m
          max: "1"
        memory:
          limit: 1Gi
  regionServers:
    config:
      resources:
        cpu:
          min: 250m
          max: "1"
        memory:
          limit: 1Gi
  restServers:
    config:
      resources:
        cpu:
          min: 100m
          max: 400m
        memory:
          limit: 512Mi
```
The default values are most likely not sufficient to run a proper cluster in production. You need to adjust them according to your requirements.
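To override the defaults, set the resources on the role (or a role group) in your cluster definition, just as in the worker example earlier. The values below are purely illustrative, not a sizing recommendation:

```yaml
# Hypothetical production-sized override for the regionServers role;
# role groups without their own config inherit these values.
spec:
  regionServers:
    config:
      resources:
        cpu:
          min: "2"
          max: "4"
        memory:
          limit: 8Gi
```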
For more details regarding Kubernetes CPU limits, see the Kubernetes documentation: Assign CPU Resources to Containers and Pods.