This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
In SUSE Cloud Application Platform, containers have predefined memory limits and request sizes. Depending on the workload, these may need to be adjusted.
By default, memory limits and request sizes are enabled. To disable them, add
the following block to your kubecf-config-values.yaml file.
features:
  memory_limits:
    enabled: false
To enable memory limits again, update the above block in your
kubecf-config-values.yaml so that enabled is set to
true.
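For example, to re-enable the feature, the same block would read:

```yaml
features:
  memory_limits:
    enabled: true
```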
After making the change above, and any other configuration changes, apply the update by doing the following:
For an initial deployment, continue to the deployment steps for your platform:
For SUSE CaaS Platform, see Section 4.13, “Deploying SUSE Cloud Application Platform”.
For Microsoft AKS, see Section 5.13, “Deploying SUSE Cloud Application Platform”.
For Amazon EKS, see Section 6.13, “Deploying SUSE Cloud Application Platform”.
For Google GKE, see Section 7.14, “Deploying SUSE Cloud Application Platform”.
For an existing deployment, use helm upgrade to apply
the change.
tux > helm upgrade kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml \
--version 2.7.13
Configuring memory limits and request sizes requires that
features.memory_limits is enabled. The default memory limits
and request sizes can be found by examining the resources
block at
https://github.com/SUSE/kubernetes-charts-suse-com/blob/master/stable/kubecf/config/resources.yaml.
To configure memory limits and request sizes, add a
resources block to your kubecf-config-values.yaml. It contains a
mapping of instance groups to jobs to processes. Each process then contains a
resource definition with limits and requests. All values are integers and
represent the number of mebibytes (Mi) for the given limit or request. A fully
expanded tree looks like this:
resources:
  some_ig:
    some_job:
      some_process:
        memory:
          limit: ~
          request: ~
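As a concrete illustration (the instance group, job, and process names here are placeholders, not necessarily part of your deployment), setting a 512 Mi limit and a 128 Mi request for a single process would look like this:

```yaml
resources:
  some_ig:
    some_job:
      some_process:
        memory:
          limit: 512    # hard memory limit, in Mi
          request: 128  # scheduler request, in Mi
```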
Each level can define a $defaults resource definition that is
applied to all processes below it that do not have their own
definition (or a default further down the tree, closer to them):
resources:
  '$defaults':
    memory:
      limit: ~
      request: ~
  some_ig:
    '$defaults': { ... }
    some_job:
      '$defaults': { ... }
      some_process: ~
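For instance, a deployment-wide default combined with a per-process override could look like this (the names and values are illustrative only):

```yaml
resources:
  '$defaults':
    memory:
      limit: 64       # applies to every process without its own definition
      request: ~
  some_ig:
    some_job:
      some_process:
        memory:
          limit: 256  # overrides the deployment-wide default for this process
          request: ~
```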
For convenience, a $defaults value can be just an integer. This:
resources:
  '$defaults': 32
is a shortcut for:
resources:
  '$defaults': {memory: {limit: 32, request: ~}, cpu: {limit: ~, request: ~}}
In addition, an instance group, job, or process can also be set to just an integer. This:
resources:
  some_ig: 32
is a shortcut for:
resources:
  some_ig:
    '$defaults': 32
Of course this means that any lower-level jobs and processes have to share this specific resource definition, as there is no way to explicitly enumerate the jobs or processes when the value is just an integer and not a map.
Note that there is a difference between this:
resources:
  '$defaults': 32
  some_ig: 64
and this:
resources:
  '$defaults': 32
  some_ig:
    some_job: 64
The former definition sets the memory limit of
all jobs under some_ig,
while the latter only specifies the limit for some_job. If
there are more jobs in some_ig, they will use the
global limit (32) and only some_job will use the specific
limit (64).
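Assuming some_ig contains two jobs, some_job and other_job (hypothetical names for illustration), the former snippet is equivalent to:

```yaml
resources:
  '$defaults': 32
  some_ig:
    '$defaults': 64   # both some_job and other_job inherit a 64 Mi limit
```

whereas under the latter snippet only some_job gets 64 Mi and other_job falls back to the global default of 32 Mi.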
Memory requests have a calculated default value: a configurable
percentage of the limit, raised to at least a configurable minimum value, but
never higher than the limit itself. These defaults can be configured by
using features.memory_limits.default_request_minimum and
features.memory_limits.default_request_in_percent. The
following is an example configuration where the example values are the
respective defaults.
features:
  memory_limits:
    default_request_minimum: 32
    default_request_in_percent: 25
After making the change above, and any other configuration changes, apply the update by doing the following:
For an initial deployment, continue to the deployment steps for your platform:
For SUSE CaaS Platform, see Section 4.13, “Deploying SUSE Cloud Application Platform”.
For Microsoft AKS, see Section 5.13, “Deploying SUSE Cloud Application Platform”.
For Amazon EKS, see Section 6.13, “Deploying SUSE Cloud Application Platform”.
For Google GKE, see Section 7.14, “Deploying SUSE Cloud Application Platform”.
For an existing deployment, use helm upgrade to apply
the change.
tux > helm upgrade kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml \
--version 2.7.13
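The request-default rule described above (a percentage of the limit, raised to default_request_minimum, capped at the limit itself) can be sketched in a few lines. This is a hypothetical reimplementation for illustration, not code from kubecf:

```python
def default_memory_request(limit, minimum=32, percent=25):
    """Calculate the default memory request in Mi for a given limit.

    The request is `percent` of the limit, raised to at least
    `minimum`, but never larger than the limit itself.
    """
    return min(limit, max(minimum, limit * percent // 100))


# A 512 Mi limit yields a 128 Mi request (25% of 512).
print(default_memory_request(512))
# A 64 Mi limit yields 32 Mi: 25% would be 16, below the 32 Mi minimum.
print(default_memory_request(64))
# A 16 Mi limit yields 16 Mi: the request is capped at the limit.
print(default_memory_request(16))
```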