Node Memory Hog
Introduction¶
- This experiment causes memory resource exhaustion on the Kubernetes node. It aims to verify the resiliency of applications whose replicas may be evicted when nodes turn unschedulable (Not Ready) due to a lack of memory resources.
- The memory chaos is injected using a helper pod running the Linux stress-ng tool (a workload generator). The chaos lasts for a period equal to the TOTAL_CHAOS_DURATION and consumes up to MEMORY_CONSUMPTION_PERCENTAGE (out of 100) or MEMORY_CONSUMPTION_MEBIBYTES (in mebibytes, out of the total available memory).
- Here, application implies services; in other words, the experiment tests application resiliency upon replica evictions caused by a lack of memory resources on the node.
Scenario: Stress the memory of the node
Uses¶
View the uses of the experiment
coming soon
Prerequisites¶
Verify the prerequisites
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing kubectl get pods in the operator namespace (typically, litmus). If not, install from here.
- Ensure that the node-memory-hog experiment resource is available in the cluster by executing kubectl get chaosexperiments in the desired namespace. If not, install from here.
Default Validations¶
View the default validations
The target nodes should be in ready state before and after chaos injection.
Minimal RBAC configuration example (optional)¶
NOTE
If you are using this experiment as part of a Litmus workflow scheduled, constructed, and executed from chaos-center, then you may be making use of the litmus-admin RBAC, which is pre-installed in the cluster as part of the agent setup.
View the Minimal RBAC permissions
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-memory-hog-sa
  namespace: default
  labels:
    name: node-memory-hog-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-memory-hog-sa
  labels:
    name: node-memory-hog-sa
    app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","deletecollection"]
# Perform CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them into the experiment pod (if specified)
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get","list"]
# Track and get the logs of the runner, experiment, and helper pods
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
# Create and manage exec sessions to run commands inside the target container
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["get","list","create"]
# Configure and monitor the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create","list","get","delete","deletecollection"]
# Create, poll the status of, and delete the litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
  resources: ["chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update","delete"]
# Allow the experiment to perform node status checks
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-memory-hog-sa
  labels:
    name: node-memory-hog-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-memory-hog-sa
subjects:
- kind: ServiceAccount
  name: node-memory-hog-sa
  namespace: default
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Experiment tunables¶
Check the experiment tunables
Mandatory Fields
Variables | Description | Notes |
---|---|---|
TARGET_NODES | Comma-separated list of nodes subjected to node memory hog chaos | |
NODE_LABEL | The node label used to filter the target nodes if the TARGET_NODES ENV is not set | Mutually exclusive with the TARGET_NODES ENV. If both are provided, TARGET_NODES takes precedence |
Optional Fields
Variables | Description | Type | Notes |
---|---|---|---|
TOTAL_CHAOS_DURATION | The time duration for chaos insertion (in seconds) | Optional | Defaults to 120 |
LIB | The chaos lib used to inject the chaos | Optional | Defaults to litmus |
LIB_IMAGE | Image used to run the stress command | Optional | Defaults to litmuschaos/go-runner:latest |
MEMORY_CONSUMPTION_PERCENTAGE | Percentage of the total node memory capacity to consume | Optional | Defaults to 30 |
MEMORY_CONSUMPTION_MEBIBYTES | The amount of memory to consume, in mebibytes of the total available memory. Keep MEMORY_CONSUMPTION_PERCENTAGE unset when using this, as the percentage takes precedence | Optional | |
NUMBER_OF_WORKERS | The number of VM workers involved in the memory stress | Optional | Defaults to 1 |
RAMP_TIME | Period to wait before and after injection of chaos (in seconds) | Optional | |
NODES_AFFECTED_PERC | The percentage of total nodes to target | Optional | Defaults to 0 (corresponds to 1 node); provide a numeric value only |
SEQUENCE | Defines the sequence of chaos execution for multiple target nodes | Optional | Defaults to parallel. Supported: serial, parallel |
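For reference, a minimal ChaosEngine sketch that sets the mandatory TARGET_NODES field together with the chaos duration is shown below; the node names node01 and node02 are placeholders and should be replaced with the names of actual nodes in your cluster.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: node-memory-hog-sa
  experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
        # comma-separated list of target node names (placeholder values)
        - name: TARGET_NODES
          value: 'node01,node02'
        - name: TOTAL_CHAOS_DURATION
          value: '60'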
Experiment Examples¶
Common and Node specific tunables¶
Refer to the common attributes and node-specific tunables to tune the common tunables for all experiments and the node-specific tunables.
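As an illustration of the node-specific tunables, the sketch below selects the target nodes by label via NODE_LABEL instead of listing them in TARGET_NODES, and targets all matching nodes through NODES_AFFECTED_PERC; the label node-type=worker is a hypothetical example and should be replaced with a label present on your nodes.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: node-memory-hog-sa
  experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
        # node label used to filter the target nodes (hypothetical label)
        - name: NODE_LABEL
          value: 'node-type=worker'
        # percentage of the labelled nodes to be targeted
        - name: NODES_AFFECTED_PERC
          value: '100'
        - name: TOTAL_CHAOS_DURATION
          value: '60'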
Memory Consumption Percentage¶
It stresses the MEMORY_CONSUMPTION_PERCENTAGE percentage of the total memory capacity of the targeted node.
Use the following example to tune this:
# stress the memory of the targeted node with MEMORY_CONSUMPTION_PERCENTAGE of node capacity
# it is mutually exclusive with MEMORY_CONSUMPTION_MEBIBYTES
# if both are provided, MEMORY_CONSUMPTION_PERCENTAGE is used for the stress
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: node-memory-hog-sa
  experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
        # percentage of total node capacity to be stressed
        - name: MEMORY_CONSUMPTION_PERCENTAGE
          value: '10' # in percentage
        - name: TOTAL_CHAOS_DURATION
          value: '60'
Memory Consumption Mebibytes¶
It stresses MEMORY_CONSUMPTION_MEBIBYTES MiB of the memory of the targeted node. It is mutually exclusive with the MEMORY_CONSUMPTION_PERCENTAGE ENV. If the MEMORY_CONSUMPTION_PERCENTAGE ENV is set, it will use the percentage for the stress; otherwise, it will stress the memory based on the MEMORY_CONSUMPTION_MEBIBYTES ENV.
Use the following example to tune this:
# stress the memory of the targeted node with the given MEMORY_CONSUMPTION_MEBIBYTES
# it is mutually exclusive with MEMORY_CONSUMPTION_PERCENTAGE
# if both are provided, MEMORY_CONSUMPTION_PERCENTAGE is used for the stress
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: node-memory-hog-sa
  experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
        # node memory to be stressed
        - name: MEMORY_CONSUMPTION_MEBIBYTES
          value: '500' # in MiB
        - name: TOTAL_CHAOS_DURATION
          value: '60'
Workers For Stress¶
The worker count for the stress can be tuned with the NUMBER_OF_WORKERS ENV.
Use the following example to tune this:
# provide the worker count for the stress
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: node-memory-hog-sa
  experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
        # total number of workers involved in the stress
        - name: NUMBER_OF_WORKERS
          value: '1'
        - name: TOTAL_CHAOS_DURATION
          value: '60'