How to Manage Kubernetes Resources
In this post I describe how to limit the resources of a container in Kubernetes and how containers can request a minimum amount of resources on a node, using Helm.
What are Resource Requests?
A resource request defines how many resources, e.g. CPU or RAM, a container needs at a minimum. If no node can fulfill the requests of a container, the pod cannot be scheduled.
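As a minimal sketch (this pod, its name and its image are hypothetical examples, not part of the demo project), a resource request is set per container in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical pod name
spec:
  containers:
  - name: demo-container
    image: nginx:latest
    resources:
      requests:
        cpu: 100m               # the container needs at least 0.1 CPU core
        memory: 64Mi            # and at least 64 MiB of RAM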
What are Resource Limits?
A resource limit defines how many resources, e.g. CPU or RAM, a pod may use at most. If you set the memory limit to 512 MB, the pod can only use 512 MB of RAM; even if it needs more, it won't get more. Resource limits are important to stop a pod from going out of control and using all the resources of the node. Also, without resource limits, Kubernetes can't perform auto-scaling of pods. More about auto-scaling in my Next Post.
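Building on the hypothetical example above, a limit is added next to the request in the same resources section:

    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 500m               # the container is throttled above half a CPU core
        memory: 512Mi           # and may never use more than 512 MiB of RAM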
Resource requests and limits can be set at the container level or at the namespace level. In this post, I describe the container level; the namespace level works in the same way.
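At the namespace level, default requests and limits can be set with a LimitRange object. The following is only a sketch; the namespace name demo is an assumption and not part of the demo project:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: demo               # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:             # used when a container defines no request
      cpu: 100m
      memory: 64Mi
    default:                    # used when a container defines no limit
      cpu: 500m
      memory: 256Mi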
What are Pods?
Pods are the smallest unit that can be created in Kubernetes. If a pod contains multiple containers, the resource requests of all its containers are added up, and Kubernetes checks where it can schedule the pod based on that sum. The resource limit can never be lower than the resource request; otherwise, Kubernetes won't be able to start the pod.
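For example, in a hypothetical pod with two containers (the names and images below are made up), the scheduler adds up the requests and needs a node with at least 350m CPU and 192 MiB of RAM free:

spec:
  containers:
  - name: app                   # hypothetical main container
    image: myapp:latest
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
  - name: sidecar               # hypothetical sidecar container
    image: mysidecar:latest
    resources:
      requests:
        cpu: 100m
        memory: 64Mi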
What is a Container in Kubernetes?
Containers are lightweight packages of your application code together with its dependencies, such as the specific versions of programming language runtimes and libraries required to run your software services on Kubernetes (e.g. Docker containers).
Units for Resource Requests and Limits
CPU resources are defined in millicores. If a container needs half a CPU core, it can be defined as 500m or 0.5; Kubernetes converts 0.5 to 500m. Make sure you never request more CPU cores than your biggest node has, otherwise Kubernetes will not be able to schedule your pod. A good rule of thumb is to use 1 CPU core (1000m) or less per pod, unless you have a specific use case where your application can use more than one CPU core. Usually it is more effective to scale out (create more pods) rather than use more CPU power.
If a pod hits its CPU limit, Kubernetes doesn't kill it; instead its CPU usage is throttled, which can lead to worse performance of the application. (If a pod exceeds its memory limit, however, its container is killed with an out-of-memory error.)
Memory is defined in bytes. The most common unit is the mebibyte (Mi), which is roughly a megabyte. You can configure anything from bytes up to petabytes. If the requested RAM is more than the biggest node has, Kubernetes will never be able to schedule the pod.
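To illustrate the units (the values below are only examples), the two CPU notations mean the same amount, and memory accepts the usual binary suffixes:

resources:
  requests:
    cpu: 0.5                    # half a CPU core, stored by Kubernetes as 500m
    memory: 128Mi               # 128 mebibytes; other valid suffixes are Ki, Gi, Ti
  limits:
    cpu: 500m                   # the same half core, written in millicores
    memory: 1Gi                 # one gibibyte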
Configuration of Resource Requests and Limits in Helm
Run a Kubernetes cluster locally, as described in my previous post: How to run Kubernetes Cluster locally.
Start your Octant dashboard, go to Workloads: Deployments, select productmicroservice and select YAML. You can see that the resources section is empty ({}), as shown in the following image:

By default, Helm adds an empty resources section to the values.yaml file (under the charts folder), which means that no resource request or limit is set:
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
Now uncomment these lines and remove the empty curly braces after resources:. Then set values for the limits and requests, like the following:
resources:
  limits:
    cpu: 0.3
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 64Mi
This configuration limits the container to a maximum of 128 MiB of RAM and 0.3 CPU cores, and requests 100 millicores (0.1 CPU core) and 64 MiB of RAM. As we will see later, Kubernetes converts the 0.3 CPU cores to 300m.
That's all you have to configure, because Helm automatically adds the reference to the values file in the container section of the deployment.yaml file (under charts/productmicroservice/templates/deployment.yaml).
The relevant part of the deployment.yaml file looks as follows:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          # some lines are removed here for readability
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- with .Values.imagePullSecrets }}
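If you want to see the rendered manifest before deploying, you can let Helm render the chart locally (the release name product and the chart name productmicroservice are taken from the example in this post):

helm template product productmicroservice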
Testing the Configuration
Run the helm upgrade command from the charts folder:
helm upgrade product productmicroservice
Then look at the Octant dashboard:
Go to Workloads: Deployments, select productmicroservice and select YAML.

As you can see in the image above, the limits and requests are now set under resources.
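If you prefer the command line to Octant, you can check the same thing with kubectl (the deployment name productmicroservice is taken from the post):

kubectl get deployment productmicroservice -o yaml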
The container part of the YAML file looks as follows:
spec:
  containers:
  - image: mehzan07/productmicroservice:latest
    imagePullPolicy: IfNotPresent
    name: productmicroservice
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    resources:
      limits:
        cpu: 300m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 64Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
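You can also check how much of a node's capacity is now requested and limited with kubectl; look up your node name first and use it in the second command (the placeholder below is not a real node name):

kubectl get nodes
kubectl describe node <node-name>

The "Allocated resources" section of the output lists the CPU and memory requests and limits of all pods scheduled on that node.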
Conclusion
In this post I have described how to configure resource requests and limits. Resource requests let you define how much CPU and RAM a node must have available for your pod. Always make sure that you don't request more RAM or CPU than your nodes offer, otherwise your pod will never be scheduled.
I have also mentioned that limits should always be set, to make sure that a pod can't eat up all the resources of the Kubernetes node. With resource limits in place, Kubernetes is able to automatically scale the pods using the horizontal pod autoscaler. Autoscaling makes your application more resilient and lets it perform better under heavy load.
My next post describes auto-scaling in Kubernetes.
You can find the code of the demo on my GitHub.
This post is part of “Kubernetes step by step”.