The main goal of the node affinity feature is to ensure that pods are hosted on specific nodes. It gives us the ability to place pods on particular nodes with greater precision.
Let's take an example: we have a cluster running a variety of workloads, and we want to dedicate the data processing tasks that demand more horsepower to nodes with high or medium resources. This is where node affinity comes into play.
Here is a pod-definition.yaml for the above example:
apiVersion: v1
kind: Pod
metadata:
  name: dbapp
spec:
  containers:
  - name: dbapp
    image: db-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium
So, in the pod definition file, we have affinity, then nodeAffinity, and under that the property "requiredDuringSchedulingIgnoredDuringExecution", followed by nodeSelectorTerms, which is an array where we define the label selectors.
Each selector has the format key, operator, and values. With the "In" operator, the scheduler guarantees that the pod is placed on a node whose size label has any of the values supplied in the list.
If you want to label a node, use the following command:
kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes worker-node1 size=Large
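To verify that the label was applied, you can list the nodes along with their labels (these are standard kubectl commands; worker-node1 is just the example node used above):
kubectl get nodes --show-labels
kubectl get nodes -l size=Large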
We may also use the "NotIn" operator to express something like size NotIn Small, in which case node affinity will match nodes whose size label is not set to Small.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: size
          operator: NotIn
          values:
          - Small
We have only set the label on the large and medium nodes; the smaller nodes don't have the label set at all. So we don't really need to check the value of the label: as long as we are sure we never set the size label on the smaller nodes, using the "Exists" operator will give us the same result as "NotIn" in our case.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: size
          operator: Exists
Exists: this operator simply checks whether the label size exists on the node; we don't need the values section for it, as it does not compare values.
Node affinity syntax supports the following operators: In, NotIn, Exists, DoesNotExist, Gt, and Lt.
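Gt and Lt compare the label value as an integer (the values list must contain a single number). As a rough sketch, assuming a hypothetical node label cpu-count that we would have set ourselves, the following rule would only match nodes whose cpu-count is greater than 4:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cpu-count    # assumed example label, not set earlier in this post
          operator: Gt
          values:
          - "4"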
There are currently 2 types of node affinity available:
- requiredDuringSchedulingIgnoredDuringExecution (required during scheduling, ignored during execution; also known as a "hard" requirement)
- preferredDuringSchedulingIgnoredDuringExecution (preferred during scheduling, ignored during execution; also known as a "soft" requirement)
Let's understand DuringScheduling and DuringExecution.
DuringScheduling:
During scheduling is the state where a pod does not exist yet and is being created for the first time. When it is first created, the affinity rules specified are considered in order to place the pod on the right node.
Now, what if nodes with matching labels are not available, or we forgot to label the nodes as large (in our case)? This is where the type of node affinity we choose comes into the picture.
If we choose the required type (the first one), the scheduler will only place the pod on a node that satisfies the affinity rules we specified. If no such node can be found, the pod will not be scheduled at all. This type is used in situations where the pod's placement is critical.
However, if pod placement is less critical than running the workload, we may set the affinity to the preferred type (the second one), and the scheduler will simply ignore the node affinity rules and place the pod on any available node if a matching node cannot be found. The preferred type essentially tells the scheduler, "Hey, try your best to place the pod on a matching node, but if you can't find one, place it anywhere."
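For completeness, the preferred type has a slightly different structure: instead of nodeSelectorTerms it takes a list of weighted preferences (weight is a number from 1 to 100, and the scheduler favours nodes that match higher-weight terms). A minimal sketch using the same size label from our example:
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: size
          operator: In
          values:
          - Large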
DuringExecution:
During execution is the second state. It is the condition in which a pod has been running for some time and the environment changes in a way that affects node affinity, such as a change in a node's label.
For example, suppose an administrator deletes the node's label (size=Large) that we had previously applied. What happens to the pods that are already running on that node?
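(For reference, a label is removed by appending a minus sign to the label key; worker-node1 is the example node we labelled earlier.)
kubectl label nodes worker-node1 size-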
Let's understand requiredDuringSchedulingIgnoredDuringExecution with the cases below.
Case 1: You are trying to create a pod using node affinity, but the node does not have the matching label. What will happen?
So, if the node does not have the label, the result depends on which of the two affinity types was used while launching the pod.
If you have used the 'required…' type shown above, the pod will not be scheduled, because no node carries the required label.
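You can confirm this yourself: the pod stays in the Pending state, and describing it shows a FailedScheduling event (the exact wording of the message varies by Kubernetes version):
kubectl get pod dbapp
kubectl describe pod dbapp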
Case 2: Assume that the pod is running on the node and someone removed the label "size" from the node. Now, what will happen to the pods that are running on that node?
Again, the answer depends on the affinity type that was used when the pod was launched.
If you have used the 'required…' type, the pod will still continue to run, and the change will not impact it in any way.
Let's understand preferredDuringSchedulingIgnoredDuringExecution with the cases below.
Case 1: You are trying to create a pod using node affinity, but the node does not have the matching label. What will happen?
If you have used the 'preferred…' type, the pod will still be scheduled: the scheduler treats the affinity rule as a preference, so it places the pod on whatever node is available even though no label matches.
Case 2: Assume that the pod is running on the node and someone removed the label "size" from the node. Now, what will happen to the pods that are running on that node?
If you have used the 'preferred…' type, the pod will still continue to run, and the change will not impact it in any way. In this case both types give the same result.
That is because the second part of both affinity types says "IgnoredDuringExecution", which means the pod will continue to run and any change in node affinity will not impact it once it has been scheduled.
Finally, we have come to the end, but there is one node affinity type which is not available right now and may come in a future Kubernetes release:
requiredDuringSchedulingRequiredDuringExecution
With this type, a pod would also be evicted from a node that no longer has the required label.
Thanks for reading! Follow more blogs on Cloudsbaba.
References:
- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
- https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/