Network Policy to Secure Kubernetes Workloads

Muhammad Yuga N.
Feb 7, 2021

Most organizations are adopting Kubernetes for their production workloads, so it's necessary to be aware of security across the whole cluster: at the network level, the application level, the OS level, and so on. In this chapter we will discuss Network Policy (the network level) to secure communication between applications in the workloads. Networking in Kubernetes is quite complex because we need to understand basic networking concepts like TCP/UDP, the OSI model, encapsulated networks, etc. Kubernetes comes with a few core components that take an API-driven approach; they run in the cluster and make it easy to manage our infrastructure, handling things like networking, autoscaling, and deploying applications with the help of the kubectl command-line tool.

In Kubernetes, the Container Network Interface (CNI) is a specification for managing network resources; it serves as the interface Pods use to communicate with each other. There are a few kinds of IP in Kubernetes:

  1. Pod IP : a Pod is the smallest unit in Kubernetes, and each Pod has its own IP, which is used to communicate between containers in the same or a different Pod. A Pod is not limited to one container; it can run multiple containers, because in some cases we need another container, such as a sidecar, running in the same Pod, e.g. for collecting log events or for networking purposes.
  2. Cluster IP : when you have a deployment and expose it as a Service, you will notice a ClusterIP when you execute kubectl get svc. That is another IP, handled by the kube-proxy component, which redirects internal requests to the Pods.
  3. External IP : this IP is used to expose our internal Services to the outside world via a public IP, and is usually defined as NodePort or LoadBalancer. You can see all of these IPs with kubectl, as shown below.
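
To see them in practice, you can list the IPs with two quick commands (the resources shown will be whatever exists in your cluster):

# Pod IPs appear in the IP column
kubectl get pods -o wide

# ClusterIP and (if any) external IP of each Service
kubectl get svc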

That's a short introduction to IPs in Kubernetes; if you're interested in exploring the networking model further (Pod-to-Pod, Pod-to-Service, and so on), please visit this link.

Now, let's talk about network plugins in Kubernetes by looking at the following link: there are various kinds of CNI plugins we can use in our cluster, and each plugin takes a different approach to networking, with different performance characteristics and features. Among the well-known plugins such as Flannel, Calico, Cilium, and Weave, only Flannel doesn't provide policy capability, so if we define NetworkPolicy resources on a Flannel cluster they won't work.

Why is Network Policy important?

By default, our cluster uses a flat networking model, which means there are no limitations or restrictions on communication between workloads. Let's assume your organization runs a lot of applications, both backend and frontend, in the cluster, and you've already worked out that not all applications need to communicate: for example, the Payment service needs information from the Order service to be processed first, or maybe it should be the Checkout service, or you're using an API gateway; it depends on what your application architecture looks like. With that understanding, it becomes easy to define our Network Policy rules. To see how it works, let's get started!

Cluster Setup

For the setup, I run Kubernetes on my local machine with the help of my automation, using Vagrant and Ansible to provision two VMs as a master and a worker node. To simplify the setup, you can use Minikube or Docker Desktop to create a cluster without any configuration and not waste your time. This is what my cluster looks like:

vagrant@cluster-master:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
cluster-master    Ready    master   17m     v1.19.1
cluster-worker1   Ready    <none>   8m48s   v1.19.1

And Calico is already installed as the CNI.

vagrant@cluster-master:~$ kubectl get po -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-86bddfcff-r5v9k   1/1     Running   0          16m
kube-system   calico-node-4mzcd                         1/1     Running   0          16m
kube-system   calico-node-d99qr                         1/1     Running   0          9m10s
kube-system   coredns-f9fd979d6-8xz8p                   1/1     Running   0          17m
kube-system   coredns-f9fd979d6-rpqdx                   1/1     Running   0          17m
kube-system   etcd-cluster-master                       1/1     Running   0          17m
kube-system   kube-apiserver-cluster-master             1/1     Running   0          17m
kube-system   kube-controller-manager-cluster-master    1/1     Running   0          17m
kube-system   kube-proxy-7cprb                          1/1     Running   0          9m10s
kube-system   kube-proxy-p45zr                          1/1     Running   0          17m
kube-system   kube-scheduler-cluster-master             1/1     Running   0          17m

If you haven’t installed it yet, you just need to execute the following command within your cluster.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

And the manifest file will create a DaemonSet.

vagrant@cluster-master:~$ kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   17m
kube-proxy    2         2         2       2            2           kubernetes.io/os=linux   18m

Create namespaces

We will create two namespaces called development and production.

kubectl create ns development
kubectl create ns production

Create deployments

Use the following manifest file to create a deployment in the production namespace.
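
The embedded manifest isn't reproduced here; a minimal sketch that matches the description below (a Deployment named webserver running an NGINX container) might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx   # assumption: the stock nginx image, listening on port 80
        ports:
        - containerPort: 80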

The manifest file will create a Deployment that runs an NGINX container in a Pod. Kubernetes will give the Pod a name like webserver-79fdff489f-59bgx; it will be different on your machine because the suffix is a randomly generated string. Let's have a look at the details of the Pod.
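
The embedded output isn't reproduced here, but you can inspect the Pod yourself (the Pod name will differ on your machine):

# Show the Pod's node, IP address, and status
kubectl get po -n production -o wide

# Full details, including the assigned IP and events
kubectl describe po -n production webserver-79fdff489f-59bgx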

As I mentioned before, when the Pod is created, an interface is created for it and an IP address is assigned. Next, expose the deployment as a Service.

$ kubectl expose deploy webserver --port=80 -n production

Check that the Service was created.

kubectl get svc -n production
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
webserver   ClusterIP   10.110.86.83   <none>        80/TCP    3m39s

And we got another IP, assigned to our Service, called the ClusterIP; this IP redirects requests to the Pod. We can check it by listing the Kubernetes Service NAT rules in iptables.

sudo iptables -t nat -L KUBE-SERVICES

In the output, look for the rule that has the Service's ClusterIP as its destination; its source is anywhere, which means the Service is accessible by all resources within the cluster.

Besides that, let's create another deployment in the development namespace and expose it as a Service, like the previous one.
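
The original manifest isn't embedded here; you can reuse the Deployment manifest above with the namespace changed to development, or, as a shortcut, create it imperatively (assuming the same nginx image):

kubectl create deployment webserver --image=nginx -n development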

kubectl expose deploy webserver --port=80 -n development

Now we have one deployment in each of the namespaces we created.

Create a network policy

To begin, let's use the default policy from the Kubernetes documentation to deny all ingress traffic in the development namespace.
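
For reference, that deny-all-ingress policy from the Kubernetes documentation looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress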

Don't forget to add the -n argument to specify the namespace when creating the resource.
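
For example (the filename is just an assumption for this walkthrough):

kubectl apply -f default-deny-ingress.yaml -n development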

We've defined the policy; now let's do some testing by creating another Pod in the default namespace with the following manifest.
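
The embedded manifest isn't shown here; a minimal sketch of a Pod we can run curl from might be (the image is an assumption, any image that ships curl will do):

apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: curlimages/curl      # assumption: any curl-capable image works
    command: ["sleep", "3600"]  # keep the Pod running so we can exec into it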

kubectl exec curl -it -- curl -m 5 webserver.development
curl: (28) Connection timed out after 5004 milliseconds
command terminated with exit code 28

Note: webserver.development is an internal DNS name (short for webserver.development.svc.cluster.local), which makes it easy to test an application without describing the Service or Pod to find its IP.

As expected, we got a timeout when hitting the Service from the default namespace. This policy denies all ingress traffic coming into the Pods in that namespace, even traffic from the same namespace. To allow ingress traffic from the same namespace, we can add the following block after policyTypes (an empty podSelector in the from clause matches all Pods in the policy's own namespace):

ingress:
- from:
  - podSelector: {}

Now we will try another simple scenario: Pods in the development namespace can talk to Pods in the production namespace only on port 80. We will also create a Pod in the production namespace that runs redis, without exposing it as a Service.
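
The embedded manifest isn't shown here; reconstructed from the description below, it would look roughly like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-devel-pod-to-http
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: development
    ports:
    - protocol: TCP
      port: 80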

Let us try to understand the manifest file:

  • The YAML file starts with apiVersion, using networking.k8s.io/v1 as the API version; the kind of object we're creating is NetworkPolicy.
  • The name of the policy is allow-devel-pod-to-http, and it is applied in the production namespace.
  • podSelector is {}, which means all Pods in the namespace; the Ingress rule allows connections from namespaces that have the label env=development, and only on TCP port 80 (http).

To ensure the above policy is correct, let's check the labels of the development namespace.

kubectl describe ns development
Name:         development
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Oh, there is no label, so we need to add one to the namespace.

kubectl label namespace development env=development
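
You can verify that the label was applied:

kubectl get ns development --show-labels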

And the last step is creating a new Pod in the production namespace; you can use the following manifest file.
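
That manifest isn't embedded here either; based on the commands that follow, a minimal sketch would be:

apiVersion: v1
kind: Pod
metadata:
  name: redis
  namespace: production
spec:
  containers:
  - name: redis
    image: redis   # assumption: the stock redis image, listening on 6379
    ports:
    - containerPort: 6379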

Let's verify the policy with the following command (this assumes a curl Pod like the one above also exists in the development namespace).

kubectl exec curl -n development -it -- curl -m 5 webserver.production

It works! TCP port 80 is reachable by a Pod in the development namespace.

Let's check that we can't reach any port other than 80, by telnetting to the redis Pod.

# Check Pod IP of redis
kubectl describe po -n production redis | grep " IP:"
kubectl run -n development -it busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # telnet 192.168.200.6 6379

If the command hangs, our network policy is working. But if you get a status like the one below, maybe you forgot to apply the policy.

Connected to 192.168.200.6

Still not satisfied? You can practice with some examples from this repository, https://github.com/ahmetb/kubernetes-network-policy-recipes, and make yourself comfortable exploring Network Policy in Kubernetes.

Thank you for reading this article, and I hope you enjoyed it!
