
One of the first things you realize when learning Kubernetes is that all Pods can communicate with all other Pods by default. It’s a flat network structure.
While this is convenient for development, it’s a security nightmare in production. Imagine your frontend web server having unrestricted access to your database, or a compromised pod having access to your entire internal network.
To solve this, we use NetworkPolicy. In this post, I’ll walk through how to isolate namespaces and avoid the most common "gotcha" in the CKA exam: Forgetting DNS.
Think of NetworkPolicy as a "Kubernetes-native Firewall."
Unlike traditional firewalls that use IPs, NetworkPolicy controls traffic based on Labels and Selectors.
The key concept to understand is the Golden Rule: if no NetworkPolicy selects a Pod, all traffic is allowed. As soon as any policy selects a Pod, all traffic of the listed policy types that is not explicitly allowed is denied (Default Deny). Note that your CNI plugin must actually support NetworkPolicy (e.g., Calico or Cilium); otherwise, policies are silently ignored.
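To make the Golden Rule concrete, here is the classic "default deny" policy. It selects every Pod in a namespace but allows nothing, so all traffic in and out is dropped (the namespace name here is just an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # example namespace
spec:
  podSelector: {}           # selects every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress/egress rules are listed, so nothing is allowed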
This is where many developers (and CKA candidates) fail. When you restrict Egress traffic, you might think: "I only want my app to talk to the Backend Service, so I'll only allow that."
The Problem:
Your application doesn't talk to IP addresses directly. It uses domain names (e.g., backend-svc, google.com).
To resolve these names to IPs, your Pod must talk to the cluster's DNS server (CoreDNS).
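You can see where those queries go by looking at the kube-dns Service, which typically fronts the CoreDNS Pods:

kubectl -n kube-system get svc kube-dns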
The Trap:
If you block all Egress traffic and only allow the Backend Service, you are also blocking the DNS query.
Your app will crash with a "Name Resolution Error" or "Timeout" because it can't ask "Where is backend-svc?"
The Solution:
Whenever you restrict Egress, you must explicitly allow UDP/TCP port 53.
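If "port 53 to anywhere" feels too broad, you can scope the rule to the CoreDNS Pods themselves. A sketch, assuming the default k8s-app: kube-dns label (verify it with kubectl -n kube-system get pods --show-labels):

# Egress rule fragment: allow DNS only to CoreDNS in kube-system
- to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns   # default CoreDNS label; confirm in your cluster
  ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53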
Let's look at a practical requirement:
We have two namespaces: space1 and space2.
- space1 can only send traffic to space2. (All other outgoing traffic is blocked.)
- space2 can only receive traffic from space1. (All other incoming traffic is blocked.)

Egress policy for space1 (Don't forget DNS!)
We need to allow traffic to space2 AND traffic to the DNS server.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-space1
  namespace: space1
spec:
  podSelector: {}   # Selects ALL pods in this namespace
  policyTypes:
    - Egress        # We are controlling outgoing traffic
  egress:
    # Rule 1: Allow traffic to the 'space2' namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: space2
    # Rule 2: CRITICAL! Allow DNS resolution
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Pro Tip: Since Kubernetes v1.21, namespaces automatically have the label
kubernetes.io/metadata.name: <ns-name>. You can use this instead of creating manual labels!
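You can confirm the label is there:

kubectl get namespace space2 --show-labels
# LABELS should include kubernetes.io/metadata.name=space2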
Ingress policy for space2
Here, we only need to filter who is coming in. We don't need to worry about DNS, as that is an Egress concern.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-space2
  namespace: space2
spec:
  podSelector: {}
  policyTypes:
    - Ingress       # We are controlling incoming traffic
  ingress:
    # Rule 1: Allow traffic coming FROM 'space1'
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: space1
Applying the YAML is easy. Verifying it is the real skill. Use a temporary busybox pod to test connectivity.
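Assuming the two manifests are saved as np-space1.yaml and np-space2.yaml (hypothetical file names):

kubectl apply -f np-space1.yaml -f np-space2.yaml
kubectl -n space1 describe networkpolicy np-space1   # sanity-check the rules landed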
Test 1: Try to reach Google. This should fail with a timeout, because we only allowed egress to space2 (the -T 5 flag keeps busybox wget from hanging for minutes).
kubectl -n space1 run test-pod --image=busybox -it --rm -- wget -O- -T 5 google.com
# Result: "wget: download timed out" (Pass!)
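Interestingly, the name google.com should still resolve (assuming your CoreDNS can reach an upstream resolver); only the HTTP connection is dropped. That is our port 53 rule doing its job, which you can confirm directly:

kubectl -n space1 run test-pod --image=busybox -it --rm -- nslookup google.com
# Name resolution succeeds, proving DNS egress works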
Test 2: Try to reach a service in space2. Note: from space1 you must qualify the Service name with its namespace (service.namespace).
kubectl -n space1 run test-pod --image=busybox -it --rm -- wget -O- -T 5 microservice1.space2
# Result: HTML output or connection success (Pass!)
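As a counter-test, the same request from any namespace other than space1 should time out, because space2 only accepts ingress from space1:

kubectl -n default run test-pod --image=busybox -it --rm -- wget -O- -T 5 microservice1.space2
# Result: "wget: download timed out" (Pass!)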
If Test 2 fails immediately with "Bad Address," it means DNS is blocked. Check your port 53 rules!
kubectl -n space1 run debug --image=busybox -it --rm -- nslookup microservice1.space2
Mastering this flow is crucial not just for the CKA exam, but for securing any production Kubernetes cluster.