## Why Network Policies matter
In Kubernetes, by default, all pods can communicate freely with each other regardless of namespace. This permissiveness is convenient during development, but in production it is a serious problem. A compromised pod can scan the internal network, access databases, cache services, and any other workload without restriction.
The Zero Trust model operates on the principle that no traffic should be trusted by default. Every communication must be explicitly authorized. Network Policies are the Kubernetes mechanism to implement this principle at the network layer, creating segmentation between workloads.
In practice, this means:
- Pods only access what they need (principle of least privilege)
- A compromised service cannot move laterally across the cluster
- Egress traffic is controlled, preventing data exfiltration to unauthorized destinations
For Network Policies to work, the cluster needs a CNI (Container Network Interface) that implements them. Not every CNI supports Network Policies. Among those that do are Calico, Cilium, and Weave Net.
## Native Kubernetes NetworkPolicy
The native NetworkPolicy uses the networking.k8s.io/v1 API and works with any CNI that implements it. It operates at L3/L4 of the OSI model, meaning it filters traffic by IP, port, and protocol (TCP/UDP).
### What it supports

- Pod selection by labels (`podSelector`)
- Filtering by namespace (`namespaceSelector`)
- Filtering by IP blocks (CIDR)
- Control of ingress (incoming traffic) and egress (outgoing traffic)
- Specific ports and protocols
### What it does not support
- Filtering by domain name (FQDN)
- Layer 7 inspection (HTTP path, method, headers)
- Service identity-based rules
- TLS/SNI filtering
### Example: default deny for the entire namespace
The first step in any Network Policy strategy is to block all traffic by default. Then you allow only what is necessary.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
With this policy applied, no pod in the production namespace accepts incoming traffic or can send outgoing traffic. From here, you create specific policies for each allowed flow.
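One side effect of denying all egress is that pods can no longer resolve DNS names, so cluster-internal service discovery breaks immediately. A common first exception is to re-allow DNS for the whole namespace. This is a sketch, assuming kube-dns (or CoreDNS) runs in `kube-system` with the conventional `k8s-app: kube-dns` label, and that your Kubernetes version applies the automatic `kubernetes.io/metadata.name` namespace label (standard in recent releases):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
```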
### Example: allow ingress from a specific pod

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-server
    ports:
    - port: 5432
      protocol: TCP
```
Only pods with the label app: api-server can access port 5432 of pods labeled app: postgres. All other ingress traffic is blocked.
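Note that with a default deny covering egress as well, the ingress rule alone does not open the flow: the api-server pods also need permission to initiate the outbound connection. A possible counterpart policy, reusing the same labels as the example above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-egress-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server       # the client side of the same flow
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - port: 5432
      protocol: TCP
```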
### Limitations
The major limitation of the native NetworkPolicy is that it operates only at L3/L4. You can say “pod A can access pod B on port 8080”, but you cannot say “pod A can GET /api/health but not POST to /api/admin”. Additionally, it is not possible to create rules based on domain names (FQDN), which makes it difficult to control access to external services whose IPs change frequently.
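To make the FQDN limitation concrete: the closest the native API gets to an external allow rule is pinning IP ranges with `ipBlock`, which you then have to keep in sync by hand as the provider's addresses rotate. A sketch of what that maintenance burden looks like (the `app: payment-service` label is illustrative, and 203.0.113.0/24 is a documentation-only placeholder range, not a real provider's CIDR):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api-by-ip
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24   # placeholder; real provider IPs change often
    ports:
    - port: 443
      protocol: TCP
```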
## CiliumNetworkPolicy
CiliumNetworkPolicy uses the cilium.io/v2 API and requires Cilium as the cluster’s CNI. It offers everything the native NetworkPolicy does, plus significant capabilities at higher layers.
### Additional capabilities

- L7 filtering: HTTP request inspection by path, method, and headers
- FQDN rules (`toFQDNs`): allow or block access to external domains
- DNS-aware: resolves names and applies rules dynamically as IPs change
- TLS/SNI filtering: control based on Server Name Indication
- Identity-based: rules based on cryptographic workload identity, not just labels
### Example: L7 filtering by HTTP path and method

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-health-check-only
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: monitoring
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/health"
        - method: GET
          path: "/metrics"
```
In this example, the monitoring service can only make GET requests to /health and /metrics on port 8080 of the api-server. Any other HTTP request is blocked, even when it comes from an allowed monitoring pod.
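The capability list above also mentions header inspection. It uses the same `rules.http` structure, with each header expressed as a `name: value` string. A sketch under assumed names (the `batch-worker` label, the path, and the header are made up for illustration):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: require-internal-header
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: batch-worker          # illustrative client label
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: POST
          path: "/api/v1/events"
          headers:
          - "X-Internal-Request: true"   # hypothetical header; requests without it are rejected
```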
### Example: FQDN-restricted egress

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-apis
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
  - toFQDNs:
    - matchName: "api.stripe.com"
    - matchName: "api.pagar.me"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*.stripe.com"
        - matchPattern: "*.pagar.me"
```
The payment-service can only make external requests to api.stripe.com and api.pagar.me on port 443. The second rule allows DNS queries to kube-dns, but only for the permitted domains. Without this DNS rule, the pod cannot resolve the names and the FQDN rule does not work.
## Side-by-side comparison
| Feature | NetworkPolicy (native) | CiliumNetworkPolicy |
|---|---|---|
| API | networking.k8s.io/v1 | cilium.io/v2 |
| CNI | Any that implements it | Cilium only |
| Layer | L3/L4 | L3/L4 + L7 |
| Label selection | Yes | Yes |
| Namespace selection | Yes | Yes |
| CIDR filtering | Yes | Yes |
| FQDN filtering | No | Yes (toFQDNs) |
| HTTP filtering (path, method) | No | Yes |
| DNS filtering | No | Yes |
| TLS/SNI filtering | No | Yes |
| Identity-based | No | Yes |
| Portability across CNIs | Yes | No |
## When to use which
| Scenario | Choice |
|---|---|
| Cluster without Cilium as CNI | NetworkPolicy |
| Need to filter by domain name (FQDN) | CiliumNetworkPolicy |
| Need to filter by HTTP path/method | CiliumNetworkPolicy |
| Want portability across different CNIs | NetworkPolicy |
| Need TLS/SNI filtering | CiliumNetworkPolicy |
| Simple L3/L4 rules are sufficient | NetworkPolicy |
| Need to control which domains a pod can resolve via DNS | CiliumNetworkPolicy |
If the cluster already uses Cilium, take advantage of CiliumNetworkPolicies for scenarios that require more granularity. For basic pod and namespace isolation rules, the native NetworkPolicy is sufficient and more portable.
The two approaches are not mutually exclusive. You can use native NetworkPolicy for basic L3/L4 rules and CiliumNetworkPolicy for L7 and FQDN rules on the same Cilium-powered cluster.
## Practical example: allowing access to external APIs by domain
A common scenario is when a service needs to access external APIs such as payment gateways, monitoring services, or third-party APIs. With the native NetworkPolicy, you would need to define CIDR blocks for each IP of these services. The problem is that these IPs change frequently (CDNs, load balancers, failovers), and maintaining fixed CIDRs becomes impractical.
With CiliumNetworkPolicy and toFQDNs, you define the domain directly. Cilium intercepts DNS resolutions and updates firewall rules automatically as IPs change.
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: notification-service-egress
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: notification-service
  egress:
  - toFQDNs:
    - matchName: "api.sendgrid.com"
    - matchName: "fcm.googleapis.com"
    - matchName: "api.twilio.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*.sendgrid.com"
        - matchPattern: "*.googleapis.com"
        - matchPattern: "*.twilio.com"
```
The notification-service can only send requests to SendGrid, Firebase Cloud Messaging, and Twilio. Any attempt to access another domain or IP is blocked. If an attacker compromises this pod, they cannot exfiltrate data to external servers.
## Pod-to-pod communication via Service DNS
For internal communication between services in the cluster, both the native NetworkPolicy and CiliumNetworkPolicy allow selecting pods by labels. In Cilium’s case, the toEndpoints directive works similarly to podSelector, but integrated with Cilium’s identity model.
Pods communicate using Kubernetes internal DNS:
- Same namespace: `http://service-name:port`
- Different namespace: `http://service-name.namespace.svc.cluster.local:port`
### Example: api-server accessing postgres and redis in the same namespace

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-server-egress
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  egress:
  - toEndpoints:
    - matchLabels:
        app: postgres
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        app: redis
    toPorts:
    - ports:
      - port: "6379"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
```
The api-server can only communicate with postgres on port 5432 and redis on port 6379. The DNS rule allows it to resolve internal service names. The Network Policy controls whether communication is allowed. The Service DNS resolves where traffic goes. They are complementary layers.