Securely separating containers with network policies

With Izuma Edge, gateways host application containers that are isolated from one another for security. This isolation protects neighboring containers if one application is compromised. To securely separate containers, Izuma Edge includes a network plugin that implements Kubernetes network policies. You can use network policies to control traffic between Pods, as well as traffic between Pods and external network entities, at the IP address and port level.

Because Izuma Edge doesn't allow gateways to communicate with one another, the network policies are applied to Pods in each gateway separately.

Note: Network policies don't apply to Pods configured with hostNetwork: true. Such a Pod runs in the host network namespace, so it can access the loopback device, services listening on localhost, and all traffic in the host network namespace.
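
For example, network policies have no effect on a Pod like the following hypothetical one (the names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-net # illustrative name
spec:
  hostNetwork: true # the Pod shares the host network namespace
  containers:
  - name: app-host
    image: alpine:latest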

Network policy specification

An example network policy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rule-engine-policy
spec:
  podSelector:
    matchLabels:
      role: rule-engine
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.10.0/24
        except:
        - 192.168.10.1/32
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

The specification has four key fields:

  • podSelector: selects the group of Pods the policy applies to. An empty podSelector selects all Pods. In the example above, the policy applies to all Pods with the label role=rule-engine.
  • policyTypes: indicates whether the policy applies to ingress traffic to the selected Pods, egress traffic from the selected Pods, or both. This field is a list that may contain Ingress, Egress, or both.
  • ingress: identifies the Pods and other network entities that are allowed to connect to the Pods the policy applies to. This field is a list of allow rules, and each rule can be a podSelector or an ipBlock (IP CIDR range).
  • egress: identifies the Pods and other network entities that the Pods the policy applies to are allowed to connect to. This field is a list of allow rules, and each rule can be a podSelector or an ipBlock (IP CIDR range).

In addition to podSelector and ipBlock rules, you can specify port and protocol to control traffic. Port ranges are not supported.

namespaceSelector-based ingress and egress rules are not supported in Izuma Edge.

By default, Pods accept all traffic. Once a network policy selects a Pod, all traffic to and from that Pod is denied except the traffic the policy explicitly allows. Network policies are additive: when multiple policies select a Pod, the Pod is restricted to the union of those policies' ingress and egress rules. See the Kubernetes network policy documentation for more details.
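
As an illustration of this default-deny behavior, the standard Kubernetes pattern below uses an empty podSelector to select every Pod and specifies no rules, so all traffic to and from all Pods is denied (the policy name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all # illustrative name
spec:
  podSelector: {} # empty selector: selects all Pods
  policyTypes:
  - Ingress # no ingress rules follow: deny all ingress
  - Egress  # no egress rules follow: deny all egress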

Network policy examples

This section lists a few example scenarios to illustrate how to set up network policies. To set up a network policy for your gateway, apply the appropriate labels to your Pods and identify the IP CIDR ranges of the networks the Pods communicate with.
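
For example, assuming you manage the gateway with kubectl, you can label an existing Pod and apply a policy manifest as follows (the Pod name, label, and file name are illustrative):

kubectl label pod pod-a role=internet-pod
kubectl apply -f internet-pod-policy.yaml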

Scenario 1 - Pods connecting to the Internet

In this scenario, the gateway has one or more Pods that are allowed to connect to the Internet but aren't allowed to connect to any other Pods or destinations:

# Pod pod-a
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  labels:
    role: internet-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-a
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-a 
    image: alpine:latest
---
# Pod pod-b
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
  labels:
    role: internet-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-b
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-b 
    image: alpine:latest  

The policy below assumes the gateway is connected to a local network with IP CIDR range 192.168.20.0/24:

# Network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-pod-policy
spec:
  podSelector:
    matchLabels:
      role: internet-pod
  policyTypes:
  - Ingress # No ingress rules follow: all ingress is denied
  - Egress  # Egress is restricted to the rules below
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 192.168.20.0/24 # Deny the local network
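
As a quick check of this policy (assuming kubectl access, running Pods, and the BusyBox wget bundled in the alpine image), connections to the Internet should succeed, while connections to the local network should time out; 192.168.20.10 is an illustrative local address:

kubectl exec pod-a -- wget -q -O - -T 5 http://example.com   # allowed: Internet
kubectl exec pod-a -- wget -q -O - -T 5 http://192.168.20.10 # denied: local network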

Scenario 2 - message broker Pod

In this scenario, the gateway has a central message broker Pod that other Pods publish to and subscribe to. This example assumes the broker listens on TCP port 8181:

# Pod pod-broker
apiVersion: v1
kind: Pod
metadata:
  name: pod-broker
  labels:
    role: message-broker-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-broker
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-broker
    image: alpine:latest
---
# Pod pod-publisher
apiVersion: v1
kind: Pod
metadata:
  name: pod-publisher
  labels:
    role: message-client-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-publisher
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-publisher
    image: alpine:latest
---
# Pod pod-subscriber
apiVersion: v1
kind: Pod
metadata:
  name: pod-subscriber
  labels:
    role: message-client-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-subscriber
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-subscriber
    image: alpine:latest
---
# Network policies
# Message broker network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: message-broker-pod-policy
spec:
  podSelector:
    matchLabels:
      role: message-broker-pod
  policyTypes:
  - Ingress # Ingress is restricted to the rules below
  - Egress  # No egress rules follow: all egress is denied
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: message-client-pod
    ports:
    - protocol: TCP
      port: 8181
---
# Message client network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: message-client-pod-policy
spec:
  podSelector:
    matchLabels:
      role: message-client-pod
  policyTypes:
  - Ingress # No ingress rules follow: all ingress is denied
  - Egress  # Egress is restricted to the rules below
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: message-broker-pod
    ports:
    - protocol: TCP
      port: 8181
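
To spot-check these policies (assuming kubectl access, running Pods, and the BusyBox nc bundled in the alpine image), look up the broker's Pod IP and test the broker port from a client Pod:

BROKER_IP=$(kubectl get pod pod-broker -o jsonpath='{.status.podIP}')
kubectl exec pod-publisher -- nc -z -w 5 "$BROKER_IP" 8181 # allowed: client to broker on port 8181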

Scenario 3 - datastore and service Pods

In this scenario, the gateway has two Pods: a service Pod and a datastore Pod. The datastore allows connections only from the service Pod. The service Pod connects only to the datastore and accepts connections only from Pods with the label role=service-client-pod. The Pods with the label role=service-client-pod aren't shown in the example:

# Pod pod-datastore
apiVersion: v1
kind: Pod
metadata:
  name: pod-datastore
  labels:
    role: datastore-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-datastore
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: datastore
    image: alpine:latest
---
# Pod pod-service
apiVersion: v1
kind: Pod
metadata:
  name: pod-service
  labels:
    role: service-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-service
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: service
    image: alpine:latest
---
# Network policies
# Datastore network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: datastore-pod-policy
spec:
  podSelector:
    matchLabels:
      role: datastore-pod
  policyTypes:
  - Ingress # Ingress is restricted to the rules below
  - Egress  # No egress rules follow: all egress is denied
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: service-pod
    ports:
    - protocol: TCP
      port: 3456
---
# Service network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-pod-policy
spec:
  podSelector:
    matchLabels:
      role: service-pod
  policyTypes:
  - Ingress # Ingress is restricted to the rules below
  - Egress  # Egress is restricted to the rules below
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: service-client-pod
    ports:
    - protocol: TCP
      port: 9191
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: datastore-pod
    ports:
    - protocol: TCP
      port: 3456
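
A similar spot check works here (same assumptions as in scenario 2; pod-client stands for a hypothetical Pod labeled role=service-client-pod): the service Pod can reach the datastore, but client Pods can't bypass the service Pod:

DATASTORE_IP=$(kubectl get pod pod-datastore -o jsonpath='{.status.podIP}')
kubectl exec pod-service -- nc -z -w 5 "$DATASTORE_IP" 3456 # allowed: service to datastore
kubectl exec pod-client -- nc -z -w 5 "$DATASTORE_IP" 3456  # denied: client to datastore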

Scenario 4 - local private network

In this scenario, the gateway has two Pods. Both can connect to a local private network (identified by an IP CIDR range). One of the Pods is also a publisher, as described in scenario 2:

# Pod pod-private
apiVersion: v1
kind: Pod
metadata:
  name: pod-private
  labels:
    network-access: local
spec:
  automountServiceAccountToken: false
  hostname: pod-private
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-private
    image: alpine:latest
---
# Pod pod-private-publisher
apiVersion: v1
kind: Pod
metadata:
  name: pod-private-publisher
  labels:
    network-access: local
    role: message-client-pod
spec:
  automountServiceAccountToken: false
  hostname: pod-private-publisher
  nodeName: 017729140448000000000001001d7ef9 
  containers:
  - name: app-private-publisher
    image: alpine:latest
---
# Network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: private-pod-policy
spec:
  podSelector:
    matchLabels:
      network-access: local
  policyTypes:
  - Ingress # No ingress rules follow: all ingress is denied
  - Egress  # Egress is restricted to the rules below
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.20.0/24 # Allow connections to the local network

Because Pod pod-private-publisher has the label role=message-client-pod, the network policy message-client-pod-policy from scenario 2 also applies to it. Its allowed egress is therefore the union of the two policies: the local network 192.168.20.0/24 and the message broker Pod on TCP port 8181.