Description
The NetworkPolicy (NP) docs point out that NP's semantics for north/south traffic are not very clearly defined:
> Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or after NetworkPolicy processing, and the behavior may be different for different combinations of network plugin, cloud provider, `Service` implementation, etc.
>
> In the case of ingress, this means that in some cases you may be able to filter incoming packets based on the actual original source IP, while in other cases, the "source IP" that the NetworkPolicy acts on may be the IP of a `LoadBalancer` or of the Pod's node, etc.
>
> For egress, this means that connections from pods to `Service` IPs that get rewritten to cluster-external IPs may or may not be subject to `ipBlock`-based policies.
However, what's not ambiguous is that a pod which is fully isolated for ingress should not accept E/W or N/S traffic. Regardless of how the ingress/LB/cloud handles the traffic, it should end up getting blocked. (Well, unless it gets masqueraded to the pod's own node IP, because then it hits the usual exception that allows traffic from the pod's node.)
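For concreteness, "fully isolated for ingress" here means something like the standard default-deny-ingress policy sketched below (the name is illustrative, not part of any proposed test):

```yaml
# A minimal "isolate everything for ingress" policy: selecting all pods in the
# namespace and listing Ingress in policyTypes, with no ingress rules, means
# no ingress traffic is allowed to those pods from anywhere.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed => deny all ingress
```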
Unfortunately we can't say the same for egress; I think we assume that a pod-to-service-IP connection will be allowed to reach kube-proxy even if the pod is fully isolated-for-egress, but we explicitly don't require that `ipBlock` policies get applied after service proxying.
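As an illustration of the egress ambiguity, consider a hypothetical policy like the sketch below (the CIDR is a placeholder): if a pod connects to a Service IP whose backends are cluster-external addresses inside that CIDR, it is unspecified whether the `ipBlock` rule is evaluated against the Service IP (connection dropped) or against the rewritten external IP (connection allowed).

```yaml
# Hypothetical egress policy using ipBlock; whether it matches traffic that is
# first sent to a Service IP and then rewritten to an external IP in this CIDR
# is exactly the behavior the docs leave undefined.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-range   # illustrative name
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24             # placeholder external range
```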
Anyway, we should be able to add a test to `test/e2e/network/netpol/network_policy.go` that confirms that cluster-ingress traffic to a fully isolated-for-ingress pod is not allowed. In particular, if we create a LoadBalancer Service with `externalTrafficPolicy: Local`, and a NetworkPolicy blocking all ingress to that Service's Pods, then we should not be able to connect to the Service via either the LoadBalancer IP/name or via its NodePort (on one of the correct nodes).
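A rough sketch of the objects such a test would create (names, labels, and ports are all hypothetical; the real test would go through the e2e framework in `network_policy.go` rather than raw manifests):

```yaml
# LoadBalancer Service with externalTrafficPolicy: Local, pointing at pods that
# a NetworkPolicy fully isolates for ingress. The expectation is that connecting
# via the LoadBalancer IP/name or the NodePort (on a node running a backend pod)
# fails.
apiVersion: v1
kind: Service
metadata:
  name: isolated-server              # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: isolated-server             # hypothetical label on the backend pods
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress-to-server   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: isolated-server           # same hypothetical label as the Service selector
  policyTypes:
  - Ingress                          # no ingress rules => the pods are isolated for ingress
```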
/sig network
/area network-policy
/priority backlog
/help
/good-first-issue