Question
I have a VPC network with a subnet in the range 10.100.0.0/16, in which the nodes reside. There are a route and firewall rules applied to the range 10.180.102.0/23, which route and allow traffic going to/coming from a VPN tunnel.
If I deploy a node in the 10.100.0.0/16 range, I can ping my devices in the 10.180.102.0/23 range. However, a pod running inside that node cannot ping the devices in the 10.180.102.0/23 range. I assume it has to do with the fact that the pods live in a different IP range (10.12.0.0/14).
How can I configure my networking so that I can ping/communicate with the devices living in the 10.180.102.0/23 range?
Answer
I don't remember exactly how I solved this, but I'm posting what I have in case it helps @tdensmore.
You have to configure the ip-masq-agent (an agent running on GKE nodes that masquerades pod IPs). Its configuration controls which destination ranges are exempt from masquerading, and that is what lets the pods inside the nodes reach other parts of the GCP VPC network, more specifically the VPN. In other words, it allows pods to communicate with the devices that are accessible through the VPN.
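Before changing anything, it's worth checking whether an ip-masq-agent DaemonSet or ConfigMap already exists in your cluster, since depending on the cluster configuration GKE may have deployed one for you. A quick sanity check, assuming kubectl is pointed at the right cluster:
kubectl -n kube-system get daemonset ip-masq-agent
kubectl -n kube-system get configmap ip-masq-agent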
First of all, we're going to be working inside the kube-system namespace, and we're going to create the ConfigMap that configures our ip-masq-agent. Put this in a config file:
nonMasqueradeCIDRs:
- 10.12.0.0/14 # The IPv4 CIDR the cluster is using for Pods (required)
- 10.100.0.0/16 # The IPv4 CIDR of the subnetwork the cluster is using for Nodes (optional; it works without it, but it's probably better to include it)
masqLinkLocal: false
resyncInterval: 60s
and run kubectl create configmap ip-masq-agent --from-file config --namespace kube-system
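If you need to change the CIDRs later, kubectl create will fail because the ConfigMap already exists. One way around that (a sketch, assuming the same config file name and a recent kubectl that supports --dry-run=client) is to regenerate the manifest and apply it:
kubectl create configmap ip-masq-agent --from-file config --namespace kube-system --dry-run=client -o yaml | kubectl apply -f -
You can confirm what was stored with kubectl -n kube-system describe configmap ip-masq-agent.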
Afterwards, deploy the ip-masq-agent itself. Put this in an ip-masq-agent.yml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ip-masq-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ip-masq-agent
  template:
    metadata:
      labels:
        k8s-app: ip-masq-agent
    spec:
      hostNetwork: true
      containers:
      - name: ip-masq-agent
        image: gcr.io/google-containers/ip-masq-agent-amd64:v2.4.1
        args:
        - --masq-chain=IP-MASQ
        # To non-masquerade reserved IP ranges by default, uncomment the line below.
        # - --nomasq-all-reserved-ranges
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          # Note: this ConfigMap must be created in the same namespace as the
          # daemon pods - this spec uses kube-system
          name: ip-masq-agent
          optional: true
          items:
            # The daemon looks for its config in a YAML file at /etc/config/ip-masq-agent
            - key: config
              path: ip-masq-agent
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      - key: "CriticalAddonsOnly"
        operator: "Exists"
and run kubectl -n kube-system apply -f ip-masq-agent.yml
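To check that the agent is actually running on every node, you can list its pods via the k8s-app label used above and peek at the logs:
kubectl -n kube-system get pods -l k8s-app=ip-masq-agent -o wide
kubectl -n kube-system logs -l k8s-app=ip-masq-agent --tail=20
The agent re-reads /etc/config/ip-masq-agent every resyncInterval (60s in the config above), so it can take about a minute before the new rules take effect.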
Note: it has been a long time since I did this; there is more information at this link: <https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent>
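To test end to end, you can spin up a throwaway pod and ping a device in the VPN range. This is just a sketch; 10.180.102.1 is only a placeholder for one of your actual device IPs:
# 10.180.102.1 is a hypothetical device address in the 10.180.102.0/23 range
kubectl run ping-test --rm -it --image=busybox --restart=Never -- ping -c 3 10.180.102.1
If the pod still can't reach the device, double-check that the VPN and firewall side also allow traffic from the pod range 10.12.0.0/14, because with the config above packets to the VPN leave the node with the pod IP as source instead of the node IP.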