A very common scenario in an everyday cloud-native developer’s life is making disparate systems and services talk to each other. A classic example is calling a Kubernetes service from a Virtual Machine (VM) without using `LoadBalancer`-type Kubernetes services, i.e. using the `Cluster-IP` or `Pod-IP` directly.
These kinds of communication needs are common in typical production deployments, where the communication between Kubernetes nodes and non-Kubernetes nodes is handled with sophisticated techniques like a VPC or VPN. In this blog, we will explore how to do the same on a developer box.
So before we deploy the demo, here are the tools that you need:
Download or clone the demo sources from this GitHub repository:
For the rest of this blog, we will refer to the demo sources folder as `$DEMO_HOME`. If you have direnv, the environment variables `DEMO_HOME`, `KUBE_CONFIG_DIR` and `KUBECONFIG` are automatically set for you via the `.envrc` file in the sources. If you don’t use direnv, set them as shown below before proceeding further.
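For example (the paths are illustrative; point `DEMO_HOME` at wherever you placed the sources):

```bash
# Paths are illustrative -- adjust DEMO_HOME to your clone of the demo sources
export DEMO_HOME="$HOME/demos/k8s-vm-connectivity"
export KUBE_CONFIG_DIR="$DEMO_HOME/.kube"
export KUBECONFIG="$KUBE_CONFIG_DIR/config"
```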
For this demo we will be using k3s as our Kubernetes platform, set up on a multipass Ubuntu VM. Before we create our VM and deploy k3s, let us understand the settings that we will be using for the k3s Kubernetes cluster:
- `--cluster-cidr=172.16.0.0/24`: this setting allows us to create 65–110 pods on the k3s node. In our case this will be a single-node cluster.
- `--service-cidr=172.18.0.0/20`: this setting allows us to create 4096 services on this Kubernetes cluster.
- Finally, we pass `--disable=traefik`, as we will not need or deploy any `LoadBalancer` services as part of this demo.
Let’s now create the VM and deploy k3s onto it. To make the process simpler, we will use cloud-init, which does the setup for us while the VM is created.
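A minimal sketch of such a cloud-init file and the launch command follows; the demo sources contain the actual file, and the VM sizing here is an assumption. Note `--flannel-backend=none`, which disables k3s’s bundled CNI so that Calico can take over (this is why the pods start out `Pending` below):

```bash
# cloud-init.yaml -- a sketch; the demo sources ship the real file.
cat > cloud-init.yaml <<'EOF'
#cloud-config
runcmd:
  - curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-cidr=172.16.0.0/24 --service-cidr=172.18.0.0/20 --disable=traefik --flannel-backend=none" sh -
EOF

# Launch the VM with the cloud-init file (CPU/memory/disk sizes are assumptions)
multipass launch --name cluster1 --cpus 2 --memory 4G --disk 20G --cloud-init cloud-init.yaml
```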
Once we have the VM running, we can check its details using the command `multipass info cluster1`. As k3s was deployed as part of the VM creation, let’s pull the `kubeconfig` to the host machine:
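A sketch of that step, relying on k3s writing its kubeconfig to `/etc/rancher/k3s/k3s.yaml` inside the VM:

```bash
# cluster1's address as reported by multipass
CLUSTER1_IP=$(multipass info cluster1 | awk '/IPv4/ {print $2}')
mkdir -p "$KUBE_CONFIG_DIR"
# k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml inside the VM
multipass exec cluster1 -- sudo cat /etc/rancher/k3s/k3s.yaml > "$KUBECONFIG"
# Replace the loopback address with the VM's address so kubectl works from the host
sed -i.bak "s/127.0.0.1/${CLUSTER1_IP}/" "$KUBECONFIG"
```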
If you run `kubectl get pods --all-namespaces`, it will show all the pods in `Pending` state; that’s because we don’t have any network plugin configured with our Kubernetes cluster. We use the Calico network plugin for this demo, but you can use any Kubernetes network plugin of your choice as long as it has the ability to define host gateway routes.
Let us deploy the Calico network plugin to bring the pods to life:
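Assuming the operator-based install, something like the following deploys the Tigera operator that manages Calico (the version in the URL is an assumption; pick the current one from the Calico docs):

```bash
# Deploy the Tigera operator, which in turn installs and manages Calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
```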
Now let’s create the Calico installation to match the pod settings described earlier, and also enable IPv4 forwarding so that pods and services can communicate with the outside world via Calico’s host gateway.
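A sketch of that `Installation` resource, applied via a heredoc; the pool CIDR must match the `--cluster-cidr` we passed to k3s, and `natOutgoing` here is an illustrative default:

```bash
kubectl apply -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # Enable IPv4 forwarding so pods/services can reach beyond the host gateway
    containerIPForwarding: Enabled
    ipPools:
      # Must match the --cluster-cidr we passed to k3s
      - cidr: 172.16.0.0/24
        natOutgoing: Enabled
EOF
```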
Now if you run `kubectl get pods --all-namespaces`, you will notice all pods coming to life, along with a few Calico pods.
To complete our story on the Kubernetes side, let’s deploy an `nginx` pod and a service, which we will use for our connectivity test from the VM:
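A minimal way to do that, assuming the stock `nginx` image and a plain `ClusterIP` service:

```bash
# Create the nginx deployment and expose it as a ClusterIP service on port 80
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80
```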
The goal is to make a VM communicate with Kubernetes services without using `LoadBalancer` services. In our case we need to talk to the `nginx` service.
Let’s create a new multipass VM called `vm1`:
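```bash
# No special sizing needed; the defaults are fine for a curl test
multipass launch --name vm1
```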
Let’s copy the `cluster1` kubeconfig into `vm1`. It’s not mandatory, but it helps to run kubectl commands from within `vm1`:
```bash
multipass exec vm1 -- mkdir -p /home/ubuntu/.kube  # ensure the target directory exists
multipass transfer "$KUBECONFIG" vm1:/home/ubuntu/.kube/config
```
The rest of the demo commands need to be executed from within the VM, so let’s shell into `vm1` with `multipass exec vm1 bash`.
Once inside `vm1`, run the following commands to get the `nginx` service IP (`CLUSTER-IP`) and its pod IP (`POD_IP`):
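A sketch of those commands; the variable names are our own choice, and `app=nginx` is the label that `kubectl create deployment` sets:

```bash
# Point kubectl at the kubeconfig we copied in earlier
export KUBECONFIG=/home/ubuntu/.kube/config
# ClusterIP of the nginx service
NGINX_SVC_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
# IP of the first nginx pod
NGINX_POD_IP=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].status.podIP}')
echo "service: $NGINX_SVC_IP pod: $NGINX_POD_IP"
```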
Let’s try connecting to the NGINX service using its service IP `$NGINX_SVC_IP` or its pod IP `$NGINX_POD_IP`. You will notice both commands time out, as we don’t have a route to reach the Kubernetes service/pod.
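For example:

```bash
# Both of these time out at this point -- there is no route to the pod/service CIDRs yet
curl -I --connect-timeout 5 "http://$NGINX_SVC_IP"
curl -I --connect-timeout 5 "http://$NGINX_POD_IP"
```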
When we set up Calico, it created the host routes, and we enabled IP forwarding in its settings. Now adding a route from our `vm1` to `cluster1` for the `cluster-cidr` and `service-cidr` should enable us to communicate with the NGINX service using its `$NGINX_SVC_IP` or `$NGINX_POD_IP`.
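A sketch of those route commands, assuming `cluster1`’s address is `192.168.64.2` (check yours with `multipass info cluster1` on the host):

```bash
CLUSTER1_IP=192.168.64.2   # assumption: replace with your cluster1 address
sudo ip route add 172.16.0.0/24 via "$CLUSTER1_IP"   # cluster-cidr (pods)
sudo ip route add 172.18.0.0/20 via "$CLUSTER1_IP"   # service-cidr
```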
Let’s check the IP routes using the command `sudo ip route show`; it should now show routes to the `service-cidr` and `cluster-cidr` via the VM’s default route `192.168.64.1`, i.e. via the host.
Now when we try the cURL commands again, they should give us a successful result like `HTTP/1.1 200`.
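For example:

```bash
curl -I "http://$NGINX_SVC_IP"   # expect: HTTP/1.1 200 OK
curl -I "http://$NGINX_POD_IP"   # the pod IP is now reachable directly as well
```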
Just to summarize what we did:
- Set up multipass VMs `cluster1` and `vm1`
- Set up a Kubernetes k3s cluster in `cluster1` with the Calico network plugin
- As Calico has host routes to the Kubernetes host VM, i.e. `cluster1`, it enables traffic from `vm1` to route to the Kubernetes services using their service IP (`CLUSTER-IP`) or pod IP
- Finally, we added routes in `vm1` via the `cluster1` host IP to allow us to directly communicate with Kubernetes pods/services using their service IP or pod IP