
Demystifying Cilium: Learn How to Build an eBPF CNI Plugin from Scratch

January 12, 2024
Adam Sayah

Over the past couple of years, my focus has been on application networking, dealing with API gateways and service mesh, with the core mission of securing traffic and the network. Lately, I’ve been delving into pushing the boundaries, aiming to control traffic at an even earlier stage in the network stack – think CNI and eBPF.

Fascinating technologies, like Cilium and other projects, use a mix of CNI and eBPF to provide a performant and secure networking layer to Kubernetes. I recently led a CNI eBPF workshop to demystify these technologies. In case you couldn’t attend, or want a refresher, this is my write-up of the event. We won’t be reinventing the wheel or creating a complex system – it’ll be like popping the hood of a car and understanding the engine. So, buckle up, and let’s get started!

First, let’s dive into the technical details. We’ll start with understanding what a CNI is and how it operates.

What Is A CNI?

CNI, or Container Network Interface, serves as the wiring for pods in a Kubernetes cluster. It determines how a pod gets a specific IP, connects to the network, and interacts with other entities. A CNI plugin, in our case, acts like a plumber, creating the necessary network plumbing between the pods, the node's host network, and the rest of the cluster.

The CNI project provides us with three main things: the specification of what a CNI is and how it should operate, some example implementations to use and reuse, and, finally, some libraries that simplify our lives so we don't have to reinvent the wheel every single time.

CNI Workflow

The CNI workflow involves several steps. When a container runtime needs to create a pod, it interacts with the CNI plugin. The plugin reads a network configuration, often a JSON file, and is then invoked with specific environment variables and configuration details. The CNI plugin performs its tasks, such as creating interfaces and wiring, and returns the results to the container runtime.
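To make that contract concrete, here is a hypothetical sketch of the plugin side, written as a shell function so it is easy to exercise (the address, version list, and result JSON are made up for illustration, not the workshop's actual code):

```shell
#!/bin/sh
# Hypothetical sketch of a CNI plugin's entry point. The runtime sets
# CNI_COMMAND and streams the network configuration (JSON) on stdin;
# the plugin answers with JSON on stdout.
cni_plugin() {
    config=$(cat)   # the network configuration from the runtime

    case "$CNI_COMMAND" in
    ADD)
        # A real plugin would create a veth pair, move one end into
        # $CNI_NETNS, assign an IP from the pod CIDR, and report it back.
        # Here we echo a canned result for illustration.
        echo '{"cniVersion":"1.0.0","ips":[{"address":"10.10.0.5/24"}]}'
        ;;
    DEL)
        # Teardown must be idempotent: succeed even if nothing is left.
        ;;
    CHECK)
        ;;
    VERSION)
        echo '{"cniVersion":"1.0.0","supportedVersions":["0.4.0","1.0.0"]}'
        ;;
    *)
        echo "unknown CNI_COMMAND: $CNI_COMMAND" >&2
        return 1
        ;;
    esac
}

# Example: what the runtime's ADD call would produce.
echo '{"name":"ebpfcni"}' | CNI_COMMAND=ADD cni_plugin
```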

CNI Configuration File

Let’s take a look at a simple CNI configuration file:

{
  "cniVersion": "1.0.0",
  "name": "ebpfcni",
  "type": "ebpfcni",
  "podcidr": "10.10.0.0/24"
}

This basic file includes the CNI version, a name, the type (the name of the plugin binary to invoke), and custom key-value pairs (like podcidr). The specification also defines well-known keys, such as ipam and dns, and capability flags.

CNI Plugin Execution

Executing a CNI plugin involves the container runtime reading a configuration file (typically found in /etc/cni/net.d/) and invoking the plugin binary with a set of environment variables. The crucial ones are CNI_COMMAND (ADD, DEL, CHECK, or VERSION), CNI_CONTAINERID (unique container identifier), CNI_NETNS (path to the pod's network namespace), CNI_IFNAME (name of the interface to create inside the container), CNI_ARGS (extra key-value arguments), and CNI_PATH (the list of paths to search for plugin binaries).
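Put together, the runtime side of an invocation looks roughly like this sketch (paths and values are illustrative; a stub script stands in for a real plugin binary, which the runtime would execute directly):

```shell
#!/bin/sh
# Rough sketch of the runtime side of a CNI invocation; paths and values
# are illustrative. The stub below stands in for a real binary such as
# /opt/cni/bin/ebpfcni.
plugin=$(mktemp)
cat > "$plugin" <<'EOF'
# A plugin sees the config on stdin and the CNI_* variables in its environment.
echo "command=$CNI_COMMAND ifname=$CNI_IFNAME"
EOF

# The runtime reads the file from /etc/cni/net.d/ and pipes it to the plugin;
# the config is inlined here to keep the sketch self-contained.
echo '{"cniVersion":"1.0.0","name":"ebpfcni","type":"ebpfcni","podcidr":"10.10.0.0/24"}' |
  CNI_COMMAND=ADD CNI_CONTAINERID=abc123 \
  CNI_NETNS=/var/run/netns/abc123 CNI_IFNAME=eth0 \
  sh "$plugin"
# prints: command=ADD ifname=eth0
```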

Multiple CNIs and Delegation

You can use multiple CNI plugins for different purposes; they complement rather than replace one another. For instance, one plugin may create interfaces while another handles IP assignment. The result of one plugin's invocation is passed as input to the next.
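Chaining is declared in a single configuration (a .conflist file) whose plugins array runs in order. This hypothetical example chains our ebpfcni plugin with the reference portmap plugin; each plugin in the list receives the previous one's output as prevResult:

```json
{
  "cniVersion": "1.0.0",
  "name": "ebpfcni",
  "plugins": [
    {
      "type": "ebpfcni",
      "podcidr": "10.10.0.0/24"
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```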

What is eBPF?

eBPF, or extended Berkeley Packet Filter, is a versatile and lightweight virtual machine that runs within the Linux kernel. It allows for efficient, customizable packet filtering, monitoring, and tracing of network events and system behavior, making it a powerful tool for networking and performance analysis in the Linux ecosystem. In Cilium, it is used to optimize the performance of the CNI and to enforce network policies.

Hands-On Workshop

Exercise 1: Create a CNI

First, I’ll guide you through the creation of a Bridge CNI plugin using Bash.

Bridge CNI plugin

During the live coding session, we set up a Kubernetes cluster and explored the need for a dynamic solution to fetch Pod CIDR information in a real-world scenario. Watch exercise 1:

Exercise 2: eBPF Basics

Now that we understand how to successfully install the CNI, I’ll introduce eBPF (extended Berkeley Packet Filter) to enhance and control the interface we just created. eBPF is a Linux technology that lets users run custom programs sandboxed in the kernel. eBPF is event-driven: you attach code to specific hooks in the networking stack for tasks like monitoring, securing traffic, or detecting anomalies.
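For a flavor of what such a program looks like, here is a minimal sketch (hypothetical, not the workshop's exact code): an XDP program that lets every packet through, followed by the compile and attach commands, which need clang with a BPF target and root privileges:

```shell
#!/bin/sh
# Sketch of a minimal eBPF (XDP) program and how it would be attached to the
# interface our CNI created. Hypothetical code, not the workshop's exact files.
cat > xdp_pass.c <<'EOF'
#include <linux/bpf.h>

/* SEC() places the function in a named ELF section the loader looks for. */
#define SEC(name) __attribute__((section(name), used))

SEC("xdp")
int xdp_pass(struct xdp_md *ctx)
{
    /* Let every packet continue up the stack. */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
EOF

# Compiling and attaching require clang (with a BPF target) and root:
#   clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
#   ip link set dev cni0 xdp obj xdp_pass.o sec xdp
```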

eBPF basics

Watch how to set up a basic eBPF code, then integrate it into our CNI plugin, showcasing its potential for advanced network traffic control and analysis:

Exercise 3: eBPF Maps/Monitoring

Now let’s explore the use of eBPF for various networking scenarios. eBPF is ideal for observability:

BPF programs can be attached to almost any kernel function, making it possible to extract data or metrics for observing the state of your systems
Specific hookpoints (e.g. ‘kprobes’ and ‘tracepoints’) and infrastructure exist to enable BPF-based observability, performance monitoring, and tracing
Due to eBPF’s efficiency, you can process raw events as they happen in the kernel, enabling visibility that isn’t possible with non-eBPF based solutions

eBPF doesn’t allow user space to access kernel-space data directly; to share data between user space and the kernel, we use a BPF map.

Now we’ll capture the traffic again, record in a map the number of packets received from each source, then use a user-space program to read that map and expose the counts as metrics we can view in Prometheus.
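The kernel side of that pipeline can be sketched like this (a hypothetical XDP-based version, not the workshop's exact program; it assumes libbpf's headers are installed):

```shell
#!/bin/sh
# Sketch of the kernel side of the packet counter: a hash map keyed by
# source IPv4 address, incremented for every packet seen.
cat > count_src.c <<'EOF'
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* The BPF map shared with user space: source IPv4 -> packet count. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);    /* source address */
    __type(value, __u64);  /* packet count   */
} pkt_count SEC(".maps");

SEC("xdp")
int count_src(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks keep the eBPF verifier happy. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    __u32 src = ip->saddr;
    __u64 one = 1;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &src);
    if (count)
        __sync_fetch_and_add(count, 1);
    else
        bpf_map_update_elem(&pkt_count, &src, &one, BPF_ANY);

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
EOF

# Once compiled and attached, user space can read the counts (and turn them
# into Prometheus metrics) without stopping the program, for example with:
#   bpftool map dump name pkt_count
```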

eBPF maps and monitoring

Watch how to set up this new code segment focused on monitoring:

Exercise 4: eBPF for Security

Finally, let’s discuss security using eBPF, particularly in the context of Kubernetes Network Policies, which play an important role in fine-tuning and defining multi-tenancy.
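A toy version of such enforcement can be sketched as follows (hypothetical code, not the workshop's exact program; it assumes libbpf's headers are installed): drop any packet whose source address appears in a blocklist map that user space fills in, for example when a network policy denies a peer.

```shell
#!/bin/sh
# Sketch of a toy policy-enforcement program: drop packets whose source
# IPv4 address appears in a "blocklist" map maintained from user space.
cat > enforce_policy.c <<'EOF'
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Source addresses to block; user space fills this in. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);   /* blocked source IPv4 */
    __type(value, __u8);  /* presence flag       */
} blocked_src SEC(".maps");

SEC("xdp")
int enforce_policy(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    __u32 src = ip->saddr;
    if (bpf_map_lookup_elem(&blocked_src, &src))
        return XDP_DROP;   /* the "policy" denies this source */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
EOF

# User space could block 10.10.0.99 with, for example:
#   bpftool map update name blocked_src key 10 10 0 99 value 1
```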

eBPF for security

This final exercise showcases how eBPF can be used to enforce security rules in networking:

Build on Your Knowledge

When you dive into your Kubernetes cluster, I hope you’re now able to recognize elements and think, “This is probably an eBPF program, and this is how things are enforced.”

Learn how Gloo Network for Cilium, harnessing the power of eBPF, makes embracing Cilium in hybrid cloud setups a breeze.

To further explore this topic, check out the following links:
