Kubernetes CNI Plugin Implementation





Customizing Kubernetes networking may require detecting each Pod's creation and deletion events and controlling the creation and deletion of the Pod's network interface. This is because, in order to connect a Pod's network interface to a customized network, the Pod's information has to be tracked and managed.


The Pod controller that handles Pod events was easy to implement using the Informer and Lister provided by the Kubernetes client library, as shown in the previous blog post. The virtual interface information used by each Pod is not managed by Kubernetes itself, but by defining a CRD (Custom Resource Definition) and registering it with Kubernetes, you can let Kubernetes manage that information through its API. However, if you want to customize how a Pod's Container Network Interface (CNI) is created and deleted, you need to write your own CNI plug-in and make it available to Kubernetes.
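
As a reminder of the controller side, the sketch below shows the general Informer pattern with client-go; clientset and the handler bodies are placeholders, not the actual code from the previous post.


// A minimal sketch of the Pod controller idea using client-go informers
// (k8s.io/client-go/informers, k8s.io/client-go/tools/cache).
// clientset is assumed to be an already-initialized kubernetes.Interface.
factory := informers.NewSharedInformerFactory(clientset, 0)
podInformer := factory.Core().V1().Pods().Informer()
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	// Record the Pod's interface information when a Pod is created
	AddFunc: func(obj interface{}) { /* ... */ },
	// Clean up the Pod's interface information when a Pod is deleted
	DeleteFunc: func(obj interface{}) { /* ... */ },
})
stopCh := make(chan struct{})
factory.Start(stopCh)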


A CNI plug-in connects a container to the network when the container is created and removes the resources allocated to the container when it is deleted. Kubernetes looks up the CNI plug-in to use and executes it whenever a Pod is created.


Implementing CNI using the CNI Go library

https://github.com/containernetworking/cni provides a Go library and plug-in template for creating CNI plug-ins.


In addition, https://github.com/containernetworking/plugins provides plug-ins implemented with the library above.


By using these libraries and referring to the existing plug-ins, a basic CNI plug-in is simple to implement. The CNI library provides skeleton code for a CNI plug-in, and the following is plug-in code developed from that skeleton.


package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	cniversion "github.com/containernetworking/cni/pkg/version"
  bv "github.com/containernetworking/plugins/pkg/utils/buildversion"
)

func main() {
	skel.PluginMain(
		cmdAdd,
		cmdCheck,
		cmdDel,
		cniversion.All,
		bv.BuildString("loxi-cni"),
	)
}

skel.PluginMain is the entry point of the CNI plug-in. Its arguments are as follows.

  • cmdAdd : function called when an interface is added (the CNI ADD command)

  • cmdCheck : function called when an interface is checked (the CNI CHECK command)

  • cmdDel : function called when an interface is deleted (the CNI DEL command)

  • cniversion.All : the CNI spec versions the plug-in supports. cniversion.All means every version from 0.1.0 to 1.0.0 is supported (the latest CNI spec version at the time of writing is 1.0.0).

  • bv.BuildString("loxi-cni") : a string describing the plug-in (here, build information generated for the name loxi-cni).

In other words, a CNI plug-in is essentially complete once you define functions for the add/check/delete behavior and register them with the PluginMain function.

The cmdAdd, cmdDel, and cmdCheck functions must all have the following arguments and return value.


func cmdAdd(args *skel.CmdArgs) error

As long as the signature matches, the actual implementation does not matter. As mentioned above, cmdAdd only needs to create the virtual interface and connect it to the network when a container is created, and cmdDel only needs to clean up the related resources when the container is deleted.
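
For illustration only (these stubs are not the implementation described below), functions as simple as the following already satisfy the signature expected by skel.PluginMain:


// Minimal stubs that satisfy the func(*skel.CmdArgs) error signature.
func cmdCheck(args *skel.CmdArgs) error {
	// Nothing to verify in this sketch.
	return nil
}

func cmdDel(args *skel.CmdArgs) error {
	// A real plug-in would remove the interfaces and release the
	// resources that cmdAdd allocated for this container.
	return nil
}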


In my case, I implemented cmdAdd as follows, because when a container is created I need to rename the virtual interface that will be attached to the networking bridge.


func cmdAdd(args *skel.CmdArgs) error {
	// Load the options passed in at container creation and store them in n
	n, cniVersion, err := loadNetConf(args.StdinData, args.Args)
	if err != nil {
		return err
	}

	// Set up the bridge used for networking between containers
	br, brInterface, err := setupBridge(n)
	if err != nil {
		return err
	}

	// Get the network namespace in which the container runs
	netns, err := ns.GetNS(args.Netns)

	if err != nil {
		return fmt.Errorf("failed to open netns %q: %v", args.Netns, err)
	}
	defer netns.Close()

	// Create a pair of virtual (veth) interfaces.
	// containerInterface is used inside the container's namespace (netns)
	// hostInterface is attached to the bridge for inter-container networking.
	// args.IfName is used as the name of containerInterface
	hostInterface, containerInterface, err := setupVeth(netns, br, args.IfName, n.MTU, n.HairpinMode, n.Vlan, n.mac)
	if err != nil {
		return err
	}

The host interface is the virtual interface that will be attached to the networking bridge, and the container interface is the container-side virtual interface paired with it. Inside the setupVeth function, the host interface is renamed and the container interface is moved into the container's namespace. The CNI library also provides helper functions for these steps.


	// Generate the name to assign to hostInterface
	loxiVethName, loxiIndex, err := findNotUsedVethName()
	if err != nil {
		return nil, nil, err
	}

	// Enter the container-side namespace and work there
	err = netns.Do(func(hostNS ns.NetNS) error {
		// Use the CNI library (github.com/containernetworking/plugins/pkg/ip) to set
		// the virtual interface's name, MTU, and namespace
		// ifName = the container veth name, loxiVethName = the host veth name
		hostVeth, containerVeth, err := ip.SetupVethWithName(ifName, loxiVethName, mtu, hostNS)
		if err != nil {
			return err
		}
		contIface.Name = containerVeth.Name
		contIface.Mac = containerVeth.HardwareAddr.String()
		contIface.Sandbox = netns.Path()
		hostIface.Name = hostVeth.Name
		return nil
	})
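
One step not shown in the snippet above is attaching the host-side veth to the bridge after setupVeth returns. A rough sketch of that step using github.com/vishvananda/netlink, assuming br is the *netlink.Bridge returned by setupBridge and hostIface is the host interface object, could look like this:


	// Look up the host-side veth by the name that was just assigned to it
	hostVeth, err := netlink.LinkByName(hostIface.Name)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to lookup %q: %v", hostIface.Name, err)
	}
	// Attach the host-side veth to the bridge so containers can reach each other
	if err := netlink.LinkSetMaster(hostVeth, br); err != nil {
		return nil, nil, fmt.Errorf("failed to connect %q to bridge %v: %v", hostIface.Name, br.Attrs().Name, err)
	}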

Similarly, cmdDel can delete interfaces using functions provided by the library.


	// Enter the container-side namespace and work there
	err = ns.WithNetNSPath(args.Netns, func(hostNS ns.NetNS) error {
		var err error
		// Call the library function to delete the virtual interface
		_, err = ip.DelLinkByNameAddr(args.IfName)
		if err != nil && err == ip.ErrLinkNotFound {
			return nil
		}
		return err
	})
	if err != nil {
		return err
	}
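
If IP allocation is delegated to an IPAM plug-in, as in the host-local configuration shown further below, cmdDel should also release the allocated address. A sketch using github.com/containernetworking/plugins/pkg/ipam, assuming n is the parsed network configuration:


	// Ask the IPAM plugin (host-local in this setup) to release the
	// address that was allocated for this container in cmdAdd
	if err := ipam.ExecDel(n.IPAM.Type, args.StdinData); err != nil {
		return err
	}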

Building the plug-in produces a single binary file; I built mine under the name loxi-cni. Now Kubernetes needs to be configured to use the CNI plug-in.


Configuring Kubernetes to use the CNI plug-in

To use a CNI plug-in, the kubelet must be started with the --network-plugin=cni option. With that option enabled, the kubelet reads the configuration files in the CNI configuration directory (default: /etc/cni/net.d) and sets up CNI for each Pod according to the configuration in those files.

The following is an example of the CNI configuration file I wrote.


// /etc/cni/net.d/01-loxilight.conf
{
        "cniVersion": "0.2.0",
        "name": "loxilight_cni",
        "type": "loxi-cni",
        "bridge": "hsvlan100",
        "isGateway": true,
        "isDefaultGateway": false,
        "mtu": 1500,
        "hairpinMode": false,
        "promiscMode": false,
        "vlanID": 100,
        "ipam": {
                "type": "host-local",
                "subnet": "10.233.68.0/24",
                "routes": [
                        {"dst": "0.0.0.0/0"}
                ]
        }
}

Here, the required fields in the configuration file are cniVersion, name, and type.

  • cniVersion : the CNI spec version the configuration follows (the latest at the time of writing is 1.0.0)

  • name : the logical name of the network configuration

  • type : the name of the CNI plug-in to execute. An executable file with that name must exist in the /opt/cni/bin folder

Looking at the type field in the example above, you can see that it is set to loxi-cni, the name of the CNI plug-in built earlier. The kubelet reads the configuration file and looks for the plug-in named in type in the CNI binary directory (default: /opt/cni/bin), so a custom CNI plug-in must be copied into that directory in advance.


The remaining options are specific to my CNI plug-in. The subnet under ipam is the Pod CIDR range that Kubernetes assigned to this node, and the plug-in is implemented to assign addresses from that range to newly created containers.
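
For reference, a rough sketch of how cmdAdd can delegate that address allocation to the host-local IPAM plug-in, using github.com/containernetworking/plugins/pkg/ipam and github.com/containernetworking/cni/pkg/types/current (assuming n is the parsed network configuration; this is an illustration, not the exact implementation):


	// Run the IPAM plugin named in the configuration (host-local here);
	// it returns an address from the node's Pod CIDR, e.g. 10.233.68.0/24
	r, err := ipam.ExecAdd(n.IPAM.Type, args.StdinData)
	if err != nil {
		return err
	}
	// Convert to the versioned result type; result.IPs holds the allocated
	// address to be configured on the container interface
	result, err := current.NewResultFromResult(r)
	if err != nil {
		return err
	}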


Now, once the configuration file and the CNI plug-in binary are in the correct paths, the kubelet will automatically call the plug-in specified in type to set up the network every time a Pod is created.






