It never ceases to amaze me how technology pushes forward, making our lives easier. I like to look at the history of networking: even ten years ago, physical network labs were very popular among networking enthusiasts. Nowadays, we have many options to play with networking equipment in software, which makes the learning experience much cheaper and more time-efficient. One of the shining stars in the networking sky is undoubtedly Containerlab – a great piece of software that sets up and configures network environments for us. In this article, we will go through two example topologies based on Cumulus Linux routers.
Containerlab installation
First, we need to install Containerlab. There are multiple ways to do this; for the most recent instructions, visit the official documentation.
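At the time of writing, the docs describe a quick install script that boils the whole process down to one line (shown here for convenience; check the documentation in case it has changed):
# download and run the official installation script
bash -c "$(curl -sL https://get.containerlab.dev)"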
Docker image
To deploy a Containerlab topology, we need a Cumulus Linux Docker image. As of now, there’s no official release available, but that’s not as big a concern as you might think. We have talented and ingenious engineers in the network automation realm, and one of them – Michael Kashin – decided to build Cumulus Linux images from scratch. A complete list of available versions is on Docker Hub, and the Dockerfiles used to build the images are committed to the GitHub repository.
For this article, I’ll use the Cumulus Linux 5.3.0 image, which can be referenced by its name and tag – networkop/cx:5.3.0.
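If you want to save time during the first deployment, you can pull the image beforehand:
# fetch the Cumulus Linux 5.3.0 image from Docker Hub
docker pull networkop/cx:5.3.0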
Let’s use it to build our topologies!
Topologies
I’ve prepared two topologies. Each has two Cumulus nodes – Stormwind and Ironforge – interconnected via port swp1. The difference between them is the way the nodes are configured.
Configs from this article are committed to the Containerlab-topologies repository.
Nodes with bound config
In this scenario, the interface config is prepared before spinning up the topology. Since Cumulus Linux stores interface config in the /etc/network/interfaces file, we can spin up a Cumulus container, collect the default configuration, modify it, and then bind it to the containers that will be deployed later.
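If you want to grab that default file yourself, a quick way to do it is to start a throwaway container and copy the file out. A sketch using plain Docker commands – the cx-defaults name is arbitrary:
# start a disposable container from the Cumulus image
docker run -d --name cx-defaults networkop/cx:5.3.0
# copy the default interfaces file to the host for editing
docker cp cx-defaults:/etc/network/interfaces ./interfaces
# remove the helper container
docker rm -f cx-defaults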
Today’s config is rather small because it’s just a static IP address assignment. Let’s take a look at the interface file prepared for the first container – Stormwind:
auto swp1
iface swp1
    address 10.0.0.1/24
And the second one – Ironforge:
auto swp1
iface swp1
    address 10.0.0.2/24
Having those files prepared, we can attach them to the containers we want to create. Let’s take a glance at the Containerlab topology file.
name: 2-nodes-config-bind
topology:
  kinds:
    cvx:
      image: networkop/cx:5.3.0
  nodes:
    Stormwind:
      kind: cvx
      runtime: docker
      binds:
        - Stormwind/interfaces:/etc/network/interfaces
    Ironforge:
      kind: cvx
      runtime: docker
      binds:
        - Ironforge/interfaces:/etc/network/interfaces
  links:
    - endpoints: ["Stormwind:swp1", "Ironforge:swp1"]
Each topology has its own name; in our case, it’s 2-nodes-config-bind.
Right after the name, we start the biggest configuration block of the file – topology.
The kinds part holds configuration common to all Cumulus Linux nodes in the topology. In our case, both nodes use the same Docker image, which is why it’s specified only once, in this section. If you want to learn more about kinds, visit the official Containerlab documentation.
The nodes section contains the definition of our two nodes. It’s a key-value structure, where keys are the node names and values are each node’s configuration. Containerlab needs to know what kind of container we want to deploy, hence the kind attribute. Then we define the runtime – in our case, docker. In the last section, we have a list of binds.
Binds are files or even directories from the host system that we attach to the containers. In our case, we have two files with interface configuration, and we want each container to be able to read its own file, which is why we attach them. To define a bind, we need two paths:
- Source – path to the file/directory in the host filesystem
- Destination – path in the container filesystem where we want to bind the source file/directory
Having those paths, we can create a bind entry in the form source_path:destination_path.
We can have more than one bind per container. The binds section is a list, so if you want to add more, append them on the following lines, each starting with indentation and a dash (-), as in the sketch below.
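For illustration, a node with two binds could look like this (a hypothetical snippet – the frr.conf file and its destination path are made up for the example):
Stormwind:
  kind: cvx
  runtime: docker
  binds:
    - Stormwind/interfaces:/etc/network/interfaces
    - Stormwind/frr.conf:/etc/frr/frr.conf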
Deployment
Let’s check how this configuration works in practice.
To deploy a Containerlab configuration, execute the sudo clab deploy command in the directory with the topology file – topology.clab.yaml in our case. After pressing Enter, just sit back, relax, and observe the magic!
radokochman@Hellfire:~/projects/repos/Containerlab-topologies/Cumulus Linux/2-nodes-config-bind$ sudo clab deploy
INFO[0000] Containerlab v0.58.0 started
INFO[0000] Parsing & checking topology file: topology.clab.yaml
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="3fff:172:20:20::/64", MTU=1500
INFO[0000] Creating lab directory: /home/radokochman/projects/repos/Containerlab-topologies/Cumulus Linux/2-nodes-config-bind/clab-2-nodes-config-bind
INFO[0000] Creating container: "Stormwind"
INFO[0000] Creating container: "Ironforge"
INFO[0000] Created link: Stormwind:swp1 <--> Ironforge:swp1
INFO[0000] Adding containerlab host entries to /etc/hosts file
INFO[0000] Adding ssh config for containerlab nodes
INFO[0000] 🎉 New containerlab version 0.60.1 is available! Release notes: https://containerlab.dev/rn/0.60/#0601
Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/
+---+------------------------------------+--------------+--------------------+------+---------+----------------+----------------------+
| # | Name | Container ID | Image | Kind | State | IPv4 Address | IPv6 Address |
+---+------------------------------------+--------------+--------------------+------+---------+----------------+----------------------+
| 1 | clab-2-nodes-config-bind-Ironforge | 404bb35264a3 | networkop/cx:5.3.0 | cvx | running | 172.20.20.2/24 | 3fff:172:20:20::2/64 |
| 2 | clab-2-nodes-config-bind-Stormwind | fd84e360ac5d | networkop/cx:5.3.0 | cvx | running | 172.20.20.3/24 | 3fff:172:20:20::3/64 |
+---+------------------------------------+--------------+--------------------+------+---------+----------------+----------------------+
After the deployment is done, we have all the necessary information about nodes in the form of a table.
Containerlab creates a hostname alias for each deployed node. It consists of the topology name and the hostname that’s defined in the topology file.
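If you need that summary table again later, there’s no need to redeploy – the inspect command prints it for a running lab:
# print the summary table for the lab defined in the topology file
sudo clab inspect -t topology.clab.yaml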
Let’s check if our Cumulus Linux containers are in the expected state!
Stormwind
We can connect to the Stormwind container via SSH. The default credentials are cumulus/cumulus.
radokochman@Hellfire:~/projects/repos/Containerlab-topologies$ ssh cumulus@clab-2-nodes-config-bind-Stormwind
Warning: Permanently added 'clab-2-nodes-config-bind-stormwind' (ED25519) to the list of known hosts.
cumulus@clab-2-nodes-config-bind-stormwind's password:
Linux Stormwind 6.8.0-49-generic #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2 x86_64
Welcome to NVIDIA Cumulus (R) Linux (R)
cumulus@Stormwind:~$
The container is reachable, and we’re able to log in. Let’s check the interface status with the nv show interface command.
cumulus@Stormwind:~$ nv show interface
Interface MTU Speed State Remote Host Remote Port Type Summary
--------- ----- ----- ----- ----------- ----------- -------- --------------------------------
+ eth0 1500 10G up Ironforge eth0 eth IP Address: 172.20.20.3/24
eth0 IP Address: 3fff:172:20:20::3/64
+ lo 65536 up loopback IP Address: 127.0.0.1/8
lo IP Address: ::1/128
+ swp1 9216 10G up Ironforge swp1 swp IP Address: 10.0.0.1/24
There are three interfaces listed:
- eth0 – the management interface with an IP address automatically assigned by Containerlab
- lo – default loopback interface
- swp1 – our custom interface connecting Stormwind to Ironforge
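Since Cumulus Linux is Linux underneath, we can also cross-check the assignment with standard iproute2 tooling – just an alternative view of the same data:
# brief one-line summary of the swp1 interface and its addresses
ip -br addr show swp1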
Let’s check the details of the swp1 interface with the nv show interface swp1 command.
cumulus@Stormwind:~$ nv show interface swp1
operational applied description
----------------------- ----------------- ------- ----------------------------------------------------------------------
type swp The type of interface
lldp
dcbx-ets-config-tlv off DCBX ETS config TLV flag
dcbx-ets-recomm-tlv off DCBX ETS recommendation TLV flag
dcbx-pfc-tlv off DCBX PFC TLV flag
[neighbor] Ironforge LLDP neighbors
ip
[address] 10.0.0.1/24 ipv4 and ipv6 address
link
auto-negotiate off Link speed and characteristic auto negotiation
duplex full Link duplex
mtu 9216 interface mtu
speed 10G Link speed
state up The state of the interface
stats
carrier-transitions 2 Number of times the interface state has transitioned between up and...
in-bytes 14107 B total number of bytes received on the interface
in-drops 0 number of received packets dropped
in-errors 0 number of received packets with errors
in-pkts 51 total number of packets received on the interface
out-bytes 14177 B total number of bytes transmitted out of the interface
out-drops 0 The number of outbound packets that were chosen to be discarded eve...
out-errors 0 The number of outbound packets that could not be transmitted becaus...
out-pkts 52 total number of packets transmitted out of the interface
mac aa:c1:ab:39:6a:19 MAC Address on an interface
ifindex 9 The kernel/system assigned interface index
Besides the information that we got earlier with the shorter version of the command, we have counters here. We also have information about the neighbor – Ironforge, our second node. Let’s check the LLDP section of the swp1 interface.
cumulus@Stormwind:~$ nv show interface swp1 lldp neighbor
Neighbor Remote IP Model SW Version Remote Port
--------- ----------- -------------------------------- ---------- -----------
Ironforge 172.20.20.2 Standard PC (i440FX + PIIX, 1996 5.3.0 swp1
We’re getting more details regarding our neighboring device, but there is an option to inspect it even further by adding the neighbor hostname to the command.
cumulus@Stormwind:~$ nv show interface swp1 lldp neighbor Ironforge
operational applied description
------------------------- ----------------------------------------------------------------------------- ------- ----------------------------------------------------------------------
age 1115 Seconds since initial discovery
bridge
[vlan] Set of vlans understood by this neighbor.
chassis
chassis-id 02:42:ac:14:14:02 Chassis ID of the neighbor
management-address-ipv4 172.20.20.2 Network IPv4 address that can be used to reach the neighbor
management-address-ipv6 3fff:172:20:20::2 Network IPv6 address that can be used to reach the neighbor
system-description Cumulus Linux version 5.3.0 running on QEMU Standard PC (i440FX + PIIX, 1996) The neighbor system description connected to this interface
system-name Ironforge The neighbor system name that is connected to this interface
lldp-med
device-type Network Connectivity Device Device Type
inventory
firmware-revision rel-1.16.3-0-ga6ed6b701f0a-prebu Firmware Revision
manufacturer QEMU Manufacturer
model Standard PC (i440FX + PIIX, 1996 Model
serial-number Not Specified Serial Number
software-revision 5.3.0 Software Revision
port
description swp1 Description of the neighbor's port, as described by the neighbor
name swp1 The port that is connected to this interface
ttl 120 How long, in seconds, information from the neighbor should be consi...
type ifname Type of the neighbor's port, as described by the neighbor
pmd-autoneg
[advertised] Autoneg advertised capabilities
mau-oper-type 10GigBaseCX4 - X copper over 8 pair 100-Ohm balanced cable MAU oper type
Now we have complete information about the neighbor, including hardware and software details.
Ironforge
Let’s also check interface details on the Ironforge node.
cumulus@Ironforge:~$ nv show interface swp1
operational applied description
----------------------- ----------------- ------- ----------------------------------------------------------------------
type swp The type of interface
lldp
dcbx-ets-config-tlv off DCBX ETS config TLV flag
dcbx-ets-recomm-tlv off DCBX ETS recommendation TLV flag
dcbx-pfc-tlv off DCBX PFC TLV flag
[neighbor] Stormwind LLDP neighbors
ip
[address] 10.0.0.2/24 ipv4 and ipv6 address
link
auto-negotiate off Link speed and characteristic auto negotiation
duplex full Link duplex
mtu 9216 interface mtu
speed 10G Link speed
state up The state of the interface
stats
carrier-transitions 2 Number of times the interface state has transitioned between up and...
in-bytes 16927 B total number of bytes received on the interface
in-drops 1 number of received packets dropped
in-errors 0 number of received packets with errors
in-pkts 61 total number of packets received on the interface
out-bytes 16857 B total number of bytes transmitted out of the interface
out-drops 0 The number of outbound packets that were chosen to be discarded eve...
out-errors 0 The number of outbound packets that could not be transmitted becaus...
out-pkts 60 total number of packets transmitted out of the interface
mac aa:c1:ab:a3:d7:11 MAC Address on an interface
ifindex 10 The kernel/system assigned interface index
Similarly to the Stormwind node, we can see from the counters that packets are flowing through the interface. Still, the only way to be sure everything works is to perform connectivity tests. Let’s do some!
Reachability checks
We will start with a ping from the Stormwind node to Ironforge, which has the 10.0.0.2 IP address assigned. We will send 5 ICMP echo requests.
cumulus@Stormwind:~$ ping 10.0.0.2 -c 5
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.049 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.048 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=0.048 ms
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 107ms
rtt min/avg/max/mdev = 0.042/0.046/0.049/0.006 ms
Connectivity is there; that’s a good sign. Let’s also check the other direction – a ping from Ironforge to Stormwind.
cumulus@Ironforge:~$ ping 10.0.0.1 -c 5
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.057 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.060 ms
64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=0.060 ms
--- 10.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 86ms
rtt min/avg/max/mdev = 0.041/0.054/0.060/0.009 ms
It also works.
Our work is done here, so let’s destroy the topology with the sudo clab destroy command.
radokochman@Hellfire:~/projects/repos/Containerlab-topologies/Cumulus Linux/2-nodes-config-bind$ sudo clab destroy
INFO[0000] Parsing & checking topology file: topology.clab.yaml
INFO[0000] Parsing & checking topology file: topology.clab.yaml
INFO[0000] Destroying lab: 2-nodes-config-bind
INFO[0000] Removed container: clab-2-nodes-config-bind-Ironforge
INFO[0000] Removed container: clab-2-nodes-config-bind-Stormwind
INFO[0000] Removing containerlab host entries from /etc/hosts file
INFO[0000] Removing ssh config for containerlab nodes
The containers are now stopped, and everything is cleaned up automatically by Containerlab.
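By default, destroy leaves the lab directory created during deployment in place. If you want that removed as well, there’s a flag for it:
# destroy the lab and also remove its lab directory
sudo clab destroy --cleanup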
Nodes configured with exec
There is another way to configure our nodes besides attaching a prepared config. We can execute the configuration commands after the containers are deployed, so that when we SSH to them, the configuration is already in place. In this scenario, all we want to do is configure the swp1 interface on both nodes.
In Cumulus Linux, we can set a static IP address with the nv set interface swp1 ip address command. But that’s not the end: after changing the configuration, we need to apply it with the nv config apply command. It seems very easy, but there is a trap. The apply command requires user interaction, because we need to confirm the changes to be made. That creates a dialog scenario, which is hard to handle in some automation cases, so we want to avoid it. Fortunately, there are --no-prompt and -y options that can be appended to the nv config apply command to automatically approve all changes without the dialog.
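Done by hand inside a container, the whole sequence boils down to two commands (the address below is Stormwind’s; adjust it per node):
# stage the interface address in the NVUE candidate configuration
nv set interface swp1 ip address 10.0.0.1/24
# apply it without the interactive confirmation dialog
nv config apply -y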
Let’s build a Containerlab topology file with the configuration commands.
name: 2-nodes-config-exec
topology:
  kinds:
    cvx:
      image: networkop/cx:5.3.0
  nodes:
    Stormwind:
      kind: cvx
      runtime: docker
      exec:
        - sleep 30
        - nv set interface swp1 ip address 10.0.0.1/24
        - nv config apply --no-prompt -y
    Ironforge:
      kind: cvx
      runtime: docker
      exec:
        - sleep 30
        - nv set interface swp1 ip address 10.0.0.2/24
        - nv config apply --no-prompt -y
  links:
    - endpoints: ["Stormwind:swp1", "Ironforge:swp1"]
The main difference here is the replacement of binds with the exec section. Under it, there is a list of commands that are executed in the container after deployment. What’s important here is the sleep 30 command at the beginning: the Cumulus Linux container needs some time to initialize after it’s spun up, and only once all the daemons are up and running can we execute the configuration commands. The sleep waits 30 seconds before moving on to the following commands. From my experience, that’s enough for the container to complete the initialization process, but keep in mind that in your case it may take longer, so if you encounter problems with the configuration of the containers, you can extend the sleep time.
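If a fixed delay feels fragile, one alternative is to poll until NVUE responds instead of sleeping blindly. This is a sketch I haven’t battle-tested – it assumes bash is available in the container and that nv config show fails until the NVUE daemon is ready:
exec:
  # wait until NVUE answers, checking every 2 seconds
  - bash -c 'until nv config show > /dev/null 2>&1; do sleep 2; done'
  - nv set interface swp1 ip address 10.0.0.1/24
  - nv config apply --no-prompt -y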
The deployment process is the same: after the sudo clab deploy command, the topology is deployed, and after some time we’re able to connect to the containers.
Configuration check
Let’s see if the Stormwind and Ironforge nodes have the desired interface configuration. We will use an already familiar command for that – nv show interface.
cumulus@Stormwind:mgmt:~$ nv show interface
Interface MTU Speed State Remote Host Remote Port Type Summary
--------- ----- ----- ----- ----------- ----------- -------- --------------------------------
+ eth0 1500 10G up cumulus eth0 eth IP Address: 172.20.20.2/24
eth0 IP Address: 3fff:172:20:20::2/64
+ lo 65536 up loopback IP Address: 127.0.0.1/8
lo IP Address: ::1/128
+ swp1 9216 10G up cumulus swp1 swp IP Address: 10.0.0.1/24
Interface swp1 on the Stormwind node has the correct IP address; let’s now move on to Ironforge.
cumulus@Ironforge:mgmt:~$ nv show interface
Interface MTU Speed State Remote Host Remote Port Type Summary
--------- ----- ----- ----- ----------- ----------- -------- --------------------------------
+ eth0 1500 10G up cumulus eth0 eth IP Address: 172.20.20.3/24
eth0 IP Address: 3fff:172:20:20::3/64
+ lo 65536 up loopback IP Address: 127.0.0.1/8
lo IP Address: ::1/128
+ swp1 9216 10G up cumulus swp1 swp IP Address: 10.0.0.2/24
Everything looks fine; we’re ready to check the connectivity between the nodes.
Reachability checks
Again, we will start from the Stormwind node.
cumulus@Stormwind:mgmt:~$ ping 10.0.0.2 -c 5
vrf-wrapper.sh: switching to vrf "default"; use '--no-vrf-switch' to disable
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.052 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=0.051 ms
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 108ms
rtt min/avg/max/mdev = 0.046/0.066/0.113/0.025 ms
Ping works; let’s check the opposite direction.
cumulus@Ironforge:mgmt:~$ ping 10.0.0.1 -c 5
vrf-wrapper.sh: switching to vrf "default"; use '--no-vrf-switch' to disable
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.056 ms
64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=0.069 ms
--- 10.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 136ms
rtt min/avg/max/mdev = 0.053/0.063/0.072/0.009 ms
Conclusion
The usability of Containerlab is just amazing. We can set up a fully configured network topology within seconds, with just a topology file and one command. I view this as a huge step forward compared to classic virtual machines: environment preparation time has shrunk dramatically, which gives us precious time to do more and more labbing!