2.3.2. Run container using docker¶
This section describes an example that uses Docker to run Virtual Service Router. It has been tested on Ubuntu 20.04, Ubuntu 22.04 and Red Hat Enterprise Linux 8.
Load kernel modules¶
Load the required kernel modules on the host node, as listed in Kernel modules.
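For example, assuming the list includes the vfio-pci, vhost_net and ppp_generic modules (the authoritative list is in the Kernel modules section):
$ for mod in vfio-pci vhost_net ppp_generic; do sudo modprobe $mod; done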
Docker installation¶
To install the latest version of Docker, follow the installation procedure in the Docker documentation.
Example on Ubuntu 22.04:
# apt -qy update
# apt -qy install apt-transport-https ca-certificates curl software-properties-common
# cat <<EOF > /etc/apt/sources.list.d/docker.list
deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -sc) stable
EOF
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# apt -qy update
# apt -qy install docker-ce docker-ce-cli containerd.io
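You can check that Docker is installed and that its daemon is running with:
# docker --version
# systemctl is-active docker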
User configuration¶
Ensure that the user running the following commands belongs to the docker and sudo groups:
# usermod -a -G docker <user>
# usermod -a -G sudo <user>
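Group membership takes effect at the next login; it can be verified with:
$ id -nG <user>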
Docker configuration¶
Configure Docker to use systemd as the cgroup manager:
# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl restart docker
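You can verify that Docker now uses the systemd cgroup driver:
$ docker info --format '{{.CgroupDriver}}'
systemd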
Pull image from 6WIND registry¶
The Docker image is available at:
https://download.6wind.com/vsr/<arch>-ce/<version>
First, login to the 6WIND registry, using the credentials provided by 6WIND support:
$ docker login download.6wind.com
Username: user@example.com
Password: *******
Pull the image:
$ VSR_IMAGE=download.6wind.com/vsr/x86_64-ce/3.9:3.9.1
$ docker pull ${VSR_IMAGE}
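Check that the image is now available locally:
$ docker image ls ${VSR_IMAGE}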
Note
You can also authenticate and browse your images from a web browser at https://download.6wind.com/.
Create and run the container¶
Session private directory¶
Create a private directory for this session:
$ SESSION_NAME=my_vsr
$ SESSION_PATH=/tmp/docker_sessions/${SESSION_NAME}
$ mkdir -p ${SESSION_PATH}
CPU and memory configuration¶
The following variables will be used by the next commands.
The CPUSET variable below contains the list of CPUs that will be dedicated to Virtual Service Router. It has to be customized to match the server hardware.
See also
lstopo(1) gives information about the hardware topology.
The variables related to memory usually do not need to be customized.
# CPUs in which to allow execution of the container.
# At least 2 CPUs are required (1 for fast path, and 1 for control plane)
CPUSET=0-3,12-15
# Limit for standard memory.
# Using a lower value requires additional configuration.
MEMORY_LIMIT_MB=2048
# Amount of hugepage memory in megabytes.
# Using a lower value requires additional configuration.
HUGEPAGE_MEM_MB=8192
# Amount of POSIX shared memory in megabytes.
# Using a lower value requires additional configuration.
POSIX_SHMEM_MB=512
Warning
Adapt the CPU list (CPUSET variable) to your platform.
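For example, lscpu shows which CPUs belong to each NUMA node, which helps build a consistent CPUSET value (sample output from a hypothetical two-node server):
$ lscpu | grep 'NUMA node'
NUMA node(s):          2
NUMA node0 CPU(s):     0-7
NUMA node1 CPU(s):     8-15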
Hugepages global configuration¶
The Virtual Service Router container requires hugepages to run.
The example below shows how to reserve the required number of hugepages on each NUMA node:
$ CONTAINERS_PER_NUMA_NODE=2
$ HUGEPAGE_SZ_KB=$(grep 'Hugepagesize:' /proc/meminfo | sed 's,[^0-9]*\([0-9]*\)[^0-9]*,\1,')
$ NB_HUGEPAGES=$((CONTAINERS_PER_NUMA_NODE * HUGEPAGE_MEM_MB / (HUGEPAGE_SZ_KB / 1024)))
$ NB_NUMA_NODES=$(echo /sys/devices/system/node/node* | wc -w)
$ for nr_hugepages in /sys/devices/system/node/node*/hugepages/hugepages-${HUGEPAGE_SZ_KB}kB/nr_hugepages; do
sudo sh -c "echo ${NB_HUGEPAGES} > ${nr_hugepages}"
done
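You can check the resulting reservation on each NUMA node:
$ cat /sys/devices/system/node/node*/hugepages/hugepages-${HUGEPAGE_SZ_KB}kB/nr_hugepages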
Note
This step is done once for all containers.
Warning
Adapt the number of containers per NUMA node (CONTAINERS_PER_NUMA_NODE variable) to your use case.
Mount hugepages¶
Mount a hugetlbfs of the same size in the session private directory:
$ HUGEPAGES_PATH=${SESSION_PATH}/hugepages
$ mkdir -p ${HUGEPAGES_PATH}
$ sudo mount -t hugetlbfs -o size=${HUGEPAGE_MEM_MB}M nodev ${HUGEPAGES_PATH}
For more information about hugepages, please refer to Linux kernel hugepages documentation.
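The mount can be verified with:
$ findmnt ${HUGEPAGES_PATH}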
Dataplane network devices¶
You can provide physical PCI devices or PCI virtual functions to the container. These devices will be bound to the vfio-pci driver on the node, and managed by the Virtual Service Router fast path inside the container: the network ports won't be usable until the fast path is started.
Note
This subsection applies to Intel network devices (e.g. Niantic, Fortville). Other devices, such as NVIDIA Mellanox NICs, require different operations, which are not detailed in this document.
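The PCI addresses of candidate devices can be listed with lspci; the output below is only an example:
$ lspci -D | grep -i ethernet
0000:04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
0000:04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection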
Source the following helper shell functions:
# Bind a device to a driver
# $1: pci bus address (ex: 0000:04:00.0)
# $2: driver
bind_device () {
echo "Binding $1 to $2"
sysfs_dev=/sys/bus/pci/devices/$1
if [ -e ${sysfs_dev}/driver ]; then
sudo sh -c "echo $1 > ${sysfs_dev}/driver/unbind"
fi
sudo sh -c "echo $2 > ${sysfs_dev}/driver_override"
sudo sh -c "echo $1 > /sys/bus/pci/drivers/$2/bind"
if [ ! -e ${sysfs_dev}/driver ]; then
echo "Failed to bind device $1 to driver $2" >&2
return 1
fi
}
# Bind a device and devices in the same iommu group to a driver
# $1: pci bus address (ex: 0000:04:00.0)
# $2: driver
bind_device_and_siblings () {
bind_device $1 $2
# take devices in the same iommu group
for dir in $sysfs_dev/iommu_group/devices/*; do
[ -e "$dir" ] || continue
sibling=$(basename $(readlink -e "$dir"))
    # skip the device itself
[ "$sibling" = "$1" ] && continue
bind_device $sibling $2
done
}
# get the iommu group of a device
# $1: pci bus address (ex: 0000:04:00.0)
get_iommu_group () {
iommu_is_enabled || echo -n "noiommu-"
echo $(basename $(readlink -f /sys/bus/pci/devices/$1/iommu_group))
}
# return 0 (success) if there is at least one file in /sys/class/iommu
iommu_is_enabled() {
for f in /sys/class/iommu/*; do
if [ -e "$f" ]; then
return 0
fi
done
return 1
}
# get arguments to be passed to docker cli
# $*: list of pci devices
get_vfio_device_args () {
iommu_is_enabled || echo -n "--cap-add=SYS_RAWIO "
echo "--device /dev/vfio/vfio "
for d in $*; do
echo -n "--device /dev/vfio/$(get_iommu_group $d) "
done
echo
}
These helpers can be downloaded from there.
For security reasons, we recommend enabling the IOMMU in the BIOS and in the kernel.
See also
Providing physical devices or virtual functions to the container paragraph.
The following command enables the unsafe no-IOMMU mode in case the IOMMU is not available.
$ if ! iommu_is_enabled; then \
sudo sh -c "echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode"; \
fi
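The current value of the parameter can be checked with:
$ cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode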
Bind the devices you want to pass to the container to the vfio-pci
driver:
$ PCI_DEVICES="0000:04:00.0 0000:04:00.1"
$ for dev in ${PCI_DEVICES}; do bind_device_and_siblings $dev vfio-pci; done
$ PCI_DEVICES_ARGS=$(get_vfio_device_args ${PCI_DEVICES})
Warning
Adapt the list of PCI device addresses (PCI_DEVICES variable) to your platform.
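You can verify that each device is now bound to the vfio-pci driver (sample output):
$ lspci -k -s 0000:04:00.0 | grep 'in use'
Kernel driver in use: vfio-pci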
Embedded configuration¶
Optionally, you can provide an initial configuration to the container, which will be applied at startup.
The example below configures a license in CLI text format.
$ INIT_CONF=${SESSION_PATH}/init-config.cli
$ echo '/ system license online serial xxx' > ${INIT_CONF}
$ INIT_CONF_ARG="-v ${INIT_CONF}:/etc/init-config.cli:ro"
Warning
Replace the license serial identifier (xxx) in the command above.
See also
Automated pre-configuration section.
Create and run the container¶
Create the container like this:
$ docker create \
--name ${SESSION_NAME} \
--cpuset-cpus=${CPUSET} \
--memory=${MEMORY_LIMIT_MB}M \
--shm-size=${POSIX_SHMEM_MB}M \
--ulimit memlock=4194304:4194304 \
--hostname ${SESSION_NAME} \
--device /dev/net/tun --device /dev/vhost-net \
--device /dev/ppp \
${PCI_DEVICES_ARGS} \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--cap-add=SYS_ADMIN --cap-add=NET_ADMIN --cap-add=NET_BROADCAST \
--cap-add=NET_RAW \
--cap-add=SYS_NICE --cap-add=IPC_LOCK --cap-add=SYSLOG \
-v ${HUGEPAGES_PATH}:/dev/hugepages \
--tmpfs /run --tmpfs /run/lock \
${INIT_CONF_ARG} \
--cidfile ${SESSION_PATH}/.dockerid \
--rm \
${VSR_IMAGE}
Start the container:
$ docker start ${SESSION_NAME}
You can get information about the running container with:
$ docker inspect ${SESSION_NAME}
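For example, to check that the container is up and to follow its logs:
$ docker ps --filter name=${SESSION_NAME}
$ docker logs -f ${SESSION_NAME}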
Note
The container can be stopped and deleted with:
$ docker stop ${SESSION_NAME}
$ rm /tmp/docker_sessions/${SESSION_NAME}/.dockerid
$ sudo umount -l /tmp/docker_sessions/${SESSION_NAME}/hugepages
Connect to the container¶
You can now connect to the CLI with:
$ docker exec -it ${SESSION_NAME} login