Example: adding telnet server management

This section describes the successive stages of adding management of a telnet server to 6WINDGate. The full source of the service is available in the appendix.

The YANG model will be added in $YAMS/yang/vrouter-telnet-server.yang, and the service code will be added in $YAMS/yams/service/telnet_server.py.

To prepare your environment, compile 6WINDGate as described in the 6WINDGate Getting Started Guide, and start the management:

# systemctl start management.target

Add a dependency on the telnetd package

Our telnet service will use the telnetd package provided by the distribution. We need to add a new dependency to the 6windgate-mgmt-base-yams package.

For that, edit $TOOLS_BUILD_FRAMEWORK/packages/yams/package.mk and add telnetd to the YAMS_BIN_DEPS variable, as in the patch below:

--- packages/yams/package.mk
+++ packages/yams/package.mk
@@ -5,6 +5,7 @@ YAMS_LICENSE = 6WIND
 YAMS_SUMMARY = Yang management system
 YAMS_BUILD_DEPS = aionetlink libffi-devel
 YAMS_BIN_DEPS = \
+       telnetd \
        bind9 \
        bind9utils \
        cp-routing \

Rebuild, install and restart the management:

$ cd $TOOLS_BUILD_FRAMEWORK
$ make yams
$ make repo
# systemctl stop management.target
# apt update
# apt install --reinstall 6windgate-mgmt-base-yams
# systemctl start management.target

We can check that telnetd is installed:

# dpkg-query --status telnetd
Package: telnetd
Status: install ok installed
(...)

Add the service skeleton and the YANG model

This step consists of adding a new service to YAMS, together with its YANG model. The service will do nothing for now. The source code of YAMS is located in:

$ YAMS=$TOOLS_BUILD_FRAMEWORK/sources/mgmt-base-yams

As seen above, the CLI contexts are generated from the YANG model. We want to create a new CLI context called telnet-server in vrf <name>. Therefore, we augment /vrouter:config/vrouter:vrf with a new container called telnet-server.

This container contains several leaves:

  • a boolean enabled, which is true by default

  • the IP address to listen on: it uses one of the common vrouter types defined in vrouter-inet-types.yang.

  • the port to listen on, an integer whose default value is 23.

We add the presence statement to the telnet-server container: the service configuration is only created when entering the context. Otherwise, no configuration is created and the service is disabled.

Let’s add this new YANG module called vrouter-telnet-server in $YAMS/yang/vrouter-telnet-server.yang:

module vrouter-telnet-server {
  namespace "urn:mycompany:vrouter/telnet-server";
  prefix vrouter-telnet-server;

  import vrouter {
    prefix vrouter;
  }
  import vrouter-inet-types {
    prefix vr-inet;
  }

  organization
    "My Company";
  contact
    "My Company support - <support@mycompany.com>";
  description
    "Telnet server service.";

  revision 2018-10-03 {
    description
      "Initial version.";
    reference "";
  }

  augment "/vrouter:config/vrouter:vrf" {
    description
      "Telnet server configuration.";

    container telnet-server {
      presence "Makes telnet available";
      description
        "Telnet server configuration.";

      leaf enabled {
        type boolean;
        default "true";
        description
          "Enable or disable the telnet server.";
      }

      leaf address {
        type vr-inet:ip-address;
        description
          "The IP address of the interface to listen on. The TELNET
           server will listen on all interfaces if no value is
           specified.";
      }

      leaf port {
        type vr-inet:port-number;
        default "23";
        description
          "The local port number on this interface the telnet server
           listens on.";
      }
    }
  }
}

Then, write a minimal telnet service that only declares the names of the YANG modules it uses. This service is added in $YAMS/yams/service/telnet_server.py:

"""
The telnet server service manages telnetd using the inetd systemd service.
"""

from yams.service import Service


#------------------------------------------------------------------------------
class TelnetServerService(Service):
    @classmethod
    def required_modules(cls):
        return {'vrouter-telnet-server'}

To enable this service, it has to be referenced in $YAMS/setup.py:

--- a/setup.py
+++ b/setup.py
@@ -80,6 +80,7 @@ setuptools.setup(
     ssh-server = yams.service.ssh_server:SshServerService
     system-loopback = yams.service.system_loopback:SystemLoopbackService
     system = yams.service.system:SystemService
+    telnet-server = yams.service.telnet_server:TelnetServerService
     veth = yams.service.veth:VethService
     vlan = yams.service.vlan:VlanService
     vrf = yams.service.vrf:VrfService

Let’s test these changes. Rebuild, install and restart the management:

$ cd $TOOLS_BUILD_FRAMEWORK
$ make yams
$ make repo
# systemctl stop management.target
# apt update
# apt install --reinstall 6windgate-mgmt-base-yams
# systemctl start management.target

The configuration of the service through the CLI can now be tested. Start nc-cli and try to configure the service:

vsr> edit running
vsr running config# vrf main
vsr running vrf main# telnet-server
vsr running telnet-server# <?>
  (...)
  address              The IP address of the interface to listen on. The TELNET
                       server will listen on all interfaces if no value is
                       specified.
  enabled              Default: true.
                       Enable or disable the telnet server.
  port                 Default: 23.
                       The local port number on this interface the telnet server
                       listens on.
vsr running telnet-server# show config
telnet-server
    enabled true
    port 23
    ..
vsr running telnet-server# address 0.0.0.0
vsr running telnet-server# show config
telnet-server
    enabled true
    address 0.0.0.0
    port 23
    ..
vsr running telnet-server# show config running
vsr running telnet-server# commit
Configuration committed.
vsr running telnet-server# show config running
telnet-server
    enabled true
    address 0.0.0.0
    port 23
    ..

The configuration is properly applied. Of course, it does nothing because there is no code in the service. We’ll add some in the next section.

Add the configuration code in the main vrf

In this step, we will add a new asynchronous method to our service, in charge of writing the service configuration (inetd.conf).

The official method to add or remove a service in inetd.conf is update-inetd(8), but in this example we assume that yams is the only user of inetd, for our telnet server.
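
For reference, an inetd.conf(5) entry has the following general form, where the first field is a service name or port number, optionally prefixed by the address to bind to (as in our template below):

<address:>service  <socket type>  <protocol>  <wait|nowait>  <user>  <server program>  <args>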

By default, the inetd service only runs in the main vrf (which is actually the init_net netns). So in this step, we will ensure that only the main vrf can be configured. This can be done in the YANG model by adding a must condition:

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -27,6 +27,9 @@ module vrouter-telnet-server {
       "Telnet server configuration.";

     container telnet-server {
+      must "string(../vrouter:name) = 'main'" {
+        error-message "Only main vrf is supported.";
+      }
       presence "Makes telnet available";
       description
         "Telnet server configuration.";

As a result, the CLI will prevent the user from creating an invalid configuration:

vsr> edit running
vsr running config# vrf main
vsr running vrf main# telnet-server
vsr running telnet-server# show config
telnet-server
    enabled true
    port 23
    ..
vsr running telnet-server# validate
OK.
vsr running telnet-server# /
vsr running config# vrf vr1
vsr running vrf vr1# telnet-server
vsr running telnet-server#! validate
/ vrf vr1 telnet-server:
  Must condition "string(../vrouter:name) = 'main'" not satisfied.
/ vrf vr1 telnet-server:
  Only main vrf is supported.
Invalid configuration.

To add the configuration code, change the service code to this:

"""
The telnet server service manages telnetd using the inetd service.
"""

import logging

from yams.service import Service
from yams.service import config_route
from yams.util import templates
from yams.util.systemd import Systemd


LOG = logging.getLogger(__name__)


#------------------------------------------------------------------------------
class TelnetServerService(Service):
    SERVICE = 'inetd.service'
    CONF_FILE = '/etc/inetd.conf'

    @classmethod
    def required_modules(cls):
        return {'vrouter-telnet-server'}

    @config_route("/config/vrf[name='main']/telnet-server")
    async def apply(self, config):
        """
        If telnet_server is enabled, generate the inetd configuration file and
        restart the service, else stop the service.

        Configuration example:

        {
            'address': '0.0.0.0',
            'port': 23,
            'enabled': True,
        }
        """
        if config.get('enabled'):
            buf = templates.render('inetd.conf', **config)
            with open(self.CONF_FILE, 'w') as f:
                f.write(buf)

            LOG.info('starting %s', self.SERVICE)
            await Systemd.restart_unit(self.SERVICE)
        else:
            LOG.info('stopping %s', self.SERVICE)
            await Systemd.stop_unit(self.SERVICE)

Thanks to the @config_route decorator, the apply() async method gets called whenever a configuration change occurs at the specified xpath. The config argument is a Python dictionary containing the configuration, which can be directly reused when generating the inetd.conf file.

For this, we use a Jinja template (d is the Jinja alias for the default filter) that must be added in $YAMS/yams/service/templates/inetd.conf:

# /etc/inetd.conf:  see inetd(8) for further informations.
# Generated by yams, do not edit
{{ address|d("0.0.0.0") }}:{{ port }}          stream  tcp     nowait  telnetd /usr/sbin/tcpd  /usr/sbin/in.telnetd

Now, we can enable the telnet server from the CLI:

vsr> edit running
vsr running config# vrf main
vsr running vrf main# telnet-server
vsr running telnet-server# commit
Configuration committed.
vsr running telnet-server# show config running
telnet-server
    enabled true
    address 0.0.0.0
    port 23
    ..
# cat /etc/inetd.conf
# /etc/inetd.conf:  see inetd(8) for further informations.
# Generated by yams, do not edit
0.0.0.0:23          stream  tcp     nowait  telnetd /usr/sbin/tcpd  /usr/sbin/in.telnetd
$ telnet 127.0.0.1
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.

vsr login:

It can also be disabled:

vsr running telnet-server# enabled false
vsr running telnet-server# commit
$ telnet 127.0.0.1
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

Add the code to retrieve the state of the service

To know whether the service is running and what its effective configuration is, we use the show state CLI command or a NETCONF GET request. To support this feature, we first need to update the YANG module to augment the /vrouter:state container. Given that the content of state is the same as config, we factorize the common YANG nodes inside a grouping.

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -22,6 +22,34 @@ module vrouter-telnet-server {
     reference "";
   }

+  grouping system-telnet-server-config {
+    description
+      "Configuration data for system telnet configuration.";
+
+    leaf enabled {
+      type boolean;
+      default "true";
+      description
+        "Enable or disable the telnet server.";
+    }
+
+    leaf address {
+      type vr-inet:ip-address;
+      description
+        "The IP address of the interface to listen on. The TELNET
+        server will listen on all interfaces if no value is
+        specified.";
+    }
+
+    leaf port {
+      type vr-inet:port-number;
+      default "23";
+      description
+        "The local port number on this interface the telnet server
+         listens on.";
+    }
+  }
+
   augment "/vrouter:config/vrouter:vrf" {
     description
       "Telnet server configuration.";
@@ -33,29 +61,18 @@ module vrouter-telnet-server {
       presence "Makes telnet available";
       description
         "Telnet server configuration.";
+      uses system-telnet-server-config;
+    }
+  }

-      leaf enabled {
-        type boolean;
-        default "true";
-        description
-          "Enable or disable the telnet server.";
-      }
-
-      leaf address {
-        type vr-inet:ip-address;
-        description
-          "The IP address of the interface to listen on. The TELNET
-           server will listen on all interfaces if no value is
-           specified.";
-      }
+  augment "/vrouter:state/vrouter:vrf" {
+    description
+      "Telnet server state.";

-      leaf port {
-        type vr-inet:port-number;
-        default "23";
-        description
-          "The local port number on this interface the telnet server
-           listens on.";
-      }
+    container telnet-server {
+      description
+        "Telnet server state.";
+      uses system-telnet-server-config;
     }
   }
 }

Then, we need to implement the get_state() async function in our service. This function will check the status of the inetd systemd service, and will use the ss command to determine the listening address and port. Again, for this example we suppose that inetd is only used for serving the telnet service.

This method returns a dictionary that must match the YANG model.

Note that the xpath of the state_route() decorator contains a <netns> parameter, which must also be present in the signature of the method. This parameter contains the name of the VRF for which the state must be returned. Therefore, when requesting the full state, this async function is called once for every active VRF.
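
For reference, the regular expression below is written to match the ss -pln output line of the inetd listening socket, which looks roughly like this (the PID and queue values are illustrative):

tcp    LISTEN   0    64      0.0.0.0:23      0.0.0.0:*     users:(("inetd",pid=1234,fd=5))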

@state_route("/state/vrf[name=<netns>]/telnet-server")
async def get_state(self, *, netns):
    """
    Generates the telnet server state.

    State example:

    'telnet-server': {
        'address': '0.0.0.0',
        'port': 23,
        'enabled': True,
    }
    """
    enabled = netns == 'main' and SystemdUnitCache.status(self.SERVICE)
    state = {'enabled': enabled}

    if enabled:
        data = await process.run_command(['ss', '-pln'])
        pattern = r'''^(?P<proto>\S+)\s+
                       (?P<state>\S+)\s+
                       (?P<recv_queue>\d+)\s+
                       (?P<send_queue>\d+)\s+
                       (?P<local_addr_port>([0-9a-fA-F:\.\]\[\*]*))\s+
                       (?P<peer_addr_port>([0-9a-fA-F:\.\]\[\*]*))\s+
                       users:.*\"inetd\".*$'''
        _re = re.compile(pattern, re.MULTILINE | re.VERBOSE)
        match = _re.search(data)

        if match:
            m = match.groupdict()
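            # local_addr_port has the form 'addr:port'; split on the last ':'
            # so that IPv6 addresses, which contain ':', are handled correctly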
            idx = m['local_addr_port'].rfind(':')
            if idx != -1:
                state['port'] = m['local_addr_port'][(idx + 1):]
                state['address'] = (m['local_addr_port'][:idx]
                                    .replace('[', '').replace(']', '')
                                    .replace('*', '::'))
        else:
            state = {'enabled': False}

    return state

The import list has to be updated accordingly:

--- a/yams/service/telnet_server.py
+++ b/yams/service/telnet_server.py
@@ -3,11 +3,16 @@
 """

 import logging
+import re

+from yams import util
 from yams.service import Service
 from yams.service import config_route
+from yams.service import state_route
 from yams.util import templates
+from yams.util import process
 from yams.util.systemd import Systemd
+from yams.util.systemd import SystemdUnitCache


 LOG = logging.getLogger(__name__)

Rebuild, install and restart the management:

$ cd $TOOLS_BUILD_FRAMEWORK
$ make yams
$ make repo
# systemctl stop management.target
# apt update
# apt install --reinstall 6windgate-mgmt-base-yams
# systemctl start management.target

We can check that the state is now advertised in the CLI:

vsr> edit running
vsr running config# vrf main telnet-server
vsr running telnet-server# address 127.0.0.1
vsr running telnet-server# show config
telnet-server
    enabled true
    address 127.0.0.1
    port 23
    ..
vsr running telnet-server# commit
Configuration committed.
vsr running telnet-server# show state
telnet-server
    enabled true
    address 127.0.0.1
    port 23
    ..

Now let’s simulate a daemon crash by killing inetd manually:

root@ubuntu1804hwe:~/tools-build-framework# killall inetd

If we ask for the state again, we can see that the telnet server is reported as disabled.

vsr running telnet-server# show state
telnet-server
    enabled false
    ..

Add VRF support

Now that we have a running service in the main VRF, let’s modify it to run in any VRF. For that, we need to make the following changes:

  • The inetd systemd service provided by the distribution is not designed to run in another VRF. So we need to create a new one that takes the VRF as an argument. For that, we use a systemd template unit file (see the systemd documentation). This template derives from the legacy service, but overrides the start command to use ip netns exec and changes the path to the configuration file.

  • Add a netns argument to the config route and to the signature of the apply() async method. This method will now be called with the VRF as an argument. Note that if there are several instances of this service in the configuration, the apply() async method can be called concurrently on these instances.

  • Before using the new systemd template, it has to be installed by yams along with the drop-in file that prepends the proper ip netns exec prefix. These “system files” need to be declared in the system_files() class method. System files from all enabled services are installed once at startup.

  • Replace self.SERVICE and self.CONF_FILE with methods that take the VRF as a parameter.

  • Before applying the configuration, we need to wait until the VRF exists. VRFs are created by yams in the VrfService.

  • In the get_state() async method, we just need to get the status of the correct service and launch the ss command in the proper VRF.

The diff looks like this (or you can directly see the final version of the files in the appendix):

--- a/yams/service/telnet_server.py
+++ b/yams/service/telnet_server.py
@@ -3,14 +3,18 @@
 """

 import logging
+import os
 import re

 from yams import util
-from yams.service import Service
 from yams.service import config_route
+from yams.service import Service
 from yams.service import state_route
-from yams.util import templates
+from yams.system_files import SystemdDropinTemplate
+from yams.system_files import SystemdUnitClone
 from yams.util import process
+from yams.util import templates
+from yams.util.netns import NetnsCache
 from yams.util.systemd import Systemd
 from yams.util.systemd import SystemdUnitCache

@@ -21,14 +25,38 @@
 #------------------------------------------------------------------------------
 class TelnetServerService(Service):
     SERVICE = 'inetd.service'
+    NETNS_SERVICE = 'inetd@%s.service'
     CONF_FILE = '/etc/inetd.conf'
+    NETNS_CONF_FILE = '/etc/netns/%s/inetd.conf'

     @classmethod
     def required_modules(cls):
         return {'vrouter-telnet-server'}

-    @config_route("/config/vrf[name='main']/telnet-server")
-    async def apply(self, config):
+    @classmethod
+    def system_files(cls, distro):
+        netns_service = cls.NETNS_SERVICE % ''
+        yield SystemdUnitClone(
+            name=netns_service,
+            original=cls.SERVICE,
+        )
+        yield SystemdDropinTemplate(
+            unit=netns_service,
+            template='inetd-netns.conf',
+        )
+
+    def _get_service(self, netns):
+        if netns == 'main':
+            return self.SERVICE
+        return self.NETNS_SERVICE % netns
+
+    def _get_conf_file(self, netns):
+        if netns == 'main':
+            return self.CONF_FILE
+        return self.NETNS_CONF_FILE % netns
+
+    @config_route("/config/vrf[name=<netns>]/telnet-server")
+    async def apply(self, config, *, netns):
         """
         If telnet_server is enabled, generate the inetd configuration file and
         restart the service, else stop the service.
@@ -41,16 +69,24 @@ async def apply(self, config):
             'enabled': True,
         }
         """
+        service = self._get_service(netns)
+        conf_file = self._get_conf_file(netns)
+        directory = os.path.dirname(conf_file)
+        os.makedirs(directory, exist_ok=True)
+
         if config.get('enabled'):
-            buf = templates.render('inetd.conf', **config)
-            with open(self.CONF_FILE, 'w') as f:
+            buf = templates.render(
+                 'inetd.conf', **config,
+                 proto='tcp' if '.' in config.get('address', '0.0.0.0') else 'tcp6')
+            with open(conf_file, 'w') as f:
                 f.write(buf)

-            LOG.info('starting %s', self.SERVICE)
-            await Systemd.restart_unit(self.SERVICE)
+            await NetnsCache().wait(netns)
+            LOG.info('starting %s', service)
+            await Systemd.restart_unit(service)
         else:
-            LOG.info('stopping %s', self.SERVICE)
-            await Systemd.stop_unit(self.SERVICE)
+            LOG.info('stopping %s', service)
+            await Systemd.stop_unit(service)

    @state_route("/state/vrf[name=<netns>]/telnet-server")
    async def get_state(self, *, netns):
@@ -65,11 +101,11 @@ async def get_state(self, *, netns):
            'enabled': True,
        }
        """
-       enabled = netns == 'main' and SystemdUnitCache.status(self.SERVICE)
+       enabled = SystemdUnitCache.status(self._get_service(netns))
        state = {'enabled': enabled}

        if enabled:
-           data = await process.run_command(['ss', '-pln'])
+           data = await process.run_command(['ss', '-pln'], netns=netns)
            pattern = r'''^(?P<proto>\S+)\s+
                           (?P<state>\S+)\s+
                           (?P<recv_queue>\d+)\s+

We need to add, in the $YAMS/yams/service/templates directory, the inetd-netns.conf file that overrides the parameters of the systemd service:

[Unit]
Description=Internet superserver netns %i
ConditionPathExists=/etc/netns/%i/inetd.conf

[Service]
ExecStart=
ExecStart=/sbin/ip netns exec %i /usr/sbin/inetd

[Install]
Alias=
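
Once installed, each per-VRF inetd is an ordinary instance of this systemd template, named for example inetd@vr1.service for VRF vr1, so it can be inspected with the usual systemd tools:

# systemctl status inetd@vr1.service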

We also need to remove the YANG condition that prevents us from configuring the service in any VRF:

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -55,9 +55,6 @@ module vrouter-telnet-server {
       "Telnet server configuration.";

     container telnet-server {
-      must "string(../vrouter:name) = 'main'" {
-        error-message "Only main vrf is supported.";
-      }
       presence "Makes telnet available";
       description
         "Telnet server configuration.";

Rebuild, install and restart the management:

$ cd $TOOLS_BUILD_FRAMEWORK
$ make yams
$ make repo
# systemctl stop management.target
# apt update
# apt install --reinstall 6windgate-mgmt-base-yams
# systemctl start management.target

Configure a telnet server in another VRF, and check that it is running:

vsr running config# vrf vr1
vsr running vrf vr1# telnet-server
vsr running telnet-server# commit
Configuration committed.
vsr running telnet-server# /
vsr running config# show state vrf vr1 telnet-server
telnet-server
    enabled true
    address 0.0.0.0
    port 23
    ..
vsr running config# show state vrf main telnet-server
telnet-server
    enabled false
    ..

We can test it with the telnet command:

# telnet 127.0.0.1
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
# ip netns exec vr1 telnet 127.0.0.1
Trying 127.0.0.1...
Connected to 127.0.0.1.

Display in service summary

The show summary CLI command is used to get a broad picture of the system status, including enabled services. Very little work is needed to make the summary include our telnet service.

We’ll use the @summary() decorator to plug our formatted description of the service status into the command’s output. This decorator does not use any xpath as it is relevant to the CLI only, and is not related to the YANG model.

Fortunately, formatting code already exists so our new code will be small.

--- a/yams/service/telnet_server.py
+++ b/yams/service/telnet_server.py
@@ -9,12 +9,15 @@
 from yams import util
 from yams.service import config_route
 from yams.service import Service
+from yams.service import SummaryGroup
 from yams.service import state_route
+from yams.service import summary
 from yams.system_files import SystemdDropinTemplate
 from yams.system_files import SystemdUnitClone
 from yams.util import process
 from yams.util import templates
 from yams.util.netns import NetnsCache
+from yams.util.summary import summary_netns_service_string
 from yams.util.systemd import Systemd
 from yams.util.systemd import SystemdUnitCache

@@ -55,6 +58,13 @@ def _get_conf_file(self, netns):
             return self.CONF_FILE
         return self.NETNS_CONF_FILE % netns

+    @summary('telnet-server', group=SummaryGroup.MANAGEMENT)
+    async def get_summary_string(self):
+        """
+        Display the summary for telnet server.
+        """
+        return summary_netns_service_string(self._get_service)
+
     @config_route("/config/vrf[name=<netns>]/telnet-server")
     async def apply(self, config, *, netns):
         """

Now try to enable the telnet service, and see what is printed by the CLI:

vsr running config# vrf main
vsr running vrf main# telnet-server
vsr running telnet-server# commit
Configuration committed.
vsr running telnet-server# show summary
Service                   Status
=======                   ======

product                   6WINDGate 5.999.20220519

fast-path                 disabled
linux                     4 cores, memory total 4.82GB available 2.68GB
network-port              3 ports detected

vrf                       2 configured

interface physical        enabled in vrf main (1 up iface, 2 down ifaces), vrf0 (1 up iface, 2 down ifaces)
interface system-loopback enabled in vrf main (1 up iface), vrf0 (1 up iface)

lldp                      enabled in vrf main
routing                   enabled in vrf main (3 ipv4 routes, 5 ipv6 routes), vrf0 (3 ipv4 routes, 5 ipv6 routes)

auth                      0 user
dns                       enabled in vrf main
dns-server                failed in vrf main
ipv6-autoconf             failed in vrf main
netconf-server            enabled in vrf main, vrf0
ntp                       enabled in vrf main
ssh-server                enabled in vrf main
telnet-server             enabled in vrf main

Add a custom RPC

RPCs are used in the following cases:

  • To execute actions that are not related to configuration. Examples: importing or exporting a certificate, generating cryptographic keys, flushing the ARP or conntrack tables, …

  • To retrieve a more detailed or filtered state: the input parameters can be used to select the objects that will be displayed, and in which format.

  • To retrieve the state in a text format. This is useful for the CLI, but is less interesting for the NETCONF API.

In this section, we will implement a simple RPC that lists the active telnet connections. As there are several ways to do it, we will implement three different versions:

  • a version that returns the result as text

  • another version to illustrate the use of the long running command mechanism, which is used when the command takes some time or can display a large amount of data (e.g. ping, traceroute, show traffic, show routes, …)

  • a simple version that returns the result as formatted data (adapted to NETCONF API)

In the YANG module, the input parameter is the VRF for all versions of the RPC.

The output parameters depend on what we want to return: a simple text buffer, the status and output in case of long running command, or a list of addresses/ports.

These three commands will use a simple script that formats the output of the ss command. We add this script in $YAMS/yams/service/templates/show-telnet-sessions.sh:

#!/bin/sh
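# ss -pnt prints: State Recv-Q Send-Q Local-Address:Port Peer-Address:Port Process;
# "$5 -> $4" therefore displays each telnetd session as "client -> server"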

ss -pnt | grep in.telnetd | awk '{ print $5 " -> " $4 }'

Simple version

The first version below is the simplest one: the output is a string buffer. Note the use of vr-ext:nc-cli-show "telnet-sessions-text" which makes this RPC available as a show command in the CLI.

YANG description:

import vrouter-extensions {
  prefix vr-ext;
}

import vrouter-api {
  prefix vr-api;
}

rpc show-telnet-sessions-text {
  description
    "Show the active telnet sessions in text format.";
  input {
    leaf vrf {
      type string;
      default "main";
      description
        "VRF to look into.";
    }
  }
  output {
    leaf buffer {
      type string;
      description
        "The command output buffer.";
      vr-ext:nc-cli-stdout;
      vr-ext:nc-cli-hidden;
    }
  }
  vr-ext:nc-cli-show "telnet-sessions-text";
  vr-api:internal;
}

Note

The use of vr-api:internal means that there is no backward compatibility guarantee for this RPC: its input and/or output arguments may change from one major release to another.

This is acceptable since it is intended for our own CLI and not for direct NETCONF use.

In the service, the async method of the service is decorated with @rpc(<rpc_xpath>). The returned value is a dictionary containing the output of the command, as a text buffer.

from yams.service import rpc

@rpc('/vrouter-telnet-server:show-telnet-sessions-text')
async def show_sessions_text(self, params):
    netns = params.get('vrf', 'main')
    args = ['bash', templates.filepath('show-telnet-sessions.sh')]
    buf = await process.run_command(args, netns=netns)
    return {'buffer': buf}

Behavior in CLI:

vsr> show telnet-sessions-text vrf vr1
127.0.0.1:37258 -> 127.0.0.1:23

Long running command version

The second version has the same input, but a different output that can be used by the CLI to retrieve the result in several pieces:

import vrouter-commands {
  prefix vr-cmd;
}

rpc show-telnet-sessions-text2 {
  description
    "Show the active telnet sessions in text format.";
  input {
    leaf vrf {
      type string;
      default "main";
      description
        "VRF to look into.";
    }
  }
  output {
    uses vr-cmd:long-cmd-status;
    uses vr-cmd:long-cmd-output;
  }
  vr-ext:nc-cli-show "telnet-sessions-text2";
  vr-api:internal;
}

Note

Since this kind of RPC is also intended for our CLI and not for direct NETCONF use, it is also acceptable to tag it with vr-api:internal. See the previous note for more details.

The Python code is almost the same, except that running the command is delegated to the commands service:

@rpc('/vrouter-telnet-server:show-telnet-sessions-text2')
async def show_sessions_text2(self, params):
    netns = params.get('vrf', 'main')
    args = ['bash', templates.filepath('show-telnet-sessions.sh')]
    return await util.bgcmd.start_process(args, netns=netns)

The behavior in the CLI is similar to the simple version, except that the command can be interrupted if it takes too much time:

vsr> show telnet-sessions-text2 vrf vr1
127.0.0.1:37258 -> 127.0.0.1:23

The commands service has an API to start a process, used in the previous example to run the script, and an API to start a coroutine, which is used when there is more work than just calling a script. Here is an example of use (it additionally requires importing asyncio):

@rpc('/vrouter-telnet-server:show-telnet-sessions-text2')
async def show_sessions_text2(self, params):
    return await util.bgcmd.start_task(self._show_sessions, params)

async def _show_sessions(self, write, params):
    netns = params.get('vrf', 'main')
    args = ['bash', templates.filepath('show-telnet-sessions.sh')]
    # call the script every second until the command is stopped
    while True:
        buf = await process.run_command(args, netns=netns)
        await write(buf)
        await asyncio.sleep(1)

API version

In the last version, the output is no longer a string buffer; instead, it is described in YANG, making it much easier to parse. This is the preferred version if the RPC is to be used by a NETCONF client.

By convention in CLI, a show operation displays text, so we use vr-ext:nc-cli-cmd instead of vr-ext:nc-cli-show to make the RPC available through cmd show-telnet-sessions.

rpc show-telnet-sessions {
  description
    "Show the active telnet sessions.";
  input {
    leaf vrf {
      type string;
      default "main";
      description
        "VRF to look into.";
    }
  }
  output {
    list session {
      key "local-address local-port remote-address remote-port";
      description "List of telnet sessions.";

      leaf local-address {
        type vr-inet:ip-address;
        description "The local IP address of the telnet session.";
      }

      leaf local-port {
        type vr-inet:port-number;
        description "The local port number of the telnet session.";
      }

      leaf remote-address {
        type vr-inet:ip-address;
        description "The remote IP address of the telnet session.";
      }

      leaf remote-port {
        type vr-inet:port-number;
        description "The remote port number of the telnet session.";
      }
    }
  }
  vr-ext:nc-cli-cmd "show-telnet-sessions";
}

Important

An “API version” RPC should not have the vr-api:internal extension unless there is a good reason not to guarantee backward compatibility for it.

The Python code parses the command output and returns a dictionary:

@rpc('/vrouter-telnet-server:show-telnet-sessions')
async def show_sessions(self, params):
    netns = params.get('vrf', 'main')
    args = ['bash', templates.filepath('show-telnet-sessions.sh')]
    buf = await process.run_command(args, netns=netns)
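    # each line of the script output looks like '127.0.0.1:37258 -> 127.0.0.1:23'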
    pattern = r'''^
                  (?P<local_addr>(\*|::|\[::\]|[0-9\.]*))
                  (?P<local_iface>%[^:]+)?
                  :(?P<local_port>\d+)
                  \s+->\s+
                  (?P<remote_addr>(\*|::|\[::\]|[0-9\.]*))
                  (?P<remote_iface>%[^:]+)?
                  :(?P<remote_port>\d+)
                  $'''
    _re = re.compile(pattern, re.MULTILINE | re.VERBOSE)
    ret = []
    for line in buf.splitlines():
        match = _re.search(line)
        if not match:
            continue
        d = match.groupdict()
        ret.append({
            'local-address': d['local_addr'],
            'local-port': d['local_port'],
            'remote-address': d['remote_addr'],
            'remote-port': d['remote_port'],
        })
    return {'session': ret}

In the CLI, the result can be displayed as text, XML, or JSON:

vsr running config# cmd show-telnet-sessions vrf vr1
show-telnet-sessions
    session 127.0.0.1 37258 127.0.0.1 23
        ..
    ..
vsr running config# cmd xml show-telnet-sessions vrf vr1
<show-telnet-sessions xmlns="urn:mycompany:vrouter/telnet-server">
  <session>
    <local-address>127.0.0.1</local-address>
    <local-port>37258</local-port>
    <remote-address>127.0.0.1</remote-address>
    <remote-port>23</remote-port>
  </session>
</show-telnet-sessions>
vsr running config# cmd json show-telnet-sessions vrf vr1
{
  "vrouter-telnet-server:show-telnet-sessions": {
    "session": [
      {
        "local-address": "127.0.0.1",
        "local-port": 37258,
        "remote-address": "127.0.0.1",
        "remote-port": 23
      }
    ]
  }
}

API improved text version

We saw how to return text with the vr-ext:nc-cli-stdout extension. There is a more advanced way to do it: we can take the API version RPC show-telnet-sessions created just before, and complete it so that it returns well-formatted text when called from the CLI.

This replaces the existing show-telnet-sessions-text, which we can now remove both from the YANG file and from the Python service.

Modify the YANG file to add the nc-cli-text-output extension. As this RPC will display text, the nc-cli-cmd extension must also be replaced by nc-cli-show:

rpc show-telnet-sessions {
  ...
  vr-ext:nc-cli-show "telnet-sessions";
  vr-ext:nc-cli-text-output;
}

Next modify the Python service as below:

from yams.service import rpc_text

@rpc_text('/vrouter-telnet-server:show-telnet-sessions')
async def show_sessions_text(self, params):
    sessions = params['output']['session']
    if not sessions:
        return 'No available telnet session\n'

    out = []
    for x, session in enumerate(sessions):
        listen_local = ':'.join((
            session['local-address'], str(session['local-port'])))
        listen_remote = ':'.join((
            session['remote-address'], str(session['remote-port'])))
        out.append(f'Session {x}: '
                   f'listening on -> local: {listen_local} | '
                   f'remote: {listen_remote}')
    return '{}\n'.format('\n'.join(out))

The CLI will now use the @rpc_text method to print the result:

vsr> show telnet-sessions vrf vr1
Session 0: listening on -> local: 127.0.0.1:60976 | remote: 127.0.0.1:23
Session 1: listening on -> local: 127.0.0.1:60978 | remote: 127.0.0.1:23

Important

The params parameter of the method decorated with @rpc_text contains two keys:

  • input: contains the parameters passed by the client

  • output: contains the result of the linked @rpc method
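
For illustration, with the first session of the CLI example above, the params argument received by show_sessions_text would look roughly like this (a sketch; the exact value types depend on how the datastore converts them):

{
    'input': {'vrf': 'vr1'},
    'output': {
        'session': [
            {'local-address': '127.0.0.1', 'local-port': 60976,
             'remote-address': '127.0.0.1', 'remote-port': 23},
        ],
    },
}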

Advanced completion

The CLI provides contextual completions, which are deduced from the YANG model. For instance, the CLI completes with node names, enum values, or list keys in the configuration.

It is possible to add contextual completions with the vr-ext:nc-cli-completion-xpath YANG extension. The argument of this extension is an xpath that returns the list of additional completions.

  • If the xpath references configuration data (in /vrouter:config), it only works in the CLI edit mode.

  • If the xpath references state data (in /vrouter:state), some code should be added in the service to provide this data to the CLI.

The rationale behind this is:

  • Doing the xpath query through the NETCONF API for each completion would be too slow, so the CLI maintains a local cache of the current state.

  • Storing the whole state in the CLI cache would take too much memory, so only a subset of the state is synchronized into the cache.

This state data is provided in YAMS by functions that are decorated with @completion(). Let’s add a specific completion to our RPCs. The VRF name input is a good candidate for this.

Modify the YANG model as below:

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -65,6 +65,8 @@ module vrouter-telnet-server {
         default "main";
         description
           "VRF to look into.";
+        vr-ext:nc-cli-completion-xpath
+          "/vrouter:state/vrouter:vrf/vrouter:name";
       }
     }
     output {
@@ -89,6 +91,8 @@ module vrouter-telnet-server {
         default "main";
         description
           "VRF to look into.";
+        vr-ext:nc-cli-completion-xpath
+          "/vrouter:state/vrouter:vrf/vrouter:name";
       }
     }
     output {
@@ -108,6 +112,8 @@ module vrouter-telnet-server {
         default "main";
         description
           "VRF to look into.";
+        vr-ext:nc-cli-completion-xpath
+          "/vrouter:state/vrouter:vrf/vrouter:name";
       }
     }
     output {

In the example above, there is no need to add a @completion() function in our TelnetServerService, because the list of VRFs is already advertised by the SystemLoopbackService.

If we want to complete with data from the TelnetServerService, the easiest implementation is to return the same data as the get_state() method:

from yams.service import completion

@completion("/state/vrf[name=<netns>]/telnet-server")
async def get_completion(self, *, netns):
    return await self.get_state(netns=netns)

As for the state, the dictionary returned by the @completion() async method must match the YANG model. For performance reasons, it is not advised to return too many objects (more than a few hundred).

NETCONF API

The NETCONF API is available without any additional change. It can be tested with netopeer2-cli, or with ncclient as described in the Automation section of the User Guide.
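
For instance, here is a minimal sketch using ncclient to retrieve the telnet server state of the main VRF (the host and credentials are assumptions, adapt them to your setup):

from ncclient import manager

with manager.connect(host='127.0.0.1', port=830,
                     username='admin', password='admin',
                     hostkey_verify=False) as m:
    # subtree filter on the telnet-server state of vrf main
    reply = m.get(filter=('subtree', '''
        <state xmlns="urn:6wind:vrouter">
          <vrf>
            <name>main</name>
            <telnet-server xmlns="urn:mycompany:vrouter/telnet-server"/>
          </vrf>
        </state>'''))
    print(reply)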

Here is a simple example with netopeer2-cli:

root@vsr:~# netopeer2-cli
> connect --ssh --host 127.0.0.1 --port 830
Interactive SSH Authentication
Type your password:
> get --filter-xpath /vrouter:state/vrouter:vrf[name='main']/vrouter-telnet-server:telnet-server
DATA
<state xmlns="urn:6wind:vrouter">
  <vrf>
    <name>main</name>
    <telnet-server xmlns="urn:mycompany:vrouter/telnet-server">
      <enabled>false</enabled>
    </telnet-server>
  </vrf>
</state>

Display the service logs

The show log service <service> CLI command displays the logs associated with a service. To make the telnet logs available through this command, only a few modifications are needed.

In the YANG model, an identity statement is added. This identity derives from vr-types:SERVICE_LOG_ID, which makes the show log service telnet-server command available.

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -14,6 +14,9 @@ module vrouter-telnet-server {
   import vrouter-commands {
     prefix vr-cmd;
   }
+  import vrouter-types {
+    prefix vr-types;
+  }

   organization
     "My Company";
@@ -28,6 +31,12 @@ module vrouter-telnet-server {
     reference "";
   }

+  identity telnet-server {
+    base vr-types:SERVICE_LOG_ID;
+    description
+      "Telnet server service.";
+  }
+
   grouping system-telnet-server-config {
     description
       "Configuration data for system telnet configuration.";

On the service side, we only need to advertise the list of systemd units associated with this service:

--- a/yams/service/telnet_server.py
+++ b/yams/service/telnet_server.py
@@ -44,6 +45,9 @@ class TelnetServerService(Service):
             return self.CONF_FILE
         return self.NETNS_CONF_FILE % netns

+    def systemd_units(self, netns):
+        yield self._get_service(netns)
+
     @config_route("/config/vrf[name=<netns>]/telnet-server")
     async def apply(self, config, *, netns):
         """

For reference, the backend of the show log CLI command is implemented in $YAMS/yams/service/log.py:show_log().
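
The logs can then be displayed from the CLI (output elided):

vsr> show log service telnet-server
(...)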

Push state changes automatically

Up to now, our service’s state was displayed on a show state command. This implies that the state is only updated when the user asks for it. This is useful, but it means that if some unexpected event were to affect our operational state, the user would not be aware of it until the next state inspection.

The sysrepo datastore manipulation library supports subscribing to various events: selected state changes can be pushed to consumers immediately. In sysrepo terms, we will allow subscribing to module changes.

Let us push the state whenever the systemd unit changes status, so that the user can react to unexpected service failures.

Our patch to the service code will be very small:

--- a/yams/service/telnet_server.py
+++ b/yams/service/telnet_server.py
@@ -7,10 +7,13 @@
 import re

 from yams import util
+from yams.service import PushStateMixin
 from yams.service import Service
+from yams.service import ServiceLoopMixin
 from yams.service import SummaryGroup
 from yams.service import completion
 from yams.service import config_route
+from yams.service import push_state_route
 from yams.service import rpc
 from yams.service import rpc_text
 from yams.service import state_route
@@ -143,6 +146,10 @@ async def get_state(self, *, netns):

         return state

+    @push_state_route("/state/vrf[name=<netns>]/telnet-server")
+    async def push_state(self, *, netns):
+        return {'enabled': SystemdUnitCache.status(self._get_service(netns))}
+
     @completion("/state/vrf[name=<netns>]/telnet-server")
     async def get_completion(self, *, netns):
         return await self.get_state(netns=netns)

The PushStateMixin is the basic building block for pushing to the operational datastore. Inheriting from this mixin allows using the @push_state_route decorator, but by itself it does not tell which event should trigger the push_state_route() hook.

Inheriting from ServiceLoopMixin specifies this event: we want to push an update whenever the systemd unit running telnetd changes status. Because our previous work already declared the relevant systemd units via systemd_units(), YAMS has everything needed to detect the event.
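
Note that the diff above only updates the import list: the mixins take effect through class inheritance. Assuming plain multiple inheritance, the class declaration would become something like:

class TelnetServerService(Service, PushStateMixin, ServiceLoopMixin):
    ...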

Several update patterns are described by other mixins in YAMS, e.g. the InterfaceLoopMixin which detects link state changes. Take a look at them if you ever want to further enhance the push state capabilities of your service.

For clarity, let’s also specify in the YANG model that we push the “enabled” boolean. We’ll mark it with vr-ext:pushed.

--- a/yang/vrouter-telnet-server.yang
+++ b/yang/vrouter-telnet-server.yang
@@ -153,7 +153,11 @@ module vrouter-telnet-server {
     container telnet-server {
       description
         "Telnet server state.";
-      uses system-telnet-server-config;
+      uses system-telnet-server-config {
+        refine "enabled" {
+          vr-ext:pushed;
+        }
+      }
     }
   }
 }

Remember, we only want to push state. The refine statement allows us to extend the grouping where it is used for state, while leaving it untouched for config.

Let’s try our change. As of the time of this writing, nc-cli does not support subscribing to state changes, so we’ll use an interactive Python session:

import sysrepo

def callback(event, req_id, changes, private_data):
    print(f"Observed state changes: {changes}")

conn = sysrepo.SysrepoConnection()
sess = conn.start_session('operational')
sess.subscribe_module_change(
  'vrouter',
  '/vrouter:state/vrouter:vrf[vrouter:name="main"]/vrouter-telnet-server:telnet-server',
  callback,
  done_only=True
)

# Subscribing is not a blocking operation. Leave the prompt open so that
# the sysrepo session will remain active.

Make sure the telnet service is enabled. Then manually stop inetd.service to trigger a change:

# systemctl stop inetd.service

Our Python prompt should display:

>>>
Observed state changes: [ChangeModified(/vrouter:state/vrf[name='main']/vrouter-telnet-server:telnet-server/enabled: 'true' -> False)]

The service failure has been detected.