Monitoring IPSec/BGP without SNMP

Why?

For a client, I recently needed more visibility into IPSec tunnel and BGP session statuses.
With routing equipment, you would normally look to SNMP for this.

But that was not a good option, for various reasons:
1. the routing appliances use VyOS: basically a network-oriented Linux distro, which is quite good for our needs
2. a quick search turned up little in the realm of SNMP MIBs for BGP, and nothing at all for IPsec
3. I’d rather treat those machines as appliances and install as few extras as possible [1]
4. I need others to be able to understand and extend it
5. I need something quick and cheap
6. we’ve got a lovely Prometheus + Grafana platform I use

How?

What do we do?

node_exporter seems like the way to go.
It is a single static Go binary with no dependencies.
So deploying a solution basically means copying that binary, maybe a configuration, and a local probe to generate the data.
Sounds simple. I like this as it complies with #3.

IPSec and BGP are managed by different daemons, and I’d rather avoid reading extensive documentation to figure out how to fetch what I need.
Also, this software was made prior to the cloudy-cloud era, so I doubt a nice REST API will be available to me. [2]
Such a low-level approach would also make the probe more difficult to debug, and I’d probably need one probe per daemon.
So this increases the learning curve for me and the others. Not great for #4.

Fortunately, the data I need can be found using the CLI.
VyOS also provides some built-in scripting functions for bash, so it would be easy to prototype something by parsing the output. #5 friendly.
I’d rather use python for parsing, but since getting the data requires invoking CLI commands, we might as well stick to bash for now.
A cronjob will be in charge of periodically updating the stats in text format.
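
For reference, node_exporter’s textfile collector picks up any file ending in .prom in the directory it is pointed at, written in the plain-text Prometheus exposition format. A made-up sample line:

node_ipsec_peer_status{peer="192.0.2.1", name="officeVPN"} 1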

This sounds like a plan, let’s get started.

The node_exporter daemon

All we need is to copy the binary and a startup script.
I only need to enable the textfile collector, so I can just set the --collector.textfile.directory flag inside that script and skip a configuration file altogether.
This is not the nicest startup script but it will work properly with this older SysV init system.

#!/bin/bash

### BEGIN INIT INFO
# Provides:          node_exporter
# Required-Start:    $all
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Prometheus node_exporter
### END INIT INFO

RETVAL=0
PROG="node_exporter"
EXEC="/opt/node_exporter/bin/node_exporter"
LOCKFILE="/var/lock/subsys/$PROG"
OPTIONS="--web.listen-address=:9100 --collector.textfile.directory=/opt/node_exporter/data"

success() {
  echo -n "OK"
}

failure() {
  echo -n "FAIL"
}

start() {
  if [ -f $LOCKFILE ]
  then
    echo "$PROG is already running!"
  else
    echo -n "Starting $PROG: "
    nohup $EXEC $OPTIONS >/dev/null 2>&1 &
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $LOCKFILE && success || failure
    echo
    return $RETVAL
  fi
}

stop() {
  echo -n "Stopping $PROG: "
  kill $(pidof $PROG)
  RETVAL=$?
  [ $RETVAL -eq 0 ] && rm -f $LOCKFILE && success || failure
  echo
}

status() {
  if [ -f $LOCKFILE ]
  then
    echo "$PROG is running"
  else
    echo "$PROG is not running"
  fi
}

restart() {
  stop
  sleep 1
  start
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status
    ;;
  restart)
    restart
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
esac
exit $RETVAL
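
Assuming the script above was saved as node_exporter.init (the name is my choice), installing and enabling it on this Debian-based system boils down to:

sudo cp node_exporter.init /etc/init.d/node_exporter
sudo chmod +x /etc/init.d/node_exporter
sudo update-rc.d node_exporter defaults
sudo /etc/init.d/node_exporter start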

The custom probe

As described in https://wiki.vyos.net/wiki/Command_scripting, command scripting in VyOS isn’t complicated.
Once you source /opt/vyatta/etc/functions/script-template, you can execute the same commands as you would interactively.

TIP: DO make sure your script calls exit on every possible termination path or it will not finish cleanly,
and you will see many FUSE filesystem mounts accumulate; this could become a problem after a while.
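
A quick sanity check (my own habit, nothing VyOS-specific): count the FUSE mounts before and after a few probe runs; the number should stay flat.

mount | grep -c fuse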

#!/bin/bash

source /opt/vyatta/etc/functions/script-template


promFile="/opt/node_exporter/data/stats.prom"
buffer=$(mktemp)

tmpData=$(mktemp)
tplIPSEC=$(mktemp)
tplBGP=$(mktemp)

# see note about clean exit
cleanup() {
    rm -f ${tmpData} ${tplIPSEC} ${tplBGP} ${buffer}
}
trap 'cleanup; exit 1' 1 2 3 6 15

# Hint prometheus about the data
cat <<EOT > ${buffer}
# HELP node_ipsec_peer_status IPSec VPN status UP/DOWN for peer (1/0)
# TYPE node_ipsec_peer_status gauge
# HELP node_ipsec_peer_bytes_in IPSec VPN bytes IN from peer
# TYPE node_ipsec_peer_bytes_in counter
# HELP node_ipsec_peer_bytes_out IPSec VPN bytes out to peer
# TYPE node_ipsec_peer_bytes_out counter
# HELP node_bgp_neighbor_prefixes_in number of received BGP prefixes from neighbor
# TYPE node_bgp_neighbor_prefixes_in counter
# HELP node_bgp_neighbor_prefixes_out number of advertised BGP prefixes to neighbor
# TYPE node_bgp_neighbor_prefixes_out counter
EOT

# template for IPSec
cat << EOT > ${tplIPSEC}
node_ipsec_peer_status{peer="PEER", name="NAME"} STATUS
node_ipsec_peer_bytes_in{peer="PEER", name="NAME"} IN
node_ipsec_peer_bytes_out{peer="PEER", name="NAME"} OUT
EOT

# template for BGP
cat <<EOT > ${tplBGP}
node_bgp_neighbor_status{neighbor="NEIGHBOR"} STATUS
node_bgp_neighbor_prefixes_in{neighbor="NEIGHBOR"} IN
node_bgp_neighbor_prefixes_out{neighbor="NEIGHBOR"} OUT
EOT

# loop on all existing peers
peers=$(run show vpn ipsec sa | sed -nE -e 's/^([[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}).*/\1/p')
for peer in ${peers}
do
    run show vpn ipsec sa peer ${peer} > ${tmpData}
    status="$(grep -c up ${tmpData}).0"
    name=$(awk '/Description/ { print $2 }' ${tmpData})
    # the vti line reports traffic as "out/in" with an optional K/M/G suffix: convert to plain bytes
    in=$(awk '/vti/ { split($3, b, "/"); m=b[2]; c=sprintf("%g", m); if (match(m, "K")) c=c*1000; if (match(m, "M")) c=c*1000000; if (match(m, "G")) c=c*1000000000; print c }' ${tmpData})
    out=$(awk '/vti/ { split($3, b, "/"); m=b[1]; c=sprintf("%g", m); if (match(m, "K")) c=c*1000; if (match(m, "M")) c=c*1000000; if (match(m, "G")) c=c*1000000000; print c }' ${tmpData})
    sed -e "s/PEER/${peer}/g;s/STATUS/${status}/;s/IN/${in}/;s/OUT/${out}/;s/NAME/${name}/" ${tplIPSEC} >> ${buffer}
done

# loop on all neighbors
neighbors=$(run show ip bgp neighbors | awk '/BGP neighbor/ { sub( ",", "", $4); print $4 }')
for neighbor in ${neighbors}
do
    run show ip bgp neighbor ${neighbor} | grep -e state -e accepted > ${tmpData}
    status=$(grep -c Established ${tmpData})
    in=$(awk '/accepted/ { print $1 }' ${tmpData})
    out=$(run show ip bgp neighbor ${neighbor} advertised-routes | awk '/Total/ { print $NF }')
    [[ -z ${out} ]] && out=0 # ensure we always have a value when none is found
    sed -e "s/NEIGHBOR/${neighbor}/g;s/STATUS/${status}/;s/IN/${in}/;s/OUT/${out}/" ${tplBGP} >> ${buffer}
done

# make the data available to node_exporter
mv ${buffer} ${promFile}
cleanup

if [[ -n ${DEBUG} ]]; then
    cat ${promFile}
fi

exit

And we can just run this via a cronjob every minute.
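
For example with an entry in /etc/cron.d (the probe path is an assumption, use wherever you stored the script):

* * * * * root /opt/node_exporter/bin/probe.sh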

Let’s check the results before going any further

> # we switch to proper bash because '|' behaves differently with vbash
> bash -c 'curl http://localhost:9100/metrics | grep -e ipsec -e bgp'

# HELP node_ipsec_peer_bytes_in IPSec VPN bytes IN from peer
# TYPE node_ipsec_peer_bytes_in counter
node_ipsec_peer_bytes_in{name="superVPN",peer="10.11.12.13"} 281200
# HELP node_ipsec_peer_bytes_out IPSec VPN bytes out to peer
# TYPE node_ipsec_peer_bytes_out counter
node_ipsec_peer_bytes_out{name="superVPN",peer="10.11.12.13"} 97800
# HELP node_ipsec_peer_status IPSec VPN status UP/DOWN for peer (1/0)
# TYPE node_ipsec_peer_status gauge
node_ipsec_peer_status{name="superVPN",peer="10.11.12.13"} 1
# HELP node_bgp_neighbor_prefixes_in number of received BGP prefixes from neighbor
# TYPE node_bgp_neighbor_prefixes_in counter
node_bgp_neighbor_prefixes_in{neighbor="169.254.1.2"} 1
# HELP node_bgp_neighbor_prefixes_out number of advertised BGP prefixes to neighbor
# TYPE node_bgp_neighbor_prefixes_out counter
node_bgp_neighbor_prefixes_out{neighbor="169.254.1.2"} 13
# HELP node_bgp_neighbor_status Metric read from /opt/node_exporter/data/stats.prom
# TYPE node_bgp_neighbor_status untyped
node_bgp_neighbor_status{neighbor="169.254.1.2"} 1

Good!

Prometheus and Grafana

We are using the prometheus operator to deploy this monitoring system, so the configuration looks like this.

To add the new probes, update the additionalScrapeConfigs list under global / prometheus / prometheusSpec
of prometheus-operator-values.yaml:

...
additionalScrapeConfigs:
  - job_name: "superRouter1"
    static_configs:
      - targets:
          - 10.11.12.13:9100
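
On the Grafana side there is nothing specific to do: once prometheus scrapes the router, any panel can use the new metrics, e.g. per-tunnel throughput with a standard rate() query (the job name comes from the scrape config above):

rate(node_ipsec_peer_bytes_in{job="superRouter1"}[5m])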

To add new rules to prometheus, create a file like the one below and deploy it with kubectl apply -f prometheus-rules.yaml.
I don’t like to put the rules in the helm chart because they can change often, and redeploying prometheus every time is inefficient.
Both actions can be run in sequence / dependency with a Makefile or pipeline anyway.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: network.rules
  namespace: monitoring
spec:
  groups:
    - name: ipsec-status
      rules:
      - alert: IPSecDown
        annotations:
          message: 'IPSec from {{ $labels.job }} to {{ $labels.name }}/{{ $labels.peer }} is DOWN'
          runbook_url: ""
        expr: |
          node_ipsec_peer_status{job=~".*"} == 0
        for: 1m
        labels:
          severity: critical
    - name: bgp-peer-status
      rules:
      - alert: BGPDown
        annotations:
          message: 'BGP from {{ $labels.job }} to {{ $labels.neighbor }} is DOWN'
          runbook_url: ""
        expr: |
          node_bgp_neighbor_status{job=~".*"} == 0
        for: 1m
        labels:
          severity: critical
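
Applying the rule and checking that it landed (the name and namespace come from the manifest above):

kubectl apply -f prometheus-rules.yaml
kubectl -n monitoring get prometheusrule network.rules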

Alertmanager and Slack notifications

Using the prometheus-operator, prometheus will be aware of alertmanager, so that’s one job done.
Adding notifications to slack is rather simple too.
Edit prometheus-operator-values.yaml under global / alertmanager to add the slack webhook, channel and filters:

config:
  global:
    slack_api_url: https://hooks.slack.com/services/FOO/BAR/BAZ
    resolve_timeout: 5m
  route:
    group_by: ['job']
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 15m
    receiver: 'slack-notifications'
    routes:
      - match:
          alertname: Watchdog
        receiver: 'null'
  receivers:
    - name: slack-notifications
      slack_configs:
        - channel: '#OMG_WERE_ALL_GONNA_DIE'
          title: "Network event detected"
          text: "<!channel>\n{{ .CommonAnnotations.message }}"
    - name: 'null'
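
Since these are helm values, they only take effect once the release is upgraded; something along these lines, where the release name is an assumption:

helm upgrade my-release stable/prometheus-operator -f prometheus-operator-values.yaml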

You can get fancy with filtering and the message but here I didn’t need to.

Improvements

This is a simple setup to get metrics out of a system that doesn’t expose them by default.
There is room for improvement; here are a few ideas (good or bad):
- Rewrite the probe in something more potent: a simpler way of parsing the output,
or maybe even probing the daemons directly, could make extending it easier.
We could also get rid of the cronjob, and maybe even implement the collector in node_exporter directly
and submit it upstream.
- Updating/deploying the probe could also be improved by implementing an API if you expect a lot of changes.
The examples above detect all existing configurations for VPN/BGP so they will adapt automatically
and won’t require a matching reconfiguration/deployment.
- I deploy all this using Ansible and a jenkins pipeline which are too specific for this article
but this is something you probably want. That or bake your own vyos image with the monitoring built-in.
- Add a mechanism to allow the prometheus scraper to discover new endpoints rather than using a static configuration.
- Add a timestamp to the metrics.


  1. I’m okay with installing a small agent/config once, but this should not become a config management/patching liability.
  2. In all honesty, I didn’t check.
