IVE ARP’d on for too long

The purpose of this blog is to highlight how different platforms respond to ARP requests and to explore some strange default behaviour on Juniper IVE VPN platforms. This quirk was found during a datacentre migration in which the top-of-rack/first-hop device changed from a Cisco IOS 6500 environment to a Nexus switching environment. The general setup looks like this, following an example customer with a Shared IVS setup:


In order to understand this scenario, it’s important to know what the Juniper IVE platform is and how it provides its VPN services.  To that end, I’ll give a brief overview of the platform before looking at the quirk.

IVE Platform

The Juniper 6500 IVE (Instant Virtual Extranet) platform is a physical appliance that offers customers a unique VPN solution linking to their MPLS network. Once connected, a home worker joins their corporate MPLS network just as if they were at a branch office.

(In order to avoid confusion between the Juniper 6500 IVE and the Cisco 6500 L3 switch – which also plays an important role in this setup but is a very different kind of device – I will just use the term IVE to refer to the Juniper platform.)

IVE Ports

As you can see from the diagram above, an IVE appliance has an external port and an internal port.

The external port, as its name implies, is typically assigned a public IP address. It also has virtual ports, which are analogous to sub-interfaces, each with their own IPs. Each of these virtual ports links to an individual customer's VPN platform, or to a shared VPN platform that holds multiple customer solutions. A common design places a firewall between the external interface and the internet. This allows the virtual interfaces to share the same subnet as the main external interface. Customer public IPs are destination NAT'd inbound (or MIP'd if you're using a Juniper firewall) to their corresponding virtual IPs.

The internal port similarly services multiple customers. It can be thought of as a trunk port, whereby each VLAN links to an individual customer's VRF, typically with an SVI as the gateway – sometimes combined with HSRP or another FHRP.

Shared or Dedicated

Customers can have either a Shared or a Dedicated VPN solution. These solutions are called IVSs (Instant Virtual Systems). You can have multiple IVSs on a single IVE appliance.

Shared IVS solutions represent a single multi-tenant IVS: multiple customers connect to the same IVS and are segmented by allocating them different sign-in pages and connection policies. Options are more limited than with a Dedicated IVS, but it can be more cost effective.

Dedicated IVS solutions give customers more flexibility. They can have more connected users and added customisation such as 2FA and multiple realms.

When an IVS is created it needs to link to the internal port. To do this, one or more VLANs can be assigned. If the platform is Dedicated, only a single VLAN needs to be assigned – namely that of the customer. This VLAN will link to an SVI in the customer's VRF. If the platform is Shared, multiple VLANs are assigned – one per customer. However, in this case a default VLAN also needs to be assigned for when the IVS needs to communicate on a network that is independent of any of its individual customers. Typically the Shared Authentication VLAN is used for this.

But what is the Shared Authentication VLAN? This leads to the next part of the setup… how users authenticate.


When a VPN user logs in from home, the credentials they enter on the sign-in page will need to be… well… authenticated. Much like the IVS solutions themselves, there are both Shared and Dedicated options.

Customers can have their own LDAP or RADIUS servers within their MPLS networks. In this case the IVE will make a request to this LDAP when a user connects. This is called Dedicated Authentication.

Alternatively, the Service Provider can offer a Shared Authentication solution. This saves the customer from having to build and maintain their own LDAP servers by utilising a multi-tenant platform managed by the Provider. The customer supplies the user details, and the Service Provider handles the rest.

Shared Authentication is typically used for Shared IVSs. In order to connect to the Shared Authentication server, a Shared IVS will allocate a VLAN – alongside all of its customer VLANs – on the internal trunk port. This links to the Provider's network (for example an internal VRF or VLAN) where the Shared Authentication servers reside. It is this VLAN that is assigned as the default VLAN for the Shared IVS.

The below screenshot is taken from the Web UI of the IVE platform. It shows some of the configuration for a Shared IVS (namely IVS123).  It uses a default VLAN called Shared_Auth_Network as noted by the asterisk in the bottom right table:


We’re nearly ready to look at the quirk. There is just one last thing to note regarding how a Shared IVS platform, like IVS123, communicates with one of its customers’ Authentication Servers.

Here is the key sentence to remember: When a Shared IVS platform communicates with any authentication server (shared or dedicated), it will use its Shared Auth VLAN IP as the source address in the IP packet.

This behaviour seems very counterintuitive, and I’m not sure why the IVS wouldn’t instead use a source IP from the VLAN belonging to that customer.

Whatever the reason for this behaviour, the result is that a Shared IVS platform communicating with one of its customers’ Dedicated authentication servers will send packets with a source IP from the Shared Auth VLAN. But such a customer isn’t using Shared Auth – their network doesn’t know or care about the Shared Auth environment. So when their Dedicated LDAP server receives an authentication request from the IVE, it sees the source IP address as being from this Shared Auth VLAN.

The solution, however, is easy enough (barring any IP overlaps): the customer simply places a redistributed static route into their VRF, pointing any traffic destined for the Shared Auth subnet back to the internal port of the IVE.
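As a sketch of that fix – the addresses below are placeholders I've invented, since the real ones aren't shown – the customer-side config might look something like this on an IOS PE:

```
! Placeholder addressing:  = Shared Auth subnet
! = the IVE's internal-port IP inside CUST_A's VLAN
ip route vrf CUST_A
!
router bgp 65000
 address-family ipv4 vrf CUST_A
  redistribute static
```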

To understand this better, let’s take a look at a diagram of the setup as a user attempts to connect:


Now we are equipped to investigate the quirk, which involves a customer on a Shared IVS platform with Dedicated LDAP Authentication Servers.

The quirk

As mentioned earlier, this quirk follows a migration of an IVE platform from an environment using Cisco IOS 6500 switches to an environment using Cisco Nexus switches.

In both environments, trunk ports connect to the internal IVE ports, with SVIs acting as gateways. The difference lies in the control and data planes used. The original IOS environment was a standard MPLS L3VPN network. The Nexus environment was part of a hierarchical VXLAN DC fabric: leaf switches connected directly to the IVEs and implemented anycast gateway on the SVIs. Prefix and MAC information was communicated over the BGP EVPN address family, and ASR9k DCIs acted as border leaves terminating the VTEPs, which were then stitched into the MPLS core.

The key difference, however, isn’t in the overlays or dataplane protocols being used. The key is how each ToR device responds to ARP…

Once the move was completed and the IVE was connected to the Nexus switches, everything seemed fine at first glance. Users with Dedicated IVSs worked. Users on Shared IVSs who utilised the Shared Auth server could also log in and authenticate correctly. However, a problem was found with any customer who had a VPN solution configured on a Shared IVS platform with Dedicated Authentication. Despite the customer login page showing up (implying that the public-facing external side was working), authentication requests to their Dedicated Auth Servers were failing.

Below is the Web UI output of a failed test connection to our example customer’s LDAP servers:


As we searched for a solution to this problem, we had to keep in mind how a Shared IVS Platform makes Auth Server requests…

The search

Focusing on just one of the customers on the Shared platform, we first checked how far a trace would get from the IVE to the Dedicated Auth Server. We found pretty quickly that the trace would not even reach the first hop – that is, the anycast gateway IP on the SVI of the Nexus leaf switch.


However, when checking from the Nexus – both routing and tracing – we saw we could reach the Dedicated Auth Server fine, as long as we sourced from the right VRF.

nexus1# sh ip route vrf CUST_A | b | head lines 5
, ubest/mbest: 2/0
    *via %default, [20/0], 7w2d, bgp-65000, external, tag 500
      (evpn) segid: 12345 tunnelid: 0xc39dfe04 encap: VXLAN
    *via %default, [20/0], 7w2d, bgp-65000, external, tag 500
      (evpn) segid: 12345 tunnelid: 0xc39dfe05 encap: VXLAN

nexus1# traceroute vrf CUST_A
traceroute to (, 30 hops max, 40 byte packets
1 ( 1.455 ms 1.129 ms 1.022 ms
2 ( 6.967 ms 6.928 ms 6.64 ms
3 ( 8.002 ms 7.437 ms 7.92 ms
4 ( 6.789 ms 6.683 ms 6.764 ms
5 * * *
6 ( 12.374 ms 0.704 ms 0.62 ms

This led us to check the Layer 2 between the switch and the IVE. We did this by checking the ARP table entries on the IVE. We immediately found that there were no ARP entries to be found for the ToR SVI for any customer on a Shared Platform with a Dedicated Authentication setup.

The output below shows the ARP table as seen from the console of the IVE. Note the incomplete ARP entry for the SVI on the Nexus for our example customer.

(As a quick aside, you may notice that the HWAddress of the Nexus is showing as 11:11:22:22:33:33. This is due to the fabric forwarding anycast-gateway-mac 1111.2222.3333 command being configured.)

Please choose from among the following options:
1. View/Set IP/Netmask/Gateway/DNS/WINS Settings
2. Print Routing Table
3. Print ARP Cache
4. Clear ARP Cache
5. Ping to a Server
6. Trace route to a Server
7. Remove Routes
8. Add ARP entry
9. View cluster status
10. Configure Management port (Enabled)

Choice: 3
Address       HWtype  HWaddress          Flags Mask  Iface
              ether   11:11:22:22:33:33  C           int0.2387
              ether   11:11:22:22:33:33  C           int0.1298
              ether   11:11:22:22:33:33  C           int0.2347
              (incomplete)                           int0.

So there is no ARP entry. But logically this appears to be more or less the same layer 2 segment as when it connected to the 6500. So what gives?

It turns out that 6500s and Nexus switches respond to ARP requests in different ways. The process on the 6500 is fairly standard and works as follows:


But a Nexus will not respond to an ARP request if the source IP is from a subnet that it doesn’t recognise:


In our example case, the Nexus switch does not recognise the Shared Auth VLAN address as a valid source IP on the receiving interface – it sees it as off-net. We could also see the ARP check failing by using debug ip arp packet on the switch.

So what’s the solution? There are a couple of ways to tackle this. We could add a static ARP entry on the IVE, but this could be cumbersome if we needed to add it for each Shared IVS. Alternatively, we could add a secondary IP, taken from the Shared Auth subnet, to the SVI…

The Work

Adding a secondary IP is fairly straightforward. The config would be as follows:

nexus1# sh run interface vlan 2301
interface Vlan2301
description Customer_A
no shutdown
bandwidth 2000
vrf member CUST_A
no ip redirects
ip address
ip address secondary
fabric forwarding mode anycast-gateway

A /31 works well in this case, encompassing only the two IPs that are needed. This allows the ARP request to pass the aforementioned check that the Nexus performs. From here the MAC entries began to show up and connectivity to the Dedicated Auth Servers began to work.

Please choose from among the following options:
1. View/Set IP/Netmask/Gateway/DNS/WINS Settings
2. Print Routing Table
3. Print ARP Cache
4. Clear ARP Cache
5. Ping to a Server
6. Trace route to a Server
7. Remove Routes
8. Add ARP entry
9. View cluster status
10. Configure Management port (Enabled)

Choice: 3
Address       HWtype  HWaddress          Flags Mask  Iface
              ether   11:11:22:22:33:33  C           int0.2387
              ether   11:11:22:22:33:33  C           int0.1298
              ether   11:11:22:22:33:33  C           int0.2347
              ether   11:11:22:22:33:33  C           int0.2301


So this raises the question of whether this behaviour is desirable. Should a device check the source IP before responding to an ARP request? I’d tend to lean in favour of this type of behaviour: it adds extra security, and besides, it’s actually the behaviour of the IVE that is strange in this case. One would think the IVS would use a source IP from the connecting customer’s subnet, instead of that of the Shared Auth VLAN. The behaviour is certainly unorthodox, but finding a solution to this problem highlights some of the interesting scenarios that can arise when working with different vendors and operating systems.

I hope you’ve enjoyed the read. I’m always open to alternate ideas or general discussion so if you have any thoughts, let me know.

Peering into the Future

Network automation is becoming more and more ubiquitous these days. Configuration generation is a good example of this – why spend time copying and pasting from prepared templates if a script can do it for you?

This small blog introduces the first Python script to be released on netquirks. The script is called PeerPal, and it automates the creation of Cisco eBGP peering configuration by referencing input from both a config file and details gathered via the peeringdb.com API. This serves as a good example of how network automation can make regular tasks faster, with fewer errors and more consistency.

The GitHub repo can be found here.

It works by taking a potential peer’s autonomous system number and checking with PeeringDB to find the Internet Exchanges at which both your ASN and theirs have a common presence. A list is then presented – first for IPv4, then for IPv6 – allowing you to select which locations to generate the peering config for. It can do this in either IOS or XR format. It reads the neighbor IPs, prefix limits and even IRR descriptions from PeeringDB and integrates them into the final output.
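At its core, that lookup is just a set intersection over PeeringDB's netixlan records (one record per ASN per IX LAN). A simplified sketch – the field names follow the public API, but the records here are invented and this is not necessarily how PeerPal structures it:

```python
def common_ixes(my_netixlans, peer_netixlans):
    """Return {IX name: (peer ipaddr4, peer ipaddr6)} for every IX LAN
    where both ASNs have a record - roughly what you get by querying
    /api/netixlan?asn=<ASN> for each side and intersecting on ix_id."""
    mine = {rec["ix_id"] for rec in my_netixlans}
    return {rec["name"]: (rec.get("ipaddr4"), rec.get("ipaddr6"))
            for rec in peer_netixlans if rec["ix_id"] in mine}

# Invented sample data in the shape of PeeringDB netixlan records:
netquirks = [{"ix_id": 18, "name": "LINX LON1"},
             {"ix_id": 31, "name": "DE-CIX Frankfurt"}]
acme      = [{"ix_id": 31, "name": "DE-CIX Frankfurt",
              "ipaddr4": "", "ipaddr6": "2001:abc:123::1"},
             {"ix_id": 59, "name": "France-IX Paris",
              "ipaddr4": ""}]

common_ixes(netquirks, acme)
# {'DE-CIX Frankfurt': ('', '2001:abc:123::1')}
```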

Other specifics of the peering, like your ASN, neighbor groups, MD5 passwords, ttl-security or what the operating system format should be, are all stored in a local config file. This can be customised per Internet Exchange.

The best way to demonstrate the script is to give a quick example. Let’s say the ISP netquirks (ASN 1234) wants to peer with ACME (ASN 5678). The script is run like this:

myhost:peerpal Steve$ python3 ./peerpal.py -p 5678
The following are the locations where Netquirks and ACME have 
common IPv4 presence:
(IPs for ACME are displayed)
1: LINX LON1 -
3: DE-CIX Frankfurt -,
4: IXManchester -
5: France-IX Paris -,
6: DE-CIX_Madrid -
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv4: 

The script first lists the Exchange names and their IPv4 IPs. Enter the Exchanges you want to peer at, and then do the same for IPv6…

myhost:peerpal Steve$ python3 ./peerpal.py -p 5678
The following are the locations where Netquirks and ACME have 
common IPv4 presence:
(IPs for ACME are displayed)
1: LINX LON1 -
3: DE-CIX Frankfurt -,
4: IXManchester -
5: France-IX Paris -,
6: DE-CIX_Madrid -
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv4: 2,4

The following are the locations where Netquirks and ACME have 
common IPv6 presence:
(IPs for ACME are displayed)
1: LINX LON1 - 2001:1111:1::50
2: CATNIX - 2001:2345:6789::ca7
3: DE-CIX Frankfurt - 2001:abc:123::1,2001:abc:123::2
4: IXManchester - 2001:7ff:2:2::ea:1
5: France-IX Paris - 2001:abab:1aaa::60,2001:abab:1aaa::61
6: DE-CIX_Madrid - 2001:7f9:e12::fa:0:1
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv6: 6

The output produced looks like this:

IPv4 Peerings:
The CATNIX IPv4 peerings are as follows:
Enter the following config onto these routers:

router bgp 1234
 neighbor remote-as 5678
 neighbor description AS-ACME
 neighbor inherit peer-session EXTERNAL
 address-family ipv4 unicast
  neighbor activate
  neighbor maximum-prefix 800 90 restart 60
  neighbor inherit peer-policy CATNIX

The IXManchester IPv4 peerings are as follows:
Enter the following config onto these routers:

router bgp 1234
  remote-as 5678
  use neighbor-group default_v4_neigh_group
  description AS-ACME
  address-family ipv4 unicast
   maximum-prefix 800 90 restart 60

router bgp 1234
 neighbor remote-as 5678
 neighbor description AS-ACME
 neighbor inherit peer-session peer-sess-mchr4
 neighbor ttl-security hops 1
 address-family ipv4 unicast
  neighbor activate
  neighbor maximum-prefix 800 90 restart 60
  neighbor inherit peer-policy peer-pol-mchr4

IPv6 Peerings:

The DE-CIX_Madrid IPv6 peerings are as follows:

router bgp 1042
 neighbor 2001:7f9:e12::fa:0:1 remote-as 5678
 neighbor 2001:7f9:e12::fa:0:1 description AS-ACME
 neighbor 2001:7f9:e12::fa:0:1 peer-group Mad1-6
 neighbor 2001:7f9:e12::fa:0:1 ttl-security hops 1
 address-family ipv6 unicast
  neighbor 2001:7f9:e12::fa:0:1 activate
  neighbor 2001:7f9:e12::fa:0:1 maximum-prefix 40 90 restart 60

From the output you can see that there are different specifics based on the internet exchange. Madrid uses ttl-security and peer-groups, whereas CATNIX doesn’t have ttl-security and uses peer session and policy templates. All of these specifics are stored in a local config file:

[DEFAULT]
as = 1234
op_sys = xr
ttl_sec = true
xr_neigh_grp_v4 = default_v4_neigh_group
xr_neigh_grp_v6 = default_v6_neigh_group
ios_neigh_grp_v4 = default_v4_peer_group
ios_neigh_grp_v6 = default_v6_peer_group

[CATNIX]
routers = cat-rtr1.netquirks.co.uk
op_sys = ios
ios_neigh_grp_v4 = EXTERNAL,CATNIX
ios_neigh_grp_v6 = EXTERNAL,CATNIX6
ttl_sec = false
[IXManchester]
routers = mchr-rtr1.netquirks.co.uk,mchr-rtr3.netquirks.co.uk
op_sys = both
ios_neigh_grp_v4 = peer-sess-mchr4,peer-pol-mchr4
ios_neigh_grp_v6 = peer-sess-mchr6,peer-pol-mchr6

[France-IX Paris]
xr_neigh_grp_v4 = FRANCE-NEIGH-IX
xr_neigh_grp_v6 = FRANCE-NEIGH-IXv6
ttl_sec = false

as = 1042
op_sys = ios
ios_neigh_grp_v4 = Mad1-4
ios_neigh_grp_v6 = Mad1-6
correction = DE-CIX_Madrid

The script generally follows the structure of reading from the more specific sections first. If an IX section contains a characteristic like ttl-security, the config for that exchange will use that characteristic; if it is absent, the config falls back on the DEFAULT section. There are a couple of exceptions to this, and full details can be found in the README file on the repo. The config file can also specify the routers to put the config onto, and can supply the name of an Internet Exchange if PeeringDB doesn’t have one set (DE-CIX_Madrid is an example of this, as shown above). Again, full details are in the README.

This gives a brief introduction to PeerPal. It’s not a revolutionary script by any means but will hopefully come in handy for anyone working on peering or BGP configurations on a regular basis. Future planned features include pushing the actual config to the routers and conducting automated checks to make sure that prefixes and traffic levels adhere to your peering policy – watch this space.

So feel free to clone the repo and give it a go. Thoughts and comments welcome as always.


The A to Zabbix of Trapping & Polling

Monitoring is one of the most crucial parts of running any network. There are many tools available to perform network monitoring, some more flexible than others. This quirk looks at the Zabbix monitoring platform – more specifically, how you can use combined SNMP polling and trapping triggers to monitor an IP network, based on Zabbix version 3.2.

The blog assumes you’re already familiar with the workings of Zabbix. However, if you aren’t, the following section gives a whistle-stop tour from the perspective of discovering and monitoring network devices using SNMP. If you are already familiar with Zabbix, skip to The quirk section below.

Zabbix – SNMP Monitoring Overview

Zabbix can do much (much) more than I’ll outline here, but if you’re not familiar with it, I’ll describe roughly how it works in relation to this quirk.

The Zabbix application is installed on a central server, with the option of having one or more proxy servers that relay information back to the central server. Zabbix has the capability to monitor a wide range of environments, from cloud storage platforms to LAN switching. It uses a variety of tools to accomplish this, but here I’ll focus on its use of SNMP.

Anything that can be exposed in an SNMP MIB can be detected and monitored by Zabbix. Examples of metrics or values that you might want to monitor in a networking environment include:

  • Interface states
  • Memory and CPU levels
  • Protocol information (neighbor IPs, neighborship status, etc.)
  • System uptime
  • Spanning-tree events
  • HA failover events

In Zabbix these metrics/values are called items. A device that is being monitored is referred to as a host.

Zabbix monitors items on hosts by both SNMP polling and trapping. It can, for example, poll a switch’s interfaces every 5 minutes and alert if a poll response comes back stating the interface is down (the ifOperStatus OID is good for this). Alternatively an item can be configured to listen for traps. If a switch interface drops, and that switch sends an SNMP trap (either to the central server or one of its proxies), Zabbix can pick this up and trigger an alert.

So how is it actually configured and setup?

The configuration of Zabbix to monitor SNMP follows these basic steps. Zabbix-specific terms have been coloured red:

  • Add a new host into Zabbix – including its IP, SNMP community and name. The device in question will need to have the appropriate read-only SNMP community configured and have trapping/polling allowed to/from the Zabbix address.


  • Configure items for that host – An item can reference a poll (e.g. poll this device for its CPU usage) or a trap (e.g. listen for an ‘interface up/down’ trap).


  • Configure triggers that match particular expressions relating to one or more items. For example, a trigger could be configured to match against the ‘CPU usage’ item receiving a value (through polling) of 90 or more (e.g. 90% CPU). The trigger will then move from an OK state to a PROBLEM state. When the trigger clears (more on that below) it will move from a PROBLEM state back to an OK state.


  • Configure actions that correspond to triggers moving to a PROBLEM state – options depend on the severity level of the trigger but could be something like sending an email, or integrating with the API of something like PagerDuty to send an SMS.

This process is pretty simple on the face of things, but what happens if you have 30 switches with 48 interfaces each? You couldn’t very well configure 30×48 items to monitor interface states. That’s a lot of copy and pasting!

Thankfully, Zabbix has two features that allow for large scale deployments like this: 

Templates – Templates allow you to configure what are called prototype items and triggers. These prototypes are all bundled into one common template. You can then apply that template to multiple devices and they will all inherit the items and triggers without them needing to be configured individually.

Low Level Discovery – LLD allows you to discover multiple items based on SNMP tables. For example, if you create an LLD rule with the SNMP OID ifIndex ( as the Key, Zabbix will walk that table and discover all of its interfaces. You can then take the index of each row in the table and use it to create items and triggers based on other SNMP tables. For example, after discovering all the rows of the ifIndex table, you could use the SNMP index in each row to find the ifOperStatus of each of those interfaces. It doesn’t matter whether the host has 48 or 8 interfaces – they will all be added using this LLD. Here’s an example of the principle using snmpwalk:
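(The switch address and community string below are placeholders.)

```
$ snmpwalk -v2c -c public IF-MIB::ifIndex
IF-MIB::ifIndex.1 = INTEGER: 1
IF-MIB::ifIndex.2 = INTEGER: 2
IF-MIB::ifIndex.3 = INTEGER: 3

$ snmpget -v2c -c public IF-MIB::ifOperStatus.2
IF-MIB::ifOperStatus.2 = INTEGER: up(1)
```

First the ifIndex table is walked to discover the rows; each discovered index is then reused against another table, here ifOperStatus, to build per-interface items.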


Now, this is a very high-level overview of Zabbix – just a brief snapshot for those who haven’t worked with it before.

Before mentioning the specifics of this quirk, I’ll go into a little more detail on how triggers work, since they play a crucial role…

A trigger is an expression that is applied to an item and, as you might expect, is used to detect when a problem occurs. A trigger has two states: OK or PROBLEM. To detect when a problem occurs, a trigger uses an aptly named problem expression. The problem expression is basically a statement that describes the conditions under which the trigger should go off (e.g. move from OK to PROBLEM).

Examples of a problem expression could be “the last poll of interface x on switch y indicates it is down” or “the last trap received from switch y indicates interface x is down”.

Triggers also have a recovery expression. This is sort of the opposite of a problem expression. Once a trigger goes off, it will remain in the PROBLEM state until the problem expression is no longer true. If the problem expression evaluates to false, the trigger moves to looking at the recovery expression (if one exists). At this point, the trigger will stay in a PROBLEM state until the recovery expression becomes true. The distinction to pay attention to here is that even though the original condition that caused the trigger to go off is no longer true, the trigger remains in a PROBLEM state until the recovery expression is true. Most importantly, the recovery expression is not evaluated until the problem expression is false. Remember this for later.
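The state logic can be sketched in a few lines of Python – my paraphrase of the behaviour described above, not Zabbix internals:

```python
def next_state(state, problem, recovery):
    """One evaluation step of a trigger that has a recovery expression.
    'problem' and 'recovery' are the current truth values of the
    problem and recovery expressions."""
    if state == "OK":
        return "PROBLEM" if problem else "OK"
    # In PROBLEM: the recovery expression only matters once the
    # problem expression has gone false
    if not problem and recovery:
        return "OK"
    return "PROBLEM"

next_state("PROBLEM", problem=False, recovery=False)  # stays "PROBLEM"
next_state("PROBLEM", problem=False, recovery=True)   # back to "OK"
```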

So with all of that said. Let’s take a look at the quirk.

The quirk

This quirk explores how to configure triggers within Zabbix to use both polling and trapping to monitor a network device such as a router or switch.

To illustrate the idea I will keep it simple – interface states. Imagine a template applied to a switch that uses LLD to discover all of the interfaces using the ifIndex table.

Two item prototypes are created:

One that polls the interface state (ifOperStatus) every 5 minutes


One that listens for traps about interface states – either going down (for example listening for linkDown traps) or coming up (for example listening for linkUp traps)

The question is, how should the trigger be configured? We do not want to miss an interface that flaps. If an interface drops, we want the trigger to move to a PROBLEM state. But if our trigger is just monitoring the polling item, and the interface goes down and comes back up within a polling cycle, then Zabbix won’t see the flap.

To illustrate these concepts, I’ll use a diagram that shows a timeline together with what polling and trapping information is received by Zabbix. It uses the following legend:


This first diagram illustrates how Zabbix could “miss” an interface flap, if it occurs between polling responses:


You can see here, that without trapping, as far as Zabbix is concerned the interface never drops.

So what if we just make our trigger monitor traps?

This also runs into trouble when you consider that SNMP runs over UDP and there is no guarantee that a trap will get through (especially if the interface drop affects routing or forwarding). Worse still, if the trap stating that the interface is down (the DOWN trap) makes it to Zabbix but the recovery trap (the UP trap) doesn’t, then the trigger will never recover!


It appears that both approaches on their own have setbacks. The logical next step would be to look at combining the best of both worlds – i.e. configure a trigger that will move to a PROBLEM state if it receives a DOWN trap or a poll sees the interface as down. That way, one backs the other up. The idea looks like this:


Seems simple enough. However, the quirk arises when you realise there is still a problem with this approach… namely, if the UP trap is missed, the trigger will still not recover.

To understand why, we’ll look at the logic of the trigger expression. The trigger expression is a disjunction – an or statement. The two parts of this or statement are:

The last poll states the interface is down


The last trap received indicates the interface is down

A disjunction only requires one of the parts to be true for the whole expression to be true.

Consider this scenario: a DOWN trap is received, making the second part of the expression true. The trigger moves to a PROBLEM state. So far so good. Now imagine a few minutes later the interface comes back up, but the UP trap is never received by Zabbix. Because this is a disjunction, even if the last poll shows the interface as up, the second half of the expression is still true – as far as Zabbix is concerned, the last trap it received showed the interface as down. As a result the alert will never clear (meaning the trigger will never move from PROBLEM back to OK).
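You can see the deadlock by plugging values into the disjunction (a truth-table sketch, not Zabbix syntax):

```python
def or_trigger(last_poll_down, last_trap_down):
    # The 'or' trigger from above: in PROBLEM while either half is true
    return last_poll_down or last_trap_down

# DOWN trap received, then the interface recovers but the UP trap is
# lost: polls say up, yet the last trap on record still says down...
or_trigger(last_poll_down=False, last_trap_down=True)  # True - stuck in PROBLEM
```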


There needs to be some way to configure the combination of the two that doesn’t leave the trigger in a PROBLEM state. When searching for a solution, the Recovery Condition comes into play…

The search

To focus on finding a solution we will first look at solving the missing UP trap problem. For now, don’t worry about polling.

Let’s say we have a trigger with the following trigger expression:

The last trap received indicates the interface is down

Then clearly, if the trigger has gone off and we miss the UP trap when the interface recovers, this alert will never clear. So what if we combine this, using an and statement, with something else – something that will, no matter what, eventually become false? Since an and statement is a conjunction, both parts will need to be true. We can then use the recovery expression to control when the trigger moves back to an OK state.

We can leverage polling for this since, if the interface is down, polling will eventually detect it. So our trigger expression changes to this:

The last trap received indicates the interface is down


The last poll states the interface is up

At first this might seem counterintuitive given what we looked at above, but consider that when an interface drops and the switch sends a trap to Zabbix stating that the interface is down, the last poll that Zabbix made to the switch should have shown the interface as up – hence both statements are true and the trigger correctly moves to a PROBLEM state.

But as soon as polling catches up and detects that the interface is down, the second part of our trigger expression will become false. This makes the whole trigger expression false (since it is a conjunction) and the trigger will recover and move back to an OK state.


Now this is obviously not good. The interface is, after all, still down! But we can use the recovery expression to control when the trigger recovers.

Remember from earlier that if a recovery expression exists, it will be looked at once the problem expression becomes false.

We can’t just configure a recovery expression on its own, without the above tweak, since as long as the problem expression stays true the recovery expression will be ignored.

From here the solution is simple. Our recovery expression simply states

The last two polls that we received stated the interface was up.

This means that as soon as polling detects that the interface is down, the problem statement becomes false and the recovery expression is looked at. Now, until two polls in a row detect that the interface is up, the trigger will stay in a PROBLEM state.


Interestingly, what we’ve essentially done is solve the missing UP trap problem by removing the need to rely on UP traps at all! After two UP polls the trigger recovers (note the blue line of the timeline in the above diagram). You could optionally add an ‘…or an UP trap is received’ clause to the recovery expression to make the recovery quicker.

But there is a caveat to this case…

Consider what happens if an interface flaps within a polling cycle, meaning as far as polling is concerned, the interface never goes down. This would mean that, in the event that the UP trap is missed, the problem statement will never become false. This means the trigger will never recover and we’re back to square one…


What we need is something that will inevitably cause the trigger statement to become false. Using polling doesn’t work because as we have seen, it can “miss” an interface flap.

Fortunately Zabbix has a function called nodata which can help us. The function can be found here and works as follows:

nodata(x) returns 1 (true) if the referenced item has received no data in the last x seconds, and 0 (false) if it has.

To better understand this, let’s see what happens if we remove the statement The last poll states the interface is up, and replace it with one that implements this function. Our trigger statement would then become the following:

The last trap received indicates the interface is down

AND

There has been some trap data received in the last x seconds (where x is bigger than the polling interval)

The second part of this conjunction is represented by trap.nodata(350) = 0 (e.g. “It is false that there has been no trap information received in the last 350 seconds” which basically means “you have received some trap information in the last 350 seconds”).

Once the 350 seconds expires that statement becomes false and the trigger moves to looking at the recovery expression. Remember our polling interval was 5 minutes, or 300 seconds. 

The value x must be at least as long as a polling interval; this gives polling a chance to catch up, as it were. Consider a scenario where x is less than a single polling interval and the interface drops just after the last poll. The nodata(x) expression will expire before the next poll comes through. When this happens, the trigger statement is false, so Zabbix will move on to the Recovery Expression (which states that the last two polls are up). Zabbix will see the last two polls as up and the trigger will recover while the interface is still down!


If x is bigger than the polling interval, polling can catch up and the trigger behaves correctly.
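To make this concrete, the trap-based Problem Expression could be sketched roughly as follows. This is only a sketch – the host and item keys are hypothetical, and the exact function syntax varies between Zabbix versions:

```
# Hypothetical trap-based Problem Expression.
# True when the last trap contains "link down" AND some trap data has
# arrived within the last 350 seconds (350s > the 300s polling interval).
{switch01:snmptrap["Interface"].str("link down")}=1 and
{switch01:snmptrap["Interface"].nodata(350)}=0
```

The nodata(350) = 0 clause is what guarantees the expression eventually becomes false, even if the interface flaps between polls.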


Now that we have solved this we can reintroduce polling into the trigger. Remember that the initial DOWN trap could still be missed. We saw that there were problems when trying to integrate polling and trapping together into a trigger’s Problem Expression, but we can easily create a single poll-based trigger.

This trigger can be relatively simple. The Problem Expression simply states that the last two polls show the interface as down. There doesn’t need to be a Recovery Expression, since when the trigger sees two UP polls it can recover without problems.

Now we’ve got another problem though. We don’t want two triggers to go off for just one event. Thankfully Zabbix has trigger dependencies. If we configure the poll-based trigger to only move to a PROBLEM state if the trap-based trigger is not in a PROBLEM state, then the poll-based trigger effectively acts as a backup to the trap-based one. I’ll explore the exact configuration of this in the work section.

Once this has been configured you’ll have a working solution that supports both polling and trapping without having to worry about alerts not triggering or clearing when they should. Let’s take a look at how this is configured in the Zabbix UI.

The Work

In this section I will show screenshots of the triggers that are used in the aforementioned solution. I haven’t shown the configuration of the LLD or of any corresponding Actions (which result in email or text messages being sent), but Zabbix has excellent documentation on how to configure these features.

First we’ll look at the trapping configuration:


The Name field can use variables based on OIDs (like ifDescr and ifAlias) that are defined in the Low Level Discovery rule, so the trigger contains meaningful information about the affected interface. The trigger expression references the trap item that listens for interface down traps.

The trap item itself looks at the log output produced by snmptrapd passing traps through an SNMPTT config file. This process parses incoming traps and creates log entries, which trap items can then match against.

In this case, the item matches against log entries containing the string

“Link up on interface {#SNMPINDEX}” – which is produced when a linkup trap is received

OR

“Link down on interface {#SNMPINDEX}” – which is produced when a linkdown trap is received

where {#SNMPINDEX} is the interface’s index in the ifIndex table.

In this trigger expression the trap item is referenced twice. Firstly, it checks that the trap item contains the “link down” substring (i.e. a down trap has been received for that ifIndex). Secondly, it uses nodata() = 0 (false) – meaning “some trap data has been received in the past 350 seconds”.

This matches the pseudo-expression we have above:

The last trap received indicates the interface is down

AND

There has been some trap data received in the last x seconds (where x is bigger than the polling interval).

If a trap is received stating the interface is up, the trap item will no longer contain the string “link down” – rather it will contain “link up”, so the first part will become false.

Alternatively, if no trap is received in 350 seconds (either UP or DOWN), the second half of the AND statement will become false. The polling interval is less than 350 seconds, so if the up trap is missed polling will have a chance to catch up.

Either way, the trigger will eventually look at the recovery expression. The recovery expression references the ifOperStatus item and the ifAdminStatus item.

The recovery expression basically states:

IF

The last two polls of the interface’s operational state show it as up

OR

The last poll of the interface’s administrative state shows it as down (i.e. someone has issued ‘shutdown’ on the interface, if it’s an interface on a Cisco device)

THEN recover.

The second half of the disjunction is used to account for scenarios where an engineer deliberately shut down an interface – in which case you would not want the alert to persist.
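As a sketch (hypothetical item keys, using the IF-MIB convention of 1 = up and 2 = down for ifOperStatus and ifAdminStatus), the Recovery Expression could look something like:

```
# Recover when the last two ifOperStatus polls are both up (the max of
# the last 2 values is 1), OR the interface is administratively down.
{switch01:ifOperStatus[{#SNMPINDEX}].max(#2)}=1 or
{switch01:ifAdminStatus[{#SNMPINDEX}].last()}=2
```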

Next we’ll look at the polling trigger:


This one is much simpler. The trigger will go off if the last two polls of the interface indicate that the operational state is down (2) AND the admin state is up (1) – meaning that it hasn’t been manually shut down by an engineer.
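In Zabbix expression form this poll-based trigger might be sketched as follows (again, the host and item keys are illustrative):

```
# PROBLEM when the last two ifOperStatus polls are both down (the min of
# the last 2 values is 2) AND the interface is administratively up.
{switch01:ifOperStatus[{#SNMPINDEX}].min(#2)}=2 and
{switch01:ifAdminStatus[{#SNMPINDEX}].last()}=1
```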

Finally, the last trick to making this solution work is in the dependencies tab of this trigger prototype:


In this screen, the trap-based trigger has been selected as a dependency for the poll-based trigger. This means that the poll-based trigger will only go off if the trap-based trigger hasn’t gone off.

So that’s the work involved in configuring the actual triggers and it brings us to the end of this quirk. It demonstrates how to combine polling and trapping into Zabbix triggers to allow for consistent and correct alerting.

Zabbix has a wide range of functions and capabilities – far more than what I’ve outlined here. There may very well be another way to accomplish the same goal, so as usual, any thoughts or ideas are welcome.

The Friend of my Friend is my Enemy

Imagine you’re a provider routing a PI space prefix for one of your customers. Now imagine that one of your IX peers starts to advertise a more specific subnet of that customer network to you. How would, and how should, you forward traffic destined for that prefix? This quirk looks at just such a scenario from the point of view of an ISP that adheres to BCP38 best practice filtering policies…

The quirk

So here’s the scenario:


In this setup Xellent IT Ltd is both a customer and a provider. It provides transit for ACME Consulting but it is a customer of Provider A. ACME owns PI space and chooses to implement some traffic engineering. It advertises a /23 to Xellent IT and a /24 to Provider B.

Now Provider B just happens to peer with Provider A over a public internet exchange. The quirk appears when traffic from the internet, destined to the /24, enters Provider A’s network – especially when you consider that Provider A implements routing policies that adhere to BCP38.

But first, what is BCP38?

You can read it yourself here, but in short, it is a Best Current Practice document that advocates prefix filtering to minimise threats like DDoS attacks. It does this by proposing inbound PE filtering on customer connections that blocks traffic whose source address does not match a known downstream customer network. DDoS attacks often use spoofed source addresses, so if every provider filtered traffic from their customers to make sure that the source address was from the right subnet (and not spoofed), these kinds of DoS attacks would disappear overnight.

To quote the BCP directly:

In other words, if an ISP is aggregating routing announcements for multiple downstream networks, strict traffic filtering should be used to prohibit traffic which claims to have originated from outside of these aggregated announcements.
BCP38 – P. Ferguson, D. Senie

To put it in diagram form, the basic idea is as follows:


A provider can also implement outbound filtering to achieve the same result. That is to say, outbound filters can be applied at peering and transit points to ensure that the source addresses of any packets sent out come from within the customer cone of the provider (a customer cone is the set of prefixes sourced by a provider, either as PI or PA space, that makes up the address space of its customer base). This can be done in conjunction with, or instead of, the inbound filtering approach.


There are multiple ways a provider can build their network to adhere to BCP38. As an example, an automated tool could be built that references an RIR database like RIPE. This tool could perform recursive route object lookups on all autonomous systems listed in the provider’s AS-SET and build an ACL that blocks all outbound border traffic whose source address is not in that list.
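As an illustration of the idea (not necessarily the tool a given provider would use), bgpq3 can resolve an AS-SET recursively against the IRR databases and emit a ready-made Cisco prefix-list; the AS-SET name below is made up:

```
# Resolve a (hypothetical) AS-SET into a prefix-list named CUSTOMER-CONE,
# recursively expanding member ASes and their registered route objects
bgpq3 -l CUSTOMER-CONE AS-EXAMPLE:AS-CUSTOMERS
```

Output like this could then be fed into an automated pipeline that rebuilds the border ACLs whenever the customer cone changes.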

Regardless of the method used, this quirk assumes that Provider A is using both inbound and outbound filtering. But as we’ll see, it is the outbound filtering that causes all the trouble… here’s the traffic flow:


Now you might ask why the packet would follow this particular path. Isn’t Provider B advertising the more specific /24 it receives from ACME? How come the router that sent the packet to Provider A over the transit link can’t see the /24?

There are a number of reasons for this and it depends on how the network of each Autonomous System along the way is designed. However, one common reason could be a traffic engineering service offered by Internet Providers called prefix scoping.

Prefix scoping allows a customer to essentially tell its provider how to advertise its prefix to the rest of the internet. This is done by including predetermined BGP communities in the prefix advertisements. The provider will recognise these communities and alter how they advertise that prefix to the wider internet. This could be done through something like route-map filtering on these communities.

In this scenario, perhaps Provider B is offering such a service. ACME may have chosen to attach the ‘do not advertise this prefix to your transit provider x’ community to its BGP advertisement to Provider B. As a result, the /24 prefix doesn’t reach the router connecting to Provider A over its transit link, so it forwards according to the /23.
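On Provider B’s side, such a scoping service might be implemented with something like the following IOS-style sketch. The community value, names, AS numbers and neighbour address are all illustrative:

```
! Community 64497:100 = "do not advertise this prefix to transit provider X"
ip community-list standard NO-TRANSIT-X permit 64497:100
!
route-map TO-TRANSIT-X deny 10
 match community NO-TRANSIT-X
route-map TO-TRANSIT-X permit 20
!
router bgp 64497
 neighbor 192.0.2.1 remote-as 64496
 neighbor 192.0.2.1 route-map TO-TRANSIT-X out
```

Any prefix ACME tags with 64497:100 is then silently withheld from the transit session while still being advertised everywhere else.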

This is just one example of how traffic can end up at Provider A. For now, let’s get back to the life of this packet as it enters Provider A.

Upon receipt of the packet destined for the /24, Provider A’s border router will look in its routing table to determine the next hop. Because it is more specific, the /24 learned over peering will be seen in the RIB as the best path, not the /23 from the Xellent IT link. The packet is placed in an LSP (assuming an MPLS core) with a next hop of the border router that peers with Provider B at the Internet Exchange.

You can probably see what’s going to happen. When Provider A’s border router at the Internet Exchange tries to forward the packet to Provider B it has to pass through an outbound ACL. This ACL has been built in accordance with BCP38. The ACL simply checks the source address to make sure it is from within the customer cone of Provider A. Since the source address is an unknown public address sourced from off-net, the packet is dropped.

Now this is inherently a good thing isn’t it? Without this filtering, Provider A would be providing transit for free! However, it does pose a problem after all, since traffic for one of its customers subnets is being blackholed.

From here, ACME Consulting gets complaints from its customers that they can’t access their webserver. ACME contacts its transit providers and, before you know it, an engineer at Provider B has done a traceroute and calls Provider A to ask why the final hop in the failed trace ends in Provider A’s network.

So where to from here? What should Provider A do? It doesn’t want to provide transit for free, and its policy states that BCP38 filtering must be in place. Let’s explore the options.

The Search

Before I look at the options available, it’s worth pausing here to reference an excellent paper by Pierre Francois of the Université catholique de Louvain entitled Exploiting BGP Scoping Services to Violate Internet Transit Policies. It can be read here, and it describes the principles underlying this quirk at a higher level, shedding light on why it happens. I won’t go into exhaustive detail – I highly recommend reading the paper yourself – but to summarise, there are 3 conditions that come together to cause this problem.

  1. The victim Provider whose policy is violated (Provider A) receives the more specific prefix only from peers or transit providers.
  2. The victim Provider also has a customer path towards the less specific prefix.
  3. Some of the victim Provider’s peers or transit providers did not receive the more specific path.

This is certainly what is happening here. Provider A sees a /24 from its peer (condition 1), a /23 from its customer (condition 2) and the Transit router that forwards the packet to Provider A cannot see the /24 (condition 3). The result of these conditions is that the packet is being forwarded from AS to AS based on a combination of the more specific route and the less specific route. To quote directly from Francois’ paper:

The scoping being performed on a more specific prefix might no longer let routing information for the specific prefix be spread to all ASes of the routing system. In such cases, some ASes will route traffic falling to the range of the more specific prefix, p, according to the routing information obtained for the larger range covering it, P.
Exploiting BGP Scoping Services to Violate
Internet Transit Policies – Pierre Francois

So what options does Provider A have? How can it ensure that traffic isn’t dropped, but at the same time, make sure it can’t be abused into providing free transit for off-net traffic? Well there’s no easy answer but there are several solutions that I’ll consider:

  • Blocking the more specific route from the peer
  • Asking Xellent IT Ltd to advertise the more specific
  • Allowing the transit traffic, but with some conditions

I’ll try to argue that allowing the transit traffic, but only as an exception, is the best course of action. But before that, let’s look at the first two options.

Let’s say Provider A applies an inbound route-map on its peering with Provider B (and all other peers and transits for that matter) to block any advertised prefixes that fall within its own customer cone (basically, stopping its own prefixes being advertised towards itself from a non-customer). Provider A would then see Provider B advertising the /24, recognise it as part of Xellent IT’s supernet, and block it.
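A rough IOS-style sketch of that inbound policy might look like this, with 198.51.100.0/23 standing in for Xellent IT’s supernet (all names and addresses are illustrative):

```
! Block any prefix falling inside our own customer cone when it is
! learned from a peer or transit session
ip prefix-list CUSTOMER-CONE seq 10 permit 198.51.100.0/23 le 24
!
route-map FROM-PEER deny 10
 match ip address prefix-list CUSTOMER-CONE
route-map FROM-PEER permit 20
```

The "le 24" means the /23 itself and any more specific /24 within it are both caught.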

This would certainly solve the problem of attempting to forward the traffic out of the Internet Exchange. Unfortunately, there are two crushing flaws with this approach.

Firstly, it undermines the intended traffic engineering employed by ACME and comes with all the inherent problems of asymmetric routing. For example, traffic ingressing back into ACME via Xellent IT could get dropped by a session-based firewall that it didn’t go through on its way out. Asymmetric routing is a perfect example of the problems that can result from some ASes forwarding on the more specific route and others forwarding on the less specific route.

Second, consider what happens if the link to Xellent IT goes down, or if Xellent IT stops advertising the /23. Suddenly Provider A has no route to the /24 network. Provider A is, in essence, relying on a customer to reach part of the internet (this is of course assuming Provider A is not relying on any default routing). This would not only undermine ACME’s dual homing, but would also stop Provider A’s other customers reaching ACME’s services.


Clearly blocking the more specific doesn’t solve anything. Traffic might get through Provider A, but it is still being forwarded on a combination of prefix lengths, and Provider A could end up denying its other customers’ traffic from reaching a part of the internet. Not a good look for an internet provider.

What about asking Xellent IT to advertise the more specific? Provider A could then simply prefer the /24 from Xellent IT using local preference. This approach has problems too: ACME isn’t actually advertising the /24 to Xellent IT. Xellent IT would need to ask ACME to do so; however, they may not wish to impose such a restriction on their customer. The question then becomes: does Provider A have the right to make such a request? They certainly can’t enforce it.

There is perhaps a legal argument to be made that by not advertising the more specific Provider A is losing revenue. This will be illustrated when we look at the third option of allowing off-net traffic. I won’t broach the topic of whether or not Provider A could approach Xellent IT and ask for advertisement of the more specific due to revenue loss, but it is certainly food for thought. For now though, asking Xellent IT to advertise the more specific is perhaps not the preferred approach.

Let’s turn to the third option, which sees Provider A adjust its border policies by adding to its BCP38 ACL. Not only should this ACL permit traffic with source addresses from its customer cone, it should also permit traffic that is destined to prefixes in its customer cone. The idea looks like this:
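A minimal IOS-style sketch of the amended egress ACL might look like this. 198.51.100.0/23 again stands in for the customer cone; a real cone would be many entries, likely auto-generated as described earlier:

```
ip access-list extended BCP38-OUT
 remark traffic sourced from within the customer cone
 permit ip 198.51.100.0 0.0.1.255 any
 remark NEW exception: traffic destined to the customer cone
 permit ip any 198.51.100.0 0.0.1.255
 deny ip any any
```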


Now this might look OK. Off-net transit traffic to random public addresses (outside of Provider A’s customer cone) is still blocked, and ACME’s traffic isn’t. But this special case of off-net transit opens the door for abuse in a way that could cause Provider A to lose money.

Here’s how it works. For the sake of this explanation, I’ve removed Xellent IT and made ACME a direct customer of Provider A. I’ve also introduced a third service provider.


  • ACME dual homes itself by buying transit from Providers A and B. Provider A happens to charge more.
  • ACME advertises its /23 PI space to Provider A.
  • Its /24 is advertised to Provider B, with a prefix scoping attribute that tells Provider B not to advertise the /24 on to any transit providers.
  • As a result, Provider C cannot see the more specific /24. Traffic from Provider C traverses Provider A, then Provider B, before arriving at ACME.


As we’ve already discussed, this violates BCP38 principles and turns Provider A into free transit for off-net traffic. But of perhaps greater importance is the loss of revenue that Provider A experiences. No one is paying for the increased traffic volume across Provider A’s core and Provider A gains no revenue from the increase – since it only crosses free peering boundaries. Provider B benefits as it sees more chargeable bandwidth used on its downstream link to ACME. ACME benefits since it can use the cheaper connection and utilize Provider A’s peering and transit relationships for free. If ACME had a remote site connecting to Provider C, GRE tunnels across Provider A’s core could further complicate things.

If ACME was clever enough and used looking glasses and other tools to discover the forwarding path, then there clearly is potential for abuse.

Having said all of that, I would argue that if this is done on a case-by-case basis, in a reactive way, it is an acceptable solution.

For example, in this scenario, as long as traffic flows don’t reach too high a volume (which can be monitored using something like NetFlow) and only this single subnet is permitted, then for the sake of maintaining network reachability this is a reasonable exception. It is not likely that ACME is being deliberately malicious, and as long as this exception is monitored, the revenue loss would be minuscule and allowing a one-off policy violation seems acceptable.

Rather than try to account for these scenarios beforehand, the goal would be to add exceptions and monitor them as they crop up. There are a number of ways to detect when these policy violations occur. In this case, the phone call and traceroute from Provider B is a good way to spot the problem. Regrettably, that does require something to go wrong for it to be found and fixed (meaning a disrupted service for the customer). There are ways to detect these violations a priori, but I won’t detail them here. Francois’ paper presents the option of using an open-source IP accounting tool like pmacct, which is worth reading about.

If off-net transit traffic levels increase, or more policy violations start to appear, more aggressive tactics might need to be looked at. Though for this particular quirk, allowing the transit traffic as an exception and monitoring its throughput seems to me to be a prudent approach.

Because I’ve spoken about this at a very high level, I won’t include a work section with CLI output. I could show an ACL permitting outbound but this quirk doesn’t need that level of detail to understand the concepts.

So that’s it! A really fascinating conundrum that is as interesting to figure out as it is to troubleshoot. I’d love to hear if anyone has any thoughts or possible alternatives. I toyed with the idea of using static routing at the PE facing the customer, or assigning a community to routes received from peering that are in your customer cone and reacting to that somehow, but both of those ideas ran into similar problems to the ones I’ve outlined above. Let me know if you have any other ideas. Thanks for reading.


This blog introduces PBB-EVPN over an MPLS network. But rather than just describe the technology from scratch, I have tried to structure the explanation assuming the reader is familiar with plain old MPLS L3VPN and is new to PBB and/or EVPN. This was certainly the case with me when I first studied this topic and I’m hoping others in a similar position will find this approach insightful.

I won’t be exploring a specific quirk or scenario – rather I will look at EVPN followed by PBB, giving analogies and comparisons to MPLS L3VPN as I go, before combining them into PBB-EVPN. I will focus on how traffic is identified, learned and forwarded in each section.

So what is PBB-EVPN? Well, besides being hard to say 3 times fast, it is essentially an L2VPN technology. It enables a Layer 2 bridge domain to be stretched across a Service Provider core while utilizing MAC aggregation to deal with scaling issues.

Let’s look at EVPN first.


EVPN, or Ethernet VPN, over an MPLS network works on a similar principle to MPLS L3VPN. The best way to conceptualize the difference is to draw an analogy (colour coded to highlight points of comparison)…

MPLS L3VPN assigns PE interfaces to VRFs. It then uses MP-BGP (with the vpnv4 unicast address family) to advertise customer IP Subnets as VPNv4 routes to Route Reflectors or other PEs. Remote PEs that have a VRF configured to import the correct route targets, accept the MP-BGP update and install an ipv4 route into the routing table for that VRF.

EVPN uses PE interfaces linked to bridge-domains with an EVI. It then uses MP-BGP (with the l2vpn evpn address family) to advertise customer MAC addresses as EVPN routes to Route Reflectors or other PEs. Remote PEs that have an EVI configured to import the correct route target, accept the MP-BGP update and install a MAC address into the bridge domain for that EVI.

This analogy is a little crude, but in both cases packets or frames destined for a given subnet or MAC will be imposed with two labels – an inner VPN label and an outer Transport label. The Transport label is typically communicated via something like LDP and corresponds to the next-hop loopback of the egress PE. The VPN label is communicated in the MP-BGP updates.

These diagrams illustrate the comparison:


In EVPN, customer devices tend to be switches rather than routers. PE-CE routing protocols, like eBGP, aren’t used since it operates over layer 2. The Service Provider appears as one big switch. In this sense, it accomplishes the same as VPLS but (among other differences) uses BGP to distribute MAC address information, rather than using a full mesh of pseudowires.

EVPN uses an EVI, or EVPN Instance identifier, to identify a specific instance of EVPN as it maps to a bridge domain. For the purposes of this overview, you can think of an EVI as being quasi-equivalent to a VRF. A customer facing interface will be put into a bridge domain (layer 2 broadcast domain), which will have an EVI identifier associated with it.

The MAC address learning that EVPN utilizes is called control-plane learning, since it is BGP (a control-plane routing protocol) that distributes the MAC address information. This is in contrast to data-plane learning, which is how a standard switch learns MAC addresses – by associating the source MAC address of a frame with the receiving interface.

The following Cisco IOS-XR config shows an EVPN bridge domain and edge interface setup, side by side with a MPLS L3VPN setup for comparison:


NB. For the MPLS L3VPN config, the RD configuration (which usually sits under the BGP VRF config) is not shown. PBB config is shown in the EVPN bridge domain; this will be explained further into the blog.
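For readers without the screenshot to hand, here is a rough IOS-XR sketch of the two setups side by side. Names, IDs and addresses are illustrative, and the exact PBB/EVPN syntax varies by release:

```
! PBB-EVPN: an edge bridge domain (C-MACs, attachment circuits) linked
! to a core bridge domain (B-MACs) that carries the EVI
l2vpn
 bridge group CUST1
  bridge-domain CUST1-EDGE
   interface GigabitEthernet0/0/0/1
   pbb edge i-sid 10001 core-bridge CUST1-CORE
  bridge-domain CUST1-CORE
   pbb core
    evpn evi 1001
!
! MPLS L3VPN equivalent for comparison
vrf CUST1
 address-family ipv4 unicast
  import route-target 65000:1
  export route-target 65000:1
interface GigabitEthernet0/0/0/2
 vrf CUST1
 ipv4 address 192.0.2.1 255.255.255.0
```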

EVPN seems simple enough at first glance, but it has a scaling problem, which PBB can ultimately help with…

Any given customer site can have hundreds or even thousands of MAC addresses, as opposed to just one subnet (as in an MPLS L3VPN environment). The number of updates and withdrawals that BGP would have to send could be overwhelming if it needed to make adjustments for MAC addresses appearing and disappearing – not to mention the memory requirements. And you can’t summarise MAC addresses like you can IP ranges. It would be like an MPLS L3VPN environment advertising /32 prefixes for every host rather than just one prefix for the subnet. We need a way to summarise or aggregate the MAC addresses.

Here’s where PBB comes in…

PBB – Provider Backbone Bridging (802.1ah)

PBB can help solve the EVPN scaling issue by performing one key function – it maps each customer MAC address to the MAC address of the attaching PE. Customer MAC addresses are called C-MACs. The PE MAC addresses are called B-MACs (or Bridge MACs).

This works by adding an extra layer 2 header to the frame as it is forwarded from one site to another across the provider core. The outer layer 2 header has a destination B-MAC address of the PE device that the inner frame’s destination C-MAC is associated with. As a result, PBB is often called MAC-in-MAC. This diagram illustrates the concept:


NB. In PBB terminology the provider devices are called Bridges. So a BEB (Backbone Edge Bridge) is a PE and a BCB (Backbone Core Bridge) is a P. For the sake of simplicity, I will continue to use PE/P terminology. Also worth noting is that PBB diagrams often show service provider devices as switches, to illustrate the layer 2 nature of the technology – which I’ve done above.

In the above diagram the SID (or Service ID) represents a layer 2 broadcast domain similar to what an EVI represents in EVPN.

Frames arriving on a PE interface will be inspected and, based on certain characteristics, mapped or assigned to a particular Service ID (SID).

The characteristics that determine what SID a frame belongs to can be a number of things:

  • The customer assigned VLAN
  • The Service Provider assigned VLAN
  • Existing SID identifiers
  • The interface it arrives on
  • A combination of the above or other factors

To draw an analogy to MPLS L3VPN – the VRF that an incoming packet is assigned to is determined by whatever VRF is configured on the receiving interface (using ip vrf forwarding CUST_1 in Cisco IOS interface CLI).

Once the SID has been allocated, the entire frame is then encapsulated in the outer layer 2 header with destination MAC of the egress PE.

In this way C-MACs are mapped to either B-MACs or local attachment circuits. Most importantly, however, the core P routers do not need to learn all of the customers’ MAC addresses; they only deal with the MAC addresses of the PEs. This allows a PE to aggregate all of the attached C-MACs for a given customer behind its own B-MAC.

But how does a remote PE learn which C-MAC maps to which B-MAC?

In PBB, learning is done in the data-plane, much like a regular layer 2 switch. When a PE receives a frame from the PBB core, it will strip off the outer layer 2 header and make a note of the source B-MAC (the ingress PE). It will map this source B-MAC to the source C-MAC found on the inner layer 2 header. When a frame arrives on a local attachment circuit, the PE will map the source C-MAC to the attachment circuit in the usual way.

PBB must deal with BUM traffic too. BUM traffic is Broadcast, Unknown unicast or Multicast traffic. An example of BUM traffic is the arrival of a frame for which the destination MAC address is unknown. Rather than broadcast like a regular layer 2 switch would, a PBB PE will set the destination MAC address of the outer layer 2 header to a special multicast MAC address that is built based on the SID and includes all the egress PEs that are part of the same bridge domain. EVPN uses a different method of handling BUM traffic, but I will go into that later in the blog.

Overall, PBB is more complicated than the explanation given here, but this is the general principle (if you’re interested, see section 3 of my VPLS, PBB, EVPN and VxLAN Diagrams document, which details how PBB can be combined with 802.1ad to add an aggregation layer to a provider network).

Now that we have the MAC-in-MAC features of PBB at our disposal, we can use it to solve the EVPN scaling problem and combine the two…


With the help of PBB, EVPN can be adapted so that it deals with only the B-MACs.

To accomplish this, each EVPN EVI is linked to two bridge domains. One bridge domain is dedicated to customer MAC addresses and connected to the local attachment circuits. The other is dedicated to the PE routers’ B-MAC addresses. Both of these bridge domains are combined under the same bridge group.


The PE devices use data-plane learning to build a MAC database, mapping each C-MAC to either an attachment circuit or the B-MAC of an egress PE. Source C-MAC addresses are learned and associated as traffic flows through the network, just like in PBB.

The overall setup would look like this:


The only thing EVPN needs to concern itself with is advertising the B-MACs of the PE devices. EVPN uses control-plane learning and includes the B-MACs in the MP-BGP l2vpn evpn updates. For example, if you were to look at the MAC addresses known to a particular EVI on a route-reflector, you would only see MAC addresses for PE routers.

Looking again at the configuration output that we saw above, we can get a better idea of how PBB-EVPN works:


NB. I have added the concept of a BVI, or Bridged Virtual Interface, to the above output. This can be used to provide a layer 3 breakout or gateway similar to how an SVI works on a L3 switch.

You can view the MAC address information using the following command:


Now let’s look at how PBB-EVPN handles BUM traffic. Unlike PBB on its own, which just sends to a multicast MAC address, PBB-EVPN will use unicast (ingress) replication and send copies of the frame to all of the remote PEs that are in the same EVI. This is an EVPN mechanism, and the PE knows which remote PEs belong to the same EVI by looking in what is called a flood list.

But how does it build this flood list? To learn that, we need to look at EVPN route-types…

MPLS L3VPN sends VPNv4 routes in its updates, but EVPN sends more than one “type” of update. The type of update, or route-type as it is called, denotes what kind of information is carried in the update. The route-type is part of the EVPN NLRI.

For the purposes of this blog we will only look at two route-types.

  • Route-Type 2s, which carry MAC addresses (analogous to VPNv4 updates)
  • Route-Type 3s, which carry information on the egress PEs that belong to an EVI.

It is these Route-Type 3s (or RT-3s for short) that are used to build the flood list.

When BUM traffic is received by a PE, it will send copies of the frame to all of its attachment circuits (except the one it received the frame on) and all of the PEs for which it has received a Route-Type 3 update. In other words, it will send to everything in its flood-list.
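The flood-list mechanism above can be sketched as follows. This is a hypothetical Python model (the function names and the EVI/PE labels are invented for illustration):

```python
# Hypothetical model: Route-Type 3 updates populate a per-EVI flood
# list, and BUM frames are replicated to local ACs plus every PE in
# that list.
from collections import defaultdict

flood_lists = defaultdict(set)  # evi -> set of remote PEs

def receive_rt3(evi, pe):
    # A Route-Type 3 from `pe` announces that it participates in `evi`
    flood_lists[evi].add(pe)

def forward_bum(evi, local_acs, ingress_ac):
    # Copy to every local AC except the one the frame arrived on,
    # plus a unicast copy to each remote PE in the flood list
    copies = [ac for ac in local_acs if ac != ingress_ac]
    copies += sorted(flood_lists[evi])
    return copies

receive_rt3("EVI-100", "PE2")
receive_rt3("EVI-100", "PE3")
print(forward_bum("EVI-100", ["AC1", "AC2"], ingress_ac="AC1"))
# ['AC2', 'PE2', 'PE3']
```

If a PE withdraws its RT-3 (for example, when it loses its last attachment circuit in the EVI), it simply drops out of the flood list.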

So the overall process for a BUM packet being forwarded across a PBB-EVPN backbone will look as follows:


So that’s it, in a nutshell. In this way PBB and EVPN can work together to create an L2VPN network across a Service Provider.

There are other aspects of both PBB and EVPN, such as EVPN multi-homing using Ethernet Segment Identifiers or PBB MAC clearing with MIRP to name just a couple, but the purpose of this blog was to provide an introductory overview – specifically for those used to dealing with MPLS L3VPN. Thoughts are welcome, and as always, thank you for reading.

Multihoming without a PE-to-CE Dynamic Routing Protocol

This quirk looks at how a multihomed site without a CE-to-PE routing protocol, like eBGP, can run into failover problems when using a first hop redundancy protocol.

The setup is as follows:


The CE routers in this case are Cisco 887 routers. The WAN connections are ADSL lines. From the CE routers, PPP sessions connect to the provider LNS/BNG routers (PE1 and PE2). These PPP sessions run over L2TP tunnels between the LAC and LNS. RADIUS is used by the LNS routers to authenticate the PPP sessions and to obtain IP and routing attributes.

CE1 and CE2 are running HSRP. CE1 is Active. The CE LAN interfaces are switchports and the IP/HSRP configurations are on SVIs for the access VLAN. Both CEs have a static default route pointing to the dialer interface for their respective WAN connections. CE1 tracks its dialer interface so that it can lower its HSRP priority if the WAN connection fails (allowing CE2 to take over).

Outbound traffic is routed via the HSRP Active router.

Inbound traffic works as follows:

When an LNS router authenticates a PPP session, it will send an Access-Request to the RADIUS server. The RADIUS server, when sending its Access-Accept to confirm the user is valid, will also return RADIUS attributes that the LNS parses and applies to its configuration. For example, the attributes can indicate what IP to assign to the user – a Framed-IP that will show on the dialer interface of the CE. The Framed-Route AVP (Attribute Value Pair) can also be used to install static routes.

In this scenario Framed-IP and Framed-Route RADIUS attributes (among others not detailed here) are returned, which gives a WAN IP to the CE and installs a static route onto the LNS router. Each PPP session has one or more LAN ranges associated with it. The static route points traffic for these LAN ranges to the Framed-IP assigned for the PPP session.

The site in this scenario has a /28 network assigned to it. The primary PPP session from CE1 receives two static routes – one for each of the two /29s that make up the /28. The secondary PPP session from CE2 receives a single /28 static route.

These static routes are redistributed into the iBGP running in the service provider network. In the event that a PPP session drops, the associated static routes will be removed from the LNS routers.

Under normal circumstances, incoming traffic will follow either of the two more specific /29s down the primary WAN connection.

There are other ways to prefer one WAN connection over another (using BGP attributes when redistributing, or similar), but I’ve used this subnet-splitting approach for simplicity.

In the event that the primary WAN connection fails, the following occurs:

For outbound traffic: CE1 lowers its HSRP priority allowing CE2 to take over. Outgoing traffic now goes via CE2.

For inbound traffic: The PPP session on PE1 will drop and both of the /29 static routes will be removed. This leaves only the /28 via the secondary WAN connection, so inbound traffic is forwarded down that path.


But what happens if the FastEthernet0 LAN interface on CE1 fails?

HSRP will fail over, meaning outbound traffic will leave the site via the secondary WAN connection as expected.

However, because the PPP session does not drop, the two /29 static routes to CE1 remain in place. Return traffic will traverse the primary WAN link and end up at CE1. CE1 has no route to the destination and will send the traffic back out over its default route. The traffic then loops until the TTL decrements to zero. The site has lost connectivity.


A reconfiguration is needed in order to handle this situation, which is sometimes called “LAN-side failover”.

The Search

The first and most obvious question might be, why not run a routing protocol, like eBGP, between the PEs and CEs? The PE router would learn about the LAN range over this protocol rather than having static routes. The CEs would use redistribute connected and in the event that the LAN failed, this advertisement would cease.

There are a couple of reasons why you might not want to run a dynamic PE-to-CE routing protocol. Firstly, there could be a lot of incoming subscriber sessions on the LNS routers. The overhead of running that many eBGP sessions might be too high compared to simply using RADIUS attributes. Secondly, not all CPEs support BGP, or whatever PE-to-CE protocol you want to run. Granted, an 887 can, but not all devices have this capability.

So with that said, let’s look at some options for how to deal with this issue…

There are several options to resolve this quirk. I’ll explore two of them here, each of which takes a different approach.

The first option is to ensure that in the event that the LAN interface goes down, the CE router automatically brings down the WAN connection.

Depending on the CPE used, there can be multiple ways to do this. In the case of a Cisco 887, a good way is EEM scripting. The EEM script can be triggered by a tracking object for the LAN interface. You will also need to make sure a second EEM script is configured to bring the WAN link back up when the LAN link is restored. I will show an example of such a script below.

An alternative approach is to ensure that there is a direct link between the Active and Standby routers in addition to the regular LAN link. Both LAN connections on each CE router would be in the same VLAN, allowing connection to the SVI. This means that if Fa0 dropped, HSRP would not fail over. Traffic leaving the site would still go via CE1, but it would pass through CE2 first, using the direct link between them.


As a side note, one might mistakenly think that CE2, upon receiving outbound traffic, would forward it directly out of its WAN interface in accordance with its default route (causing asymmetric routing when the traffic returns via CE1). But this doesn’t happen. What needs to be remembered is that the router’s interfaces are switchports and the destination MAC address will still be 0000.0c07.acxx (where xx is the HSRP group number in hex). CE1 still holds this MAC, meaning CE2 will switch the frame onwards through its switchport rather than routing the traffic.
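For reference, the HSRPv1 virtual MAC is simply the group number, in hex, appended to the well-known prefix. A trivial sketch:

```python
# HSRP version 1 virtual MAC: 0000.0c07.acXX, where XX is the group
# number rendered as two hex digits.
def hsrp_v1_vmac(group):
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers are 0-255")
    return f"0000.0c07.ac{group:02x}"

print(hsrp_v1_vmac(10))   # 0000.0c07.ac0a
```

So for standby group 10, frames from LAN hosts are addressed to 0000.0c07.ac0a, which only the Active router answers for.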

In my experience this option is preferable. A single cable run and an access-port configuration is all that is needed. EEM scripts can be unreliable at times and might not trigger when they should. Having said that, if this needs to be done on the CPE after deployment and remote hands are not possible, the EEM script might be the best approach.

The Work

The general HSRP setup could be as follows:

hostname CE1
interface Vlan10
 description SVI for LAN
 ip address
 standby 10 ip
 standby 10 priority 200
 standby 10 preempt
 standby 10 track 1 decrement 150
track 1 interface Dialer0 ip routing

The EEM script described above will need to trigger when Fa0 goes down. For that, the following tracker is used:

track 2 interface FastEthernet0 line-protocol

This EEM script will shut down the WAN connection if the tracker goes down and restore it if the tracker comes back up:

event manager applet LAN_FAILOVER_DOWN
 event track 2 state down
 action 1.0 syslog msg "Fa0 down. Shutting down controller interface"
 action 2.0 cli command "enable"
 action 3.0 cli command "configure terminal"
 action 4.0 cli command "controller vdsl 0"
 action 5.0 cli command "shutdown"
 action 6.0 cli command "end"
 action 7.0 syslog msg "Controller interface shutdown complete"
event manager applet LAN_FAILOVER_UP
 event track 2 state up
 action 1.0 syslog msg "Fa0 up. Enabling controller interface."
 action 2.0 cli command "enable"
 action 3.0 cli command "configure terminal"
 action 4.0 cli command "controller vdsl 0"
 action 5.0 cli command "no shutdown"
 action 6.0 cli command "end"
 action 7.0 syslog msg "Controller interface enabled."

When Fa0 drops, the syslog entries look like this:

Feb 27 14:42:18 GMT: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0, changed state to down
Feb 27 14:42:19 GMT: %TRACKING-5-STATE: 2 interface Fa0 line-protocol
Feb 27 14:42:19 GMT: %HA_EM-6-LOG: LAN_FAILOVER_DOWN: Fa0 down. Shutting down controller interface
Feb 27 14:42:19 GMT: %CONTROLLER-5-UPDOWN: Controller VDSL 0, changed state to administratively down
Feb 27 14:42:19 GMT: %SYS-5-CONFIG_I: Configured from console by on
Feb 27 14:42:19 GMT: %HA_EM-6-LOG: LAN_FAILOVER_DOWN: Controller interface shutdown complete

And when it is restored…

Feb 27 14:43:53 GMT: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to up
Feb 27 14:43:53 GMT: %HA_EM-6-LOG: LAN_FAILOVER_UP: Fa0 up. Enabling controller interface.
Feb 27 14:43:54 GMT: %SYS-5-CONFIG_I: Configured from console by on
Feb 27 14:43:54 GMT: %HA_EM-6-LOG: LAN_FAILOVER_UP: Controller interface enabled.
Feb 27 14:44:54 GMT: %CONTROLLER-5-UPDOWN: Controller VDSL 0, changed state to up

The second option is simpler and does not require much configuration at all. All we’d need to do is run a cable from Fa1 on CE1 to Fa1 on CE2 and put the following configuration under Fa1:

interface fa1
 description link to other CE for LAN failover
 switchport mode access
 switchport access vlan 10

There isn’t much else to show for this solution other than to reiterate that, with this in place, HSRP would not fail over and traffic in both directions would flow via CE2’s switchports.

There are other ways to tackle this problem that I have not detailed here (using etherchannel on the LAN perhaps, or something involving floating static routes) and any alternative ideas would be good to hear about and interesting to discuss. Thanks for reading.


MPLS Management misconfiguration

There are many different ways for ISPs to manage MPLS devices like routers and firewalls that are deployed to customer sites. This quirk explores one such solution and looks at a scenario where a misconfiguration results in VRF route leaking between customers.

The quirk

When an ISP deploys Customer Edge (CE) devices to customer sites they might, and often do, want to maintain management access. For customers with a simple public internet connection this is usually straightforward – the device is reachable over the internet and an ACL or similar policy will be configured, allowing access from only a list of approved ISP IP addresses (for extra security, VPNs could be used).

However, when Peer-to-Peer L3VPN MPLS is used, it is more complicated. The customer network is not directly accessible from the internet without going through some kind of breakout site. The ISP will either need a link into their customer’s MPLS network or must configure access through the breakout. This can become complicated as the number of customers, and the number of sites per customer, increases.

One option, presented in this quirk, is to have all MPLS customers’ PE-CE WAN subnets come from a common supernet range. These WAN subnets can then be exported into a common management VRF using a specific RT. The network used to demonstrate this looks as follows:


This is available for download as a GNS3 lab from here. It includes the solution to the quirk as detailed below.

The ISP’s ASN is 500. The two customers have ASNs 100 and 200 (depending on the setup these would typically be private ASNs, but they are shown here as 100 and 200 for simplicity). A management router (MGMT) in ASN 64512 has access to the PE-CE WAN ranges for all of the customers, all of which come from the common supernet. A special subnet within this range is reserved for the Management network itself. The MGMT router, or MPLS jump box as it may also be called, is connected to this range – as would be any other devices requiring access to the MPLS customers’ devices (backup or monitoring systems for instance… not shown).

The basic idea is that each customer VRF exports its PE-CE WAN ranges with an RT of 500:501. The MGMT VRF then imports this RT.

Alongside this, the MGMT VRF exports its own routes (from the supernet) with an RT of 500:500. All of the customer VRFs import 500:500.

This has two key features:

  • Customer WAN ranges all come from the common supernet and must not overlap between customers.
  • WAN ranges and site subnets are not, at any point, leaked between customer VRFs.
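The two features above follow from simple RT set-intersection: a route is imported into a VRF when the RTs attached to it intersect the VRF’s import list. This hypothetical Python sketch (the prefix labels are invented) makes that concrete:

```python
# Hypothetical model of RT-based import. A route is imported when the
# RTs it carries intersect the importing VRF's import set.

def imported(vrf_import_rts, advertisements):
    # advertisements: list of (prefix_label, set_of_rts)
    return [p for p, rts in advertisements if rts & vrf_import_rts]

ads = [
    ("CUST1-WAN", {"500:1", "500:501"}),  # export map adds 500:501 to WANs
    ("CUST1-LAN", {"500:1"}),             # LANs carry only the VRF's own RT
    ("CUST2-WAN", {"500:2", "500:501"}),
    ("MGMT-NET",  {"500:500"}),           # MGMT VRF exports 500:500
]

# MGMT imports 500:500 and 500:501 -> sees every WAN plus its own net
print(imported({"500:500", "500:501"}, ads))

# CUST_2 imports 500:2 and 500:500 -> its own routes plus MGMT only
print(imported({"500:2", "500:500"}, ads))
```

Note that no customer LAN route ever carries 500:501, so it can never land in another VRF under this scheme.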

To get a better idea of how it works, take a look at the following diagram:


The CLI for each customer VRF setup looks as follows:

ip vrf CUST_1
 description Customer_1_VRF
 rd 500:1
 vpn id 500:1
 export map VRF_EXPORT_MAP
 route-target export 500:1
 route-target import 500:1
 route-target import 500:500
route-map VRF_EXPORT_MAP permit 10
 match ip address prefix-list VRF_WANS_EXCEPT_MGMT
 set extcommunity rt 500:501 additive
route-map VRF_EXPORT_MAP permit 20
ip prefix-list VRF_WANS_EXCEPT_MGMT seq 10 deny le 32
ip prefix-list VRF_WANS_EXCEPT_MGMT seq 20 permit le 32

Note that the export map used on customer VRFs makes a point of excluding the Management subnet. This is done on the off chance that the range exists within the customer’s VRF table.

The VRF for the Management network is configured as follows (note this is only configured on CE3 in the above lab):

ip vrf MGMT_VRF
 description VRF for Management of Customer CEs
 rd 500:500
 vpn id 500:500
 route-target export 500:500
 route-target import 500:500
 route-target import 500:501

This results in the WAN ranges for customers being tagged with the 500:501 RT but not the LAN ranges.

PE1#sh bgp vpnv4 unicast vrf CUST_1
BGP routing table entry for 500:1:, version 9
Paths: (1 available, best #1, table CUST_1)
  Advertised to update-groups:
    1         3

  Local from (
      Origin incomplete, metric 0, localpref 100, weight 32768, valid, 
       sourced, best
      Extended Community: RT:500:1 RT:500:501
      mpls labels in/out 23/aggregate(CUST_1)

PE1#sh bgp vpnv4 unicast vrf CUST_1
BGP routing table entry for 500:1:, version 3
Paths: (1 available, best #1, table CUST_1)
  Advertised to update-groups:

  100 from (
      Origin incomplete, metric 0, localpref 100, valid, external, best
      Extended Community: RT:500:1
      mpls labels in/out 24/nolabel
PE1#

The prefix shown above is one of the LAN ranges and does not have the 500:501 RT.

Every VRF can see the management network and the management network can see all the PE-CE WAN ranges for every customer:

PE1#sh ip route vrf CUST_2

Routing Table: CUST_2
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1
       L2 - IS-IS level-2, ia - IS-IS inter area, * - candidate default
       U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

B [20/0] via, 01:32:17
 is subnetted, 3 subnets
B [200/0] via, 01:32:09
B [200/0] via, 01:32:09
C is directly connected, FastEthernet1/0
B [200/0] via, 01:32:09

PE3#sh ip route vrf MGMT_VRF

Routing Table: MGMT_VRF
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1
       L2 - IS-IS level-2, ia - IS-IS inter area, * - candidate default
       U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

 is subnetted, 4 subnets
C is directly connected, FastEthernet0/0
B [200/0] via, 01:32:24
B [200/0] via, 01:32:24
B [200/0] via, 01:32:24


Also, note that the routing table for Customer 2 (vrf CUST_2) cannot see the WAN range for Customer 1 (vrf CUST_1).

Given the proper config, the MGMT router can access the WAN ranges for customers:

Trying ... Open

User Access Verification

NB. I’m not advocating using telnet in such an environment. Use SSH as a minimum when you can.

The quirk comes in when a simple misconfiguration introduces route leaking between customer VRFs.

Consider an engineer accidentally configuring a VRF that exports all its vpnv4 prefixes with RT 500:500 (rather than only exporting its PE-CE WAN routes with RT 500:501 as described above). The mistake is easy enough to make and will cause routes from the newly configured VRF to be imported by all other customer VRFs. This will have a severe impact on any customer with the same route within their VRF.

To demonstrate this, imagine that the CUST_1 VRF is not yet configured. A traceroute from Customer 2 Site 2 (CE2-2, on the lower left side of the diagram) to Customer 2 Site 1 (CE1-2) works fine:

CE2-2#trace source lo1

Type escape sequence to abort.
Tracing the route to
 1 12 msec 24 msec 24 msec
 2 [AS 500] [MPLS: Labels 16/24 Exp 0] 92 msec 64 msec 44 msec
 3 [AS 500] [MPLS: Label 24 Exp 0] 48 msec 68 msec 52 msec
 4 [AS 500] 116 msec 88 msec 104 msec


If the CUST_1 VRF is now setup with the aforementioned misconfiguration, route leaking between CUST_1 and CUST_2 will result:

PE1(config)#ip vrf CUST_1
PE1(config-vrf)# description Customer_1_VRF
PE1(config-vrf)# rd 500:1
PE1(config-vrf)# vpn id 500:1
PE1(config-vrf)# route-target export 500:1
PE1(config-vrf)# route-target import 500:1
PE1(config-vrf)# route-target export 500:500
PE1(config-vrf)# interface FastEthernet0/1
PE1(config-if)# description Link to CE 1 for Customer 1
PE1(config-if)# ip vrf forwarding CUST_1
PE1(config-if)# ip address
PE1(config-if)# duplex auto
PE1(config-if)# speed auto
PE1(config-if)# no shut
PE1(config)#router bgp 500
PE1(config-router)# address-family ipv4 vrf CUST_1
PE1(config-router-af)# redistribute connected
PE1(config-router-af)# redistribute static
PE1(config-router-af)# neighbor remote-as 100
PE1(config-router-af)# neighbor description Customer 1 Site 1
PE1(config-router-af)# neighbor activate
PE1(config-router-af)# neighbor default-originate
PE1(config-router-af)# neighbor as-override
PE1(config-router-af)# neighbor route-map CUST_1_SITE_1_IN in
PE1(config-router-af)# no synchronization
PE1(config-router-af)# exit-address-family

VRF CUST_1 will export its routes (including those from Customer 1 Site 1 – CE1-1) and the VRF CUST_2 will import these routes due to the RT of 500:500.

Looking at the BGP and routing tables for the CUST_2 VRF shows that the next hop is now the CE1-1 router.

PE1#sh ip route vrf CUST_2
Routing entry for
  Known via "bgp 500", distance 20, metric 0
  Tag 100, type external
  Last update from 00:02:45 ago
  Routing Descriptor Blocks:
  * (CUST_1), from, 00:02:45 ago
      Route metric is 0, traffic share count is 1
      AS Hops 1
      Route tag 100

PE1#sh bgp vpnv4 unicast vrf CUST_2
BGP routing table entry for 500:2:, version 21
Paths: (2 available, best #1, table CUST_2)
  Advertised to update-groups:

  100, imported path from 500:1: from (
      Origin incomplete, metric 0, localpref 100, valid, external, best
      Extended Community: RT:500:1 RT:500:500

  200 (metric 20) from (
      Origin incomplete, metric 0, localpref 100, valid, internal
      Extended Community: RT:500:2
      Originator:, Cluster list:
      mpls labels in/out nolabel/24


There are now two possible paths to reach the destination: one imported from the CUST_1 VRF and one from its own VRF (via CE1-2). The path via AS 100 is being preferred because it is an external path, and external beats internal in BGP best-path selection. Note the 500:500 RT on this path.

Once this is done, CE2-2 cannot reach its 192.168.50/24 subnet on CE1-2.

CE2-2#trace source lo1
Type escape sequence to abort.

Tracing the route to
1 8 msec 12 msec 12 msec
2 * * *
3 * * *
4 * * *
...output omitted for brevity

Granted, this issue is caused by a mistake, but the difference between the correct and incorrect commands is minimal. An engineer under pressure or working quickly could potentially disrupt a massive MPLS infrastructure resulting in outages for multiple customers.

The search

As mentioned at the beginning of this blog, there are multiple ways to manage an MPLS network.

One possibility is to have a single router that, rather than importing and exporting WAN routes based on RTs, has a single loopback address in each VRF. It is from these loopbacks that the router sources SSH or telnet sessions to the customer CE devices. For example:

interface loopback 1
 description Loopback source for Customer 1
 ip vrf forwarding CUST_1
 ip address
interface loopback 2
 description Loopback source for Customer 2
 ip vrf forwarding CUST_2
 ip address

MGMT# telnet /vrf CUST_1

This has a number of advantages:

  • This router acts as a single jump host (rather than a subnet), which could be considered more secure
  • There is no restriction on the WAN addresses for each customer. They can be any WAN range at all and can overlap between customers.
  • The same IP address can be used for each VRF’s loopback (as long as it doesn’t clash with any existing IPs already in the customer’s VRF).

However there are a number of disadvantages:

  • Each VRF must be configured on this jump router
  • This jump router is a single point of failure
  • The command to log on is more complex and requires users to know the VRF’s exact name rather than just the router IP.
  • Migrating to this solution, from the aforementioned RT import/export solution, would be a cumbersome and lengthy process.
  • Centralised MPLS backups could be complicated if there is not a common subnet reachable by all CE devices.

For these reasons it was decided not to use this solution. Instead, import filtering was used to prevent this issue from taking place even if the misconfiguration occurred. The import filtering uses a route-map that performs the following sequential checks:

    1. If a route has the RT 500:500 and is from the management range, allow it.
    2. If any other route has the RT 500:500, deny it.
    3. Allow the import of all other routes.

Essentially, rather than just importing everything tagged 500:500, this route-map checks that a vpnv4 prefix actually comes from the management range. The biggest issue in this scenario was the deployment of this route-map to all VRFs on all PEs. But with a little bit of scripting (I won’t go into the details here), this was far more practical than the option of deploying a multi-VRF jump router.

The work

The route map described in the above section looks as follows:

ip extcommunity-list standard VRF_MGMT_COMMUNITY permit rt 500:500
ip prefix-list VRF_MGMT_LAN seq 5 permit le 32
route-map VRF_IMPORT_MAP permit 10
 match ip address prefix-list VRF_MGMT_LAN
 match extcommunity VRF_MGMT_COMMUNITY
route-map VRF_IMPORT_MAP deny 20
 match extcommunity VRF_MGMT_COMMUNITY
route-map VRF_IMPORT_MAP permit 30

NB. This is a good example of AND/OR operations in a route-map. If the match types differ (in this case a prefix-list and an extcommunity-list), the operation is treated as a conjunction (AND). If the types are the same, it is a disjunction (OR).

This will prevent the issue from occurring as it will stop the import of any vpnv4 prefix that has an RT of 500:500 unless it is from the management range.

Here is the configuration of this import map on PE1 (the other PEs are not shown but it should be configured on them too):

PE1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PE1(config)# ip extcommunity-list standard VRF_MGMT_COMMUNITY permit 
rt 500:500
PE1(config)#ip prefix-list VRF_MGMT_LAN seq 5 permit 
le 32
PE1(config)#route-map VRF_IMPORT_MAP permit 10
PE1(config-route-map)# match ip address prefix-list VRF_MGMT_LAN
PE1(config-route-map)# match extcommunity VRF_MGMT_COMMUNITY
PE1(config-route-map)#route-map VRF_IMPORT_MAP deny 20
PE1(config-route-map)# match extcommunity VRF_MGMT_COMMUNITY
PE1(config-route-map)#route-map VRF_IMPORT_MAP permit 30
PE1(config-route-map)#ip vrf CUST_2
PE1(config-vrf)#import map VRF_IMPORT_MAP

After this addition, in the event that the misconfiguration takes place when creating the CUST_1 VRF, the import map will block the leaked subnet. The only path that the CUST_2 VRF has to the subnet is via CE1-2, which is correct. Here is the configuration and resulting verification:

PE1(config)#ip vrf CUST_1
PE1(config-vrf)# description Customer_1_VRF
PE1(config-vrf)# rd 500:1
PE1(config-vrf)# vpn id 500:1
PE1(config-vrf)# route-target export 500:1
PE1(config-vrf)# route-target import 500:1
PE1(config-vrf)# route-target export 500:500
PE1#sh ip route vrf CUST_2
Routing entry for
  Known via "bgp 500", distance 200, metric 0
  Tag 200, type internal
  Last update from 00:22:12 ago
  Routing Descriptor Blocks:
  * (Default-IP-Routing-Table), from, 00:22:12 ago
    Route metric is 0, traffic share count is 1
    AS Hops 1
    Route tag 200

PE1#sh bgp vpnv4 unicast vrf CUST_2
BGP routing table entry for 500:2:, version 12
Paths: (1 available, best #1, table CUST_2)
Advertised to update-groups:
  200 (metric 20) from (
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Extended Community: RT:500:2
      Originator:, Cluster list:
      mpls labels in/out nolabel/24
CE2-2#trace source lo1

Type escape sequence to abort.
Tracing the route to

 1 12 msec 24 msec 8 msec
 2 [AS 500] [MPLS: Labels 18/24 Exp 0] 60 msec 68 msec 64 msec
 3 [AS 500] [MPLS: Label 24 Exp 0] 52 msec 68 msec 44 msec
 4 [AS 500] 84 msec 56 msec 56 msec


Management of the correct WAN device is still working as well…

Trying ... Open

User Access Verification


Just for good measure, and to double check that our route-map is making a difference, let’s see what happens if we remove the import map from the CUST_2 VRF.

PE1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PE1(config)#ip vrf CUST_2
PE1(config-vrf)#no import map VRF_IMPORT_MAP
*Mar 1 00:27:45.259: %SYS-5-CONFIG_I: Configured from console by console
PE1#sh bgp vpnv4 unicast vrf CUST_2
BGP routing table entry for 500:2:, version 22
Paths: (2 available, best #1, table CUST_2)
Flag: 0x820
  Advertised to update-groups:
  100, imported path from 500:1: from (
      Origin incomplete, metric 0, localpref 100, valid, external, best
      Extended Community: RT:500:1 RT:500:500
  200 (metric 20) from (
      Origin incomplete, metric 0, localpref 100, valid, internal
      Extended Community: RT:500:2
      Originator:, Cluster list:
      mpls labels in/out nolabel/24

The offending route is imported into the CUST_2 VRF pretty quickly, proving that our route-map works. If the route map is put back in place, and we wait for the BGP Scanner to run (after 30 seconds or less) the vpnv4 prefix is blocked again:

PE1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PE1(config)#ip vrf CUST_2
PE1(config-vrf)#import map VRF_IMPORT_MAP
*Mar 1 00:29:51.443: %SYS-5-CONFIG_I: Configured from console by console
PE1#sh bgp vpnv4 unicast vrf CUST_2
BGP routing table entry for 500:2:, version 24
Paths: (1 available, best #1, table CUST_2)
Flag: 0x820
  Advertised to update-groups:
  200 (metric 20) from (
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Extended Community: RT:500:2
      Originator:, Cluster list:
      mpls labels in/out nolabel/24

This quirk shows just one way to successfully configure MPLS management and protect against misconfiguration. Give me a shout if anything was unclear or if you have any thoughts. As mentioned earlier, the GNS3 lab is available for download so have a tinker and see what you think.