All stitched up

Segment Routing is undoubtedly one of the most powerful tools in modern Service Provider networking. It introduces a source-based routing paradigm that allows ingress routers to stack instructions or "segments" onto packets. Using these, you can steer traffic through a network without the need for the signalling and state management that come with traditional MPLS Traffic Engineering.

This blog explores a scenario where traffic is steered into two sequential SR policies – essentially stitching them together. It assumes a solid understanding of the basic functionality of MPLS-based Segment Routing.

Here is the topology we will be working with:

It's a basic MPLS network, running ISIS + LDP in the core. VPNv4 routes are exchanged via the route reflector. CE1 and CE2 are customer devices connected via BGP to the Service Provider – placed inside VRF ACME.

I’ve built this lab in EVE-NG. If you have your own setup, you can download the lab and/or config files here to follow along:

The EVE-NG lab consists of:
11 x Cisco XRv 9k 7.9.2s and 2 x Cisco XE 17.03.02s
Login creds are user1/user123

The goal

As mentioned above, our goal here is to connect two SR policies together. The first policy will direct traffic from PE1 to PE5 using an explicit path. The second policy will direct traffic from PE5 to PE4 by dynamically avoiding red colored links.

Here it is in diagram form:

Whilst this is only a lab environment, this kind of traffic engineering could be used in larger environments to do tasks such as steering traffic towards DDoS scrubbers, avoiding maintenance links or crossing administrative boundaries.

If we were using MPLS RSVP to signal the separate paths, all the LSRs would need to reserve and maintain the required state. This would be done using RSVP Path and Resv messages (more details here). Segment Routing can accomplish this much more efficiently.

We’ll begin by looking at the base state of the network, then walk through the steps to enable SR, before finally creating the policies.

The setup

Let’s start by checking out the ISIS and LDP config. Here is a sample from PE1:

PE1 has a VRF called ACME configured with a BGP session to CE1:

The same type of session is configured between PE4 and CE2. This is all fairly stock-standard for a Service Provider environment. We can demonstrate the basic MPLS network by running a traceroute from CE1 to CE2 (sourcing from CE1's Loopback0 to emulate a LAN).

(NB. For the sake of this lab, the label ranges for each of the LSRs have been set to 24x00 – 24x99, where x is the identifier for that router – PE1 has identifier 1 etc.)

You can see the traffic is taking a standard ECMP path through the network. Now let's look at getting SR working.

Setting up SR

Enable Segment Routing

The first step is to enable SR and assign prefix SIDs (don't forget to configure a TE router ID as well!).

We’ll set the SRGB base to be 16000 across all devices and give sequential IPv4 indexes to each router (PE1 will be SID index 1 etc.). The IPv6 indexes will be the same but +600.

I’ve enabled the router ID using the router-id lo0 command here, which works under ipv4 and ipv6. An alternative is to use mpls traffic-eng router-id lo0. This might already be in place if you are migrating from traditional MPLS-TE, but it’s only applicable to the ipv4 address-family.
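If you're following along without the config files, the PE1 config would look something like the sketch below. The ISIS instance name CORE is my own placeholder, 16000–23999 happens to be the IOS-XR default SRGB anyway, and exact syntax can vary slightly between releases:

  segment-routing
   global-block 16000 23999
  !
  router isis CORE
   address-family ipv4 unicast
    router-id Loopback0
    segment-routing mpls
   !
   address-family ipv6 unicast
    router-id Loopback0
    segment-routing mpls
   !
   interface Loopback0
    address-family ipv4 unicast
     prefix-sid index 1
    !
    address-family ipv6 unicast
     prefix-sid index 601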

Once we repeat this for all the Service Provider routers, we can see that the MPLS forwarding table now prefers Segment Routing:

And indeed, if we repeat our traceroute we can see that SR labels are now used:

ECMP is still in effect, but since the transport label stays as 16004 the whole way, we’d need to look at the IP addresses to determine the exact path.

Populate SRTE Database

From here we need to populate the SRTE database so that any dynamic policies (in our case the one that avoids RED links) can calculate their best path. This is done using the distribute link-state command under ISIS:
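On IOS-XR this is a one-liner per device – a sketch, again assuming an ISIS instance named CORE:

  router isis CORE
   distribute link-state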

Once this is done across all the devices, we can see the topology by running the following command:

All devices within the same domain should show the same output. We are now ready to start setting up the policies.

Configure explicit SR Policy

The first thing to do is set up an explicit segment list that details the path we want the traffic to follow. In our case the path looks like this:

Here is the CLI:
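In case the original screenshot doesn't come through, the segment list would be along these lines. The list name is my own placeholder (older releases may or may not want the name keyword), and the labels are the prefix SIDs of PE6, P1 and PE5 in the order the path should be followed:

  segment-routing
   traffic-eng
    segment-list name PE1-TO-PE5
     index 10 mpls label 16006
     index 20 mpls label 16007
     index 30 mpls label 16005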

With this done, we can create an SR policy to reference the explicit path:
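Something like this sketch (the policy name is a placeholder; color 10 and the end-point of PE5's loopback are the values used in this lab):

  segment-routing
   traffic-eng
    policy TO_PE5
     color 10 end-point ipv4 10.1.1.5
     candidate-paths
      preference 100
       explicit segment-list PE1-TO-PE5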

So far so good. Let’s verify that it has come up okay:

We can see from the policy that it is up, but how do we steer traffic into it?

This is done by attaching a color community to the BGP route that matches the color of the policy. In our case, we'll tag 192.168.2.0/24 inbound from CE2 with color 10:
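For reference, the route-policy on PE4 would look roughly like this (a sketch only – the set and policy names are my own placeholders), applied inbound on PE4's VRF ACME session to CE2:

  extcommunity-set opaque COLOR-10
    10
  end-set
  !
  route-policy CE2-IN
    if destination in (192.168.2.0/24) then
      set extcommunity color COLOR-10
    endif
    pass
  end-policy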

Before we commit, here is what the prefix currently looks like in the BGP table:

The addition of the color can be seen once we commit:

Note that this has only been applied to 192.168.2.0/24, not 192.168.3.0/24:

PE1 sees the color as well:

The idea here is for traffic to 192.168.2.0/24 to be directed into our color 10 tunnel. If this is working, we should see the CEF table for the ACME VRF recursing to the Binding SID for our SR Policy (if you look above, the Binding SID is 24123)…

But here, it looks like it is still just imposing 16004 (the SID for R4) and then 24407 (the VPNv4 label for 192.168.2.0/24). It’s then ECMP’ing it out of Gi0/0/0/0, Gi0/0/0/3 and Gi0/0/0/2.

So what gives? Why is it not using our SR Policy?

Well, we have to remember that the allocation of a prefix to a policy is based on the combination of the end-point and the color. Looking at the BGP route, the color is correct, but the end point (or next-hop in BGP talk) is still 10.1.1.4 – PE4. Our policy is defined as having an end-point of 10.1.1.5!

So let’s fix that:
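I won't claim this is the only (or necessarily the original) way to do it, but one option is to rewrite the next-hop of the colored prefix so that it matches the policy end-point – for example with an inbound route-policy on PE1 (hypothetical policy name; this only makes sense here because we know PE5 will eventually hand the traffic on to PE4):

  route-policy RR-IN
    if destination in (192.168.2.0/24) then
      set next-hop 10.1.1.5
    endif
    pass
  end-policy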

Now we see that it is correctly steering down the SR policy:

The Binding SID has changed to 24125 since we refreshed the endpoint, but CEF is looking good.

However, whilst this is steering the traffic into the policy, it still won't get us all the way. If we trace from CE1 we can see that we just get stars:

The reason for this is fairly simple. This is the stack we are putting on the packet to CE2:

16006 (PE6 Prefix SID)
16007 (P1 Prefix SID)
16005 (PE5 Prefix SID)
24407 (VPN label)

As each segment is completed, the top label is popped. We can very quickly see that when the packet reaches PE5 the VPN label is exposed. But PE5 has no idea what to do with it! This VPN label was allocated by PE4 not PE5!

For our solution, this is okay at this point in the setup. Remember we will be wanting to push this traffic into a second policy that avoids all RED links and does end up at PE4.

For now, and just for the sake of getting our traceroute working, let’s add the PE4 label to the bottom of our explicit stack, so that PE5 can forward traffic on to PE4. We’ll remove this later:
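That's just one extra entry at the bottom of the existing segment list (16004 being PE4's prefix SID; list name carried over from the earlier sketch):

  segment-routing
   traffic-eng
    segment-list name PE1-TO-PE5
     index 40 mpls label 16004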

Now a traceroute works correctly:

For the explicit path to PE5 to work, we need to make sure that a label that PE5 is going to understand is exposed. To get that, we need to configure the second policy from PE4…

Configure dynamic SR Policy

This policy isn’t going to be explicitly defined. Rather, we’re going to define the conditions of the policy (namely to avoid red links) and let the head end router figure it out. The first step in creating a dynamic policy that avoids red links is to, well, configure some RED links!

As a reminder, these are the links we want to color red:

Before going any further, let’s get some clarity on the term color and the different ways it is used within the context of this lab.

Color

This scenario uses the term color to refer to multiple different things and it can get confusing if you don’t know what you’re looking at. The two ways we’re concerned with color are as follows:

Policy Coloring

The first is the color that we have already seen when defining an SR policy. This is an identifier for the policy. If a prefix is tagged with that color (in the case of BGP, it will be an attribute) and its next-hop matches the policy endpoint, traffic to that prefix will be steered into the policy. This is exactly how we've steered traffic into our explicit tunnel at PE1.

Link Coloring

The second way in which color is used is with regards to link coloring. Coloring a connection between two devices works by using something called link affinities (also called Admin Groups from the MPLS TE RSVP days). When we entered the distribute link-state command above, ISIS started to advertise Segment Routing details in its TLVs. This includes details about the links themselves – like metric, delay and link affinities. The link affinity is basically a bit string of ones and zeros that we can set and use however we wish. In this case, we're using the affinity to "color" a link. I put the word "color" in air quotes because, from a CLI perspective, the word color isn't actually referenced. We engineers use the term color because it's easy to visualise a link that way.

This section looks at the latter of the two color definitions. Within the CLI the link-affinity is referenced as an integer number. For our scenario, let’s make 7 represent RED. Here’s how it would look:
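A sketch of what that might look like on PE4 (the affinity-map name RED is arbitrary – as mentioned, only the bit position means anything to the router; the two interfaces are PE4's links to PE5 and P2):

  segment-routing
   traffic-eng
    affinity-map
     name RED bit-position 7
    !
    interface GigabitEthernet0/0/0/0
     affinity
      name RED
    !
    interface GigabitEthernet0/0/0/3
     affinity
      name RED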

NB. As a side note, if you are configuring colors for different Flex-Algos, that config would go under the IGP (ISIS) and not under segment-routing. I won’t detail this here, but the principle of link coloring, with or without Flex-Algo, is the same.

Now that we’ve colored the link we can verify it:

The output is a little messy, so I’ve piped the command and omitted the full result, but you can clearly see that the affinity bits (Admin groups) have been set for PE4’s links to PE5 and P2.

The next step is to configure the policy on PE5 to avoid RED links. This looks similar to the explicit policy we used before, but it instead uses the (surprise, surprise) dynamic keyword. We'll give the policy a color of 20 (in the SR policy sense, not the link color sense), just to differentiate it from the one on PE1 – although colors are only locally significant:
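A sketch of the dynamic policy on PE5 (policy name is a placeholder; 10.1.1.4 is PE4's loopback and RED is the affinity name from the previous step):

  segment-routing
   traffic-eng
    policy TO_PE4
     color 20 end-point ipv4 10.1.1.4
     candidate-paths
      preference 100
       dynamic
        metric
         type igp
       !
       constraints
        affinity
         exclude-any
          name RED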

Great. Now if we check the policy, we can see that it is up:

This might look to be working, but if you look at the SID list, it only appears to be adding 16004 onto the packet. This means it will simply send the traffic straight to PE4 without avoiding the RED links. To prove this, we can look at the LFIB forwarding behaviour for 16004. It just sends it out of the Gi0/0/0/1 interface (direct to PE4!)

The reason for this is simple: The end that the link is colored on matters. 

We’ve colored Gi0/0/0/0 and Gi0/0/0/3 on PE4. But nothing else. 

Gi0/0/0/1 on PE5 isn’t colored RED. This might seem like a limitation, having to color both ends, but it allows traffic in different directions to take different paths, which could be handy depending on the circumstance. To help visualise this, it might be easier to think of the color as being applied to the outbound interface rather than the link as a whole. To put this in diagram form, this is what we’ve done:

So in our case, traffic from PE5 to PE4 will not be considered to be crossing a red link (but traffic from PE4 to PE5 would). We don’t need any multi-directional differences in our lab, so to make this consistent, let’s color the interfaces on PE5 and P2 facing PE4.

With this corrected, here is what our policy on PE5 looks like:

This is looking much better! The policy is going via PE3 (10.1.1.3), through P2 (10.1.1.8). It is using the Node SID of P2 first, then the node SID of PE3.

Stitching two policies together

Now that we’ve got both policies working, we need to stitch them together at PE5. 

To recap, we now have:

  • The first policy, which uses an explicit path from PE1 to PE5
  • The second policy, which uses a dynamic path from PE5 to PE4, avoiding red links.

We steered traffic into the first policy by tagging 192.168.2.0/24 with a color attribute of 10, so that it matches the color (and endpoint) of PE1's explicit policy.

But how do we steer traffic into our second policy? Well, to understand this, we need to consider the different ways that traffic can be directed into SR policies:

Directing traffic into an SR Policy

An SR policy is all well and good, but it doesn’t mean much if you can’t actually steer traffic into it. We’ve already seen one way – namely by tagging a BGP route with the right color attribute. But there are other ways to accomplish this.

If the incoming packet is unlabelled you could use a static route, or some form of policy-based routing – pseudowires can also be configured to prefer a given SR policy, and so on.

But what we're interested in here is how incoming labelled traffic enters an SR policy. After all, traffic coming from PE1 to PE5 on the explicit path will arrive with labels.

The way to steer labelled traffic into an SR policy is to use what is called the Binding SID of a given policy. The Binding SID is a locally significant label that instructs the router to steer any arriving traffic carrying that label into the SR policy. An incoming packet with the Binding SID on top will have the Binding SID removed and the labels associated with that policy imposed onto it.

We've already seen a form of this earlier when looking at the CEF entry for our first policy. The CEF entry showed the local Binding SID being imposed. This will in turn apply the explicit segment list we specified.

So with this in mind, we need to make sure that traffic arriving at PE5 has the Binding SID for the SR policy that avoids red links. Re-checking the policy on PE5 shows that it has a Binding SID of 24529:

The Binding SID is automatically generated and comes from a dynamic pool – typically the same pool that LDP labels are pulled from. If PE5 were to reload, this number could change, meaning we'd have to change our policy on PE1. To avoid this, we can statically set the Binding SID as follows:

Whoops. This doesn't seem to have worked. It's unhappy with 24500, stating that there is a conflict. If we check our MPLS configuration we can see why:

Our dynamic label range has been set to 24500 – 24599. This is from when we had LDP configured. We can't set an explicit Binding SID from within a dynamic range. The explicit Binding SID should come from the SRLB, which defaults to 15000 – 15999.

We’ll allocate 15005 as the Binding SID:
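Again as a sketch, under the same (placeholder-named) policy on PE5:

  segment-routing
   traffic-eng
    policy TO_PE4
     binding-sid mpls 15005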

Brilliant. Now that we’ve got the Binding SID set, the final step is to change the policy from PE1 to ensure that when traffic arrives at PE5, it has SID 15005 on top.

Remember we previously added 16004 to PE1’s policy. This was just so that PE5 had something it recognised once traffic reached it and our test traceroute could work. We’ll remove that first and replace it with 15005. 
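Using the placeholder names from earlier, the change on PE1 would be something like:

  segment-routing
   traffic-eng
    segment-list name PE1-TO-PE5
     no index 40
     index 40 mpls label 15005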

Looking good. Let’s try our traceroute from CE1 to CE2:

It works! We can see the traffic following the PE6 → P1 → PE5 explicit path before being steered, via the 15005 label, into the PE5 → PE4 dynamic path that avoids the red links. The 24407 is the VPNv4 label advertised by PE4 for 192.168.2.0/24.

As proof of concept, we can see that tracing to 192.168.3.50 (Loopback1) on CE2, whose BGP route does not have a color 10 attribute, traverses the normal ECMP path we saw at the beginning:

Here is the final diagram to visualise the result:

So that's it! There are a lot of different options that SR gives us to steer traffic intelligently and smoothly across a network. This lab has shown us but one of the methods at our disposal. Further steps might be to implement PCEP for increased scalability or introduce more dynamic routing options like performance-measurement – but I'll leave this variation for a possible later blog.

Thanks so much for reading. Let me know what you think or if you have any comments. Until next time.

The Label Switched Path Not Taken

With the increasing adoption of Segment Routing as the label distribution method used by Service Providers, there will inevitably be clashes with the tried and true LDP protocol. Indeed, interoperation between SR and LDP is one of the most important features to consider when introducing SR into a network. But what happens if there is no SR-LDP interoperation to be had? This is definitely the case for IPv6, since LDP for IPv6 is, more often than not, non-existent. This is exactly what this quirk will explore. More specifically, it explores how two different vendors, namely Cisco and Arista, tackle an LSP problem involving native IPv6 MPLS2IP forwarding.

I'll begin by showing the topology and then give an example of how basic IPv4 SR to LDP interoperation works. We'll then look at a similar scenario using IPv6 and explore how each vendor behaves. There is no "right answer" to this situation, as neither vendor violates any RFC (at least none that I can find), but it is an interesting exploration of how each approaches the same problem.

Setup

I’ll start with a disclaimer, in that this quirk applies to the following software versions in a lab environment:

  • Cisco IOS-XR 6.5.3
  • Arista 4.25.2F

There is nothing I have seen in either Release Notes, or real world deployments that would make me think that the behaviour described here wouldn’t be the same on the latest releases – but it’s worth keeping in mind. With that said, let’s look at the setup…

The topology we will look at is as follows:

An EVE-NG lab and the base configs can be downloaded here:

The EVE lab has all interfaces unshut. The goal here is for the CE subnets to reach each other. To accomplish this R1 and R6 will run BGP sessions between their loopbacks. In this state, IPv6 forwarding will be broken – but we’ll explore that as we go!

I've made the network fairly straightforward to allow us to focus on the quirk. Every device except for R5 runs SR with point-to-point L2 ISIS as the underlying IGP. The SRGB base is 17000 on all devices. The IPv4 Node SID for each router is its router number. The IPv6 Node SID is the router number plus 600. R5 is an LDP-only node and as such will need a mapping server to advertise its Node-SID throughout the network – R6 fulfils this role.
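As an aside, a mapping-server sketch on R6 might look like the following. I'm assuming R5's loopback is 10.1.1.5 (following the addressing scheme used elsewhere in the lab) and an ISIS instance named CORE – treat this as illustrative rather than the exact lab config:

  segment-routing
   mapping-server
    prefix-sid-map
     address-family ipv4
      10.1.1.5/32 5 range 1
  !
  router isis CORE
   address-family ipv4 unicast
    segment-routing prefix-sid-map advertise-local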

To explore the quirk, we will look at forwarding from R1 to R6, loopback-to-loopback. Notice that R3 (a Cisco) and R4 (an Arista) sit on the SR/LDP boundary. IPv4 will be looked at first, to help explain the interoperation between SR and LDP. Once that is done, I’ll demonstrate how each vendor handles IPv6 forwarding differently, which results in forwarding problems.

For now we won't look at the PE-CE BGP sessions since, without the iBGP sessions, our core control plane is broken.

To get the lay of the land, let's check the config on R6 (our destination) and make sure R1's LFIB is programmed correctly.

The same SRGB is configured on all devices. We can see that R6 has SID index 6 for its IPv4 loopback and index 606 for its IPv6 loopback.

With these two pieces of information, we'd expect the CEF table on R1 to use 17006 and 17606 to forward to R6's IPv4 and IPv6 loopbacks respectively…

So far so good. Let’s start with a full IPv4 traceroute and see how SR interoperates with LDP.

IPv4 connectivity and LDP interoperability

We'll look at this by first examining Cisco's behaviour, so let's shut down the R2 to R4 Arista link (Gi0/0/0/2)…

From here, we do a basic traceroute:

Here’s a visual diagram of what is happening:

Let’s look a bit closer at what is happening here. How does R3 program its LFIB? For segment routing the LFIB programming works like this:

  • Local label: The Node SID + the SRGB Base (in our case 6 + 17000 = 17006)
  • Outbound label: The Node SID + the SRGB Base of the next-hop for that prefix

Now if the next-hop was SR capable, and the SRGB was contiguous through the domain (e.g. it was 17000 everywhere), the outbound label would be 17006 as well. But here, R5 is not SR capable. It isn't advertising any SR TLV information in its ISIS LSPDUs. But it does have LDP sessions with all of its neighbors, including R3.

As you might be able to predict, this is where the SR to LDP interoperation comes into play. R5 will have advertised its local label for 10.1.1.6 to R3.

So R3 will use this instead! The basic principle is as follows:

Interworking is achieved by replacing an unknown outbound label from one protocol by a valid outgoing label from another protocol.

SR is basically “inheriting” from LDP. R3’s forwarding table looks as follows:

The local label is the SR label and the outbound label is the LDP label of 18. You can see from the traceroute that this is indeed the label used.

NB. I’ve missed a couple of details here, namely how LDP to SR works in the opposite direction in conjunction with mapping server statements. These don’t directly relate to our quirk here, since we’re focusing on R1 to R6 traffic, but I’d recommend reading the Segment Routing Book Series found on segment-routing.net to get the full details of SR/LDP interoperability.

Now that we've verified forwarding through the Cisco, we'll switch to the Arista path to ensure that the behaviour is identical. First we'll shut down R2's uplink to R3 (Gi0/0/0/1) and unshut its uplink to R4 (Gi0/0/0/2), before rerunning the traceroute:

Success! It's using the LDP label 18 just as before, and if we run some of the Arista CLI commands we see similar inheritance behaviour to the Cisco:

Here’s the diagram:

So IPv4 looks solid. If no SR, then fall back to LDP. But what happens if we use IPv6? There is no LDP for IPv6. More importantly though… what should happen? Let’s explore what both vendors do and then you can make up your own mind.

IPv6 Connectivity and LDP

Let’s flip back to Cisco and see what the traceroute looks like:

Here we see something a bit unexpected. R3 is actually popping the top label and forwarding it on natively:

But why is this? If you have an incoming label but no outgoing label isn’t that, by definition, a broken LSP? So why does Cisco forward the packet natively?

Well, for this I’m going to take a quote from the Segment Routing Part 1 book (again found on segment-routing.net). Granted this isn’t an RFC, but it does a good job of explaining the Cisco IOS-XR behaviour.

If the incoming packet has a single label … (the label has the End of Stack (EOS) bit set to indicate it is the last label), then the label is removed and the packet is forwarded as an IP packet. If the incoming packet has more than one label … then the packet is dropped and this would be the erroneous termination of the LSP that we referred to previously.

Segment Routing, Part 1 by Clarence Filsfils, Kris Michielsen, et al.

What’s happening with R3 here is MPLS2IP behaviour (since the LSP is ending and the packet is being forwarded natively). Based on the above, I believe the rule that R3 is following when deciding how to forward the incoming packet works like this:

  • If there is one SR label with the EoS bit set, then forward the packet on natively
  • Else, treat it as a broken LSP and drop it

Both CEF and the LFIB reflect this behaviour with Unlabelled as the outgoing label:

But why forward it on if there is only one label?

I believe that Cisco is making the assumption that if there is only one label, that label is likely to be a transport label. That would imply that the underlying IPv6 address is an endpoint loopback address in the IGP, which any subsequent P router would most likely know. This allows traffic to be forwarded on in a brownfield migration scenario similar to our lab.

If there is more than one label, then it would seem prudent to drop it, as any underlying labels are likely to be VPN or service labels that the next P router would not understand.

I can’t be sure that this is the reasoning Cisco were going for, but it seems reasonable to me.

Now that we know how Cisco does it, let's look at how Arista tackles the same scenario. Just like with IPv4, we'll flip the path and retry the traceroute:

No Network Engineer ever likes to see a broken traceroute. But clearly something isn’t getting through.

Looking at the LFIB output from the Arista starts to give us an idea:

Unlike Cisco, there is no outgoing entry in the LFIB on the Arista for 17606.

Interestingly though, tracing does work directly from R4 to R6:

This ping is a case of IP2IP forwarding. Arista, being aware that it has no label for the next-hop, forwards it natively. It's similar to Cisco, but Cisco has the aforementioned MPLS2IP rule that bridges the two parts.

To begin troubleshooting the Arista, let’s check the basics. We already know that there is no LFIB entry for 17606. We’d expect to see it at the bottom of this table…

Perhaps R4 is not getting the correct Segment Routing information. We know from our initial config check that R6 is configured correctly. When SR is enabled on a device (no matter the vendor), an SR-Capability sub-TLV is added under the Router Capability TLV. This essentially signals that the device is SR capable, along with various other SR attributes.

We can see that R4 is aware that R6 is SR enabled and gets all of the correct Node-SID information:

So far so good. But why no LFIB entry? Well, for us to understand what is happening here, we need to understand how Arista programs its LFIB.

When Arista forwards using Segment Routing, the entry is first assigned in the SR-bindings table and only then does it enter the LFIB. We can see it makes it into the SR-bindings table:

But why does it not then enter the LFIB? I believe that what is happening here is that it fails to program the LFIB based on the rules outlined above, namely:

  • Local label: The Node SID + the SRGB Base (in our case 6 + 17000 = 17006)
  • Outbound label: The Node SID + the SRGB Base of the next-hop for that prefix

The outbound label can't be determined since the IGP next hop to 2001:1ab::6 is R5, a device that isn't running SR. With no LDP to inherit from (since there is no LDP for IPv6) and without a special rule to forward natively (like Cisco has), the LFIB is never programmed and the packet is dropped!

Note that in the above SR-bindings table there are Remote Bindings. But these are all from R2 (the Peer ID of 1111.1111.0002 is the ISIS system ID of R2), which is not the IGP next hop.

NB. If you are doing packet traces on a physical appliance, the "show cpu counters queue summary" command will reveal the "CoppSystemMplsLabelMiss" packet counter incrementing as traffic is dropped during the traceroute. I've omitted it here as the command won't work in a virtual lab environment.

So who is correct?

The obvious question at this point becomes: who is correct? Yes, the traffic gets through the Cisco, but isn't it kind of violating the principle of a broken LSP? After all, if LDP goes down between two devices in an SP core, don't we want to avoid using that link? Isn't that the idea behind things like LDP-IGP Sync? What if Cisco forwards an MPLS2IP packet natively and the next hop sends the packet somewhere unintended? I imagine situations like this would be rare, but maybe Arista is right to play it safe and drop it?

I've tried to find an authoritative source by going on an RFC hunt – in the hope of using it to determine what behaviour ought to be followed.

Unfortunately, I couldn’t find a direct reference in any RFC. The closest reference I could get was a brief mention in RFC 8661 in the MPLS2MPLS, MPLS2IP, and IP2MPLS Coexistence section.

The same applies for the MPLS2IP forwarding entries. MPLS2IP is the forwarding behavior where a router receives a labeled IPv4/IPv6 packet with one label only, pops the label, and switches the packet out as IPv4/IPv6.

RFC 8661 Section 2.1

This does little more than reference the existence of MPLS2IP forwarding. It certainly doesn't tell us the correct behaviour. If anyone knows of an authority to resolve this, please feel free to let me know! Unfortunately, at this stage, each vendor appears free to program whatever forwarding behaviour it likes.

To that end, I put it to you, what do you think is the best behaviour in scenarios like this?

My personal preference is the Cisco option, because it allows for brownfield migrations like the one we encountered. Without an IPv6 label distribution tool, or without reverting to 6PE, I believe this behaviour is warranted. The most likely worst-case scenario is that the next hop router will simply discard the packet due to not having a route – however, I concede there might be scenarios where this could be problematic.

Until there is consistency between vendors we’ll need ways to work around scenarios like this. Let’s take a look at a few.

Solutions

Sadly most of the solutions to this are suitably dull. They either involve removing the IPv6 next hop, the label, or both…

Remove the IPv6 Node SID
This is perhaps the simplest option. By removing the IPv6 node SID from R6, R1 would have no entry for R6 in its LFIB and as a result would forward the traffic natively. We can demonstrate this by doing the following:
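On R6 this is just a matter of removing the IPv6 prefix SID under ISIS – a sketch, assuming an instance name of CORE:

  router isis CORE
   interface Loopback0
    address-family ipv6 unicast
     no prefix-sid index 606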

Weight out the Arista link
I'll only mention this in passing since, whilst it does allow the Arista to remain live in the network for IPv4 traffic, weighting the link out is more akin to avoiding the problem rather than solving it. Here's an example of changing the ISIS metric:
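For example, on R2's interface towards R4 (a sketch – the metric value is arbitrary; it just needs to be high enough that the Arista path is never preferred):

  router isis CORE
   interface GigabitEthernet0/0/0/2
    address-family ipv4 unicast
     metric 1000
    !
    address-family ipv6 unicast
     metric 1000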

Reverting to 6PE
You might ask why this lab chooses to use native BGP IPv6 in the first place, rather than 6PE. Other than wanting to future-proof the network and utilise all the benefits that come with SR, the real-world scenario upon which this blog is loosely based involved a 6PE bug. Basically, a 6PE router was allocating one label per IPv6 prefix rather than a null label. This resulted in label exhaustion issues on the device in question. The details are beyond the scope of this blog, but it provides a little context. Regardless, if we use 6PE, the next hops are now IPv4 loopbacks. This allows the normal SR/LDP interoperability to take place as outlined above. Here is the basic config and verification:
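For reference, the 6PE side of this on R1 would look roughly like the IOS-XR sketch below (the AS number is a placeholder; 10.1.1.6 is R6's IPv4 loopback, and the mirror-image config goes on R6):

  router bgp 64512
   address-family ipv6 unicast
    allocate-label all
   !
   neighbor 10.1.1.6
    remote-as 64512
    update-source Loopback0
    address-family ipv6 labeled-unicast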

Implement Segment Routing using IPv6 Headers
I'll only mention this in passing as I've not seen it implemented in the wild and am not sure if my lab environment would support it. Suffice it to say that if this could be implemented, it would remove the need for LDP labels entirely.

6PE from P router to PE
I toyed with this idea out of curiosity more than anything else. I’d not recommend this for a real world deployment, but the general idea is as follows:

  • Run 6PE between the trouble P router (in our case R4) and the destination PE router (R6)
  • The PE router would advertise its IPv6 loopback over the 6PE session.
  • The P router would filter its 6PE session to only accept PE loopback addresses
  • The P router would set the Administrative Distance of the IGP to be 201, so it would prefer the iBGP AD of 200 to reach the end point.

The idea is that as soon as the traffic reaches the P router, the next hop to the IPv6 end point is not seen via the IGP but rather via the 6PE session. The result would be that the incoming IPv6 SR label is replaced with two labels – the bottom label is the 6PE label for the IPv6 endpoint address, the top label is the transport label for the IPv4 address of the 6PE peer (and SR to LDP interoperability can take over here). This might be scalable if the P router were running 6PE to all the PEs via a route reflector and an inbound route-map only allowed in their next-hop loopbacks.

To put this in diagram form, it would look like this:

Unfortunately my virtual Arista didn’t support 6PE. I did test the principle on my Cisco P device (R3) and it seemed to work. Here is the basic config and verification:
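Roughly speaking, the R3 side would look like the sketch below (AS number, route-policy and prefix-set names are my own placeholders; the key parts are the IPv6 labeled-unicast session to R6, the inbound filter that only accepts PE loopbacks, and the raised IGP distance so that the 6PE-learned /128 wins):

  prefix-set PE-LOOPBACKS
    2001:1ab::6/128
  end-set
  !
  route-policy PE-LOOPBACKS-ONLY
    if destination in PE-LOOPBACKS then
      pass
    else
      drop
    endif
  end-policy
  !
  router bgp 64512
   address-family ipv6 unicast
   !
   neighbor 10.1.1.6
    remote-as 64512
    update-source Loopback0
    address-family ipv6 labeled-unicast
     route-policy PE-LOOPBACKS-ONLY in
  !
  router isis CORE
   address-family ipv6 unicast
    distance 201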

R3 now sees next hop via 6PE BGP session with 6PE label:

Traceroute confirms 6PE path:

18 is R5's LDP label for 10.1.1.6 and 16300 is the 6PE label for 2001:1ab::6. Whilst this technically would work, it would also mean that R3 would always use 6PE to any IPv6 endpoint it received over the 6LU session – it would, however, allow the IPv6 Node SID and native IPv6 session to remain in place.

Again, this was a thought exercise more than anything else – I wouldn’t recommend this for a live deployment without a lot more testing.

Conclusion

Here, we've seen an unexpected scenario whereby Cisco forwards single-labelled SR packets natively, but Arista treats it as a broken LSP – as it arguably is. This causes problems for MPLS2IP forwarding of IPv6 packets in a brownfield migration. Ultimately it comes down to the way in which each vendor chooses to implement its LFIB. At the time of writing, the only real solutions are to remove the IPv6 next-hop, the label, or both – however, it will be interesting to see moving forward whether SRv6, or perhaps even some inter-vendor consensus, could resolve this interesting quirk. Thanks so much for reading. Thoughts and comments are welcome as always.

TI-LFA FTW!

Having fast convergence times is one of the most important aspects of running any Service Provider network. But perhaps more important is making sure that you actually have the necessary backup paths in the first place.

This quirk explores a network within which neither Classic Loop Free Alternate (LFA) nor Remote-LFA (R-LFA) provide complete backup coverage, and how Segment Routing can solve this using a technology called Topology Independent Loop Free Alternate (or TI-LFA).

This blog comes with a downloadable EVE-NG lab that can be found here. It is configured with the final TI-LFA and SR setup, but I’ll provide the configuration examples for both Classic-LFA and R-LFA as we go. We’ll just look at link protection in this post, but the principle for node and SRLG protection is similar.

I’ll assume anyone reading is well versed in the IGP + LDP Service Provider core model but will give a whistle stop introduction to Segment Routing for those who haven’t run into it yet…

Segment Routing – A Brief Introduction

Segment Routing (or SR) is one of the most exciting new technologies in the Service Provider world. On the face of it, it looks like “just another way to communicate labels” but once you dig into it, you’ll realise how powerful it can be.

This introduction will be just enough to get you to understand this post if you’ve never used SR before. I highly recommend checking out segment-routing.net for more information.

The best way to introduce SR is to compare it to LDP. So to that end, here’s a basic diagram as a reminder of how LDP works:

R3 is advertising its 1.1.1.3/32 loopback via ISIS and LDP communicates label information. This should hopefully be very familiar.

So how does SR differ?

Well, the difference is in how the label is communicated. Instead of using an extra protocol, like LDP, the label information is carried in the IGP itself. In the case of ISIS, it uses TLVs. In the case of OSPF it uses opaque LSAs (which are essentially a mechanism for carrying TLVs in and of themselves).

This means that instead of each router allocating its own label for every single prefix in its IGP routing table and then advertising these to its neighbors using multiple LDP sessions, the router that sources the prefix advertises its label information inside its IGP update packets. Only the source router actually does any allocating.

Before I show you a diagram, let’s get slightly technical about what SR is…

Segment Routing is defined as … "a source-based routing paradigm that uses stacks of instructions – referred to as Segments – to forward packets". This might seem confusing, but basically Segments are MPLS Labels (they can also be IPv6 headers but we'll just deal with MPLS labels here). Each label in a stack can be thought of as an instruction that tells the router what to do with the packet.

There are two types of Segments (or instructions) we need to be concerned with in this post. Node-SIDs and Adj-SIDs.

  1. A Node-SID in a Service Provider network typically refers to a loopback address of a router that something like BGP would use as the next-hop (e.g. the "self" in next-hop-self). This is analogous to the LDP label that each router in an ISP core would assign for a given IGP prefix. The "instruction" for a Node-SID is: forward this packet towards the prefix along your best (ECMP) path to it.
  2. An Adj-SID represents a router's IGP adjacency. An Adj-SID carries the "instruction": forward this packet out of the interface toward this neighbor.

So how are these allocated and advertised?

Like I said before, they use the TLVs (or opaque LSAs) of the IGPs involved. But the advertisements of Adj-SIDs and Node-SIDs do differ slightly…

Adj-SIDs are (by default) automatically allocated from the default MPLS label pool (the same pool used to allocate LDP labels) and are simply advertised inside the TLV "as is". There is more than one type of Adj-SID for each IGP neighbor… they come in both protected/unprotected and IPv4/IPv6 flavours. This post will only deal with IPv4 unprotected, so don't worry about the others for now.

The Node-SID is a little more complicated. It is statically assigned under the IGP config and comes from a reserved label range called the SRGB or Segment Routing Global Block. This is a different block from the default MPLS one and the best way to understand it is to put it in context…

Let's say your SRGB is 17000-18999. This is a range of 2000 labels. And let's say that each router in the network gets a Node-SID based on its router number (e.g. R5 gets Node-SID 17005 etc.). When a router advertises this information inside the TLV, it breaks it up into several key pieces of information:

• The Prefix itself – a /32 when dealing with IPv4.
• A globally significant Index – this is just a number. In our case R5 gets index 5 and so on…
• The locally significant SRGB Base value – the first label in the Segment Routing label range. In our case the SRGB base is 17000.
• A locally significant Range value stating how big the SRGB is – for us it’s 2000

So for any given router its overall SRGB is the Base plus the Range. And because both the Base and the Range are locally significant, so is the SRGB.

What does this local significance mean?

Well, it obviously means that it can differ from one device to another… so when I said above "Let's say your SRGB is 17000-18999", I assumed that all devices in the network have the same SRGB (and by extension the same Base and Range) configured. Again, to best understand this, let's continue within our current context…

Just like LDP each router installs an In and Out label in the LFIB for each prefix:
• The In label is the Index of that prefix plus its own local SRGB.
• The Out label is the Index plus the downstream neighbor’s SRGB.

Let’s put this in diagram form to illustrate. In the below diagram we are following how R4 advertises its loopback and label information:

Looking at the LSPDU in the diagram above R4 is advertising….

  • The 1.1.1.4/32 Prefix
  • An Index of 4
  • An SRGB Base of 17000
  • A Range of 2000
  • An unprotected IPv4 Adj-SID for each of its neighbors (R3 and R5) – other Adj-SIDs have been left out for simplicity

Now let's consider how R2 installs entries into its LFIB for 1.1.1.4/32. R2's best path to reach 1.1.1.4/32 is via R3, so it takes the information from both R4's LSPDU and R3's LSPDU…

In Label
R2’s In label for 1.1.1.4/32 is 17000 (its own SRGB Base) plus 4 (the Index from R4.00-00) = 17004.

Out Label
R2’s Out label for 1.1.1.4/32 is 17000 (R3’s SRGB Base which it would have got from R3.00-00) plus 4 (the Index from R4.00-00) = 17004.

You can very quickly see that if every device in the network has 17000-18999 as its SRGB, then the label will remain the same as traffic is forwarded through the network, because the In and Out labels will be the same!

From here we can illustrate how SR is a source routing paradigm. Let us assume that in our test network traffic comes into R1 and is destined for R5. However for some reason, we have a desire to send traffic over the R4-R5 link even though it is not the best IGP path. R1 can do this by stacking instructions (in the form of labels) onto the packet that other routers can interpret.

Here’s how it works:

R1 has added {17004} as the top label and {16345} as the bottom-of-stack label. Don't worry about how the label stack is derived. There are multiple policies and protocols that can facilitate this – but that is another blog in and of itself!

If you follow the packet through the network you can see that R2 is essentially doing a swap operation to the same label. R3 is then performing standard PHP before forwarding the packet to R4 (PHP works slightly differently in Segment Routing – I won’t detail it here but for our scenario it operates the same as in LDP). R4 sees the instruction of {16345} which tells it to pop the label and send it out of the interface for its adjacency to R5 (regardless of its best path).

This illustrates a number of advantages for SR:

• The source router is doing the steering.
• There is no need to keep state. In a traditional IGP+LDP network this kind of steering is typically achieved using MPLS TE and involves RSVP signalling to the head end router and back. With SR the source simply instantiates the label stack and forwards the packet.
• No need to run LDP. You’ve now one less protocol to run and no more LDP/IGP Sync issues.
• IPv6 label forwarding: IPv6 LDP never really took off. Cisco IOS-XR routers are able to advertise node-SIDs for both IPv4 and IPv6 prefixes. This can eliminate the need for technologies like 6PE.

Before moving on I’ll briefly show how, for example, SR on R4 would be configured for IOS-XR:
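A sketch of what that looks like (the ISIS instance name is my placeholder; the SRGB and index values match the ones used in this post):

  segment-routing
   global-block 17000 18999
  !
  router isis LAB
   address-family ipv4 unicast
    metric-style wide
    segment-routing mpls
   !
   interface Loopback0
    address-family ipv4 unicast
     prefix-sid index 4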

There are more types of SIDs and many more applications for Segment Routing than I have shown here – but if you’re new to SR, this brief summary should be enough to help you understand TI-LFA. With that said, let’s look at how LFA and its different flavours work…

LFA Introduction

Loop Free Alternate is a Fast Reroute (FRR) technology that basically prepares a backup path ahead of a network failure. In the event of a failure, the traffic can be forwarded onto the backup path immediately (the typical goal is sub-50ms) with minimal downtime. It can be thought of as roughly analogous to EIGRP's feasible successor.

The best way to learn is by example. So what I’ll do is walk through Classic-LFA, Remote-LFA and TI-LFA showing how each improves on the last. First, however, I’ll introduce some terminology before we get to the actual topologies:

PLR (Point of Local Repair) – The router doing the protection. This is the router that will watch for outages on its protected elements and initiate FRR behaviour if needed.
Release Point – The point in the network where, with respect to the destination prefix, the failure of the Protected Element makes no difference. It's the job of the PLR to get traffic to the Release Point.
C, Protected Element – The part of the network being protected against failure. LFA can provide Link Protection, Node Protection or SRLG (Shared Risk Link Group) protection. This post only covers Link Protection, but the principle is the same.
D, Destination prefix – LFA is done with respect to a destination. So when presenting formulas and diagrams, D, will refer to the destination prefix.
N, Neighbor – A neighbor connected to the PLR that (in the case of Classic-LFA) is a possible Release Point for an FRR solution.
Post-convergence path – Refers to the best path to the destination, after the network has converged around the failure.

If a failure occurs at a given PLR router, it does the following:

  1. Assumes that the rest of the network is NOT aware of the failure yet – i.e. all the other routers think the link is still up and their CEF entries reflect that.
  2. Asks itself where it can send traffic such that it will not loop back or try to cross the Protected Element that just went down – or to put it another way, where is the Release Point?
  3. If it has a directly connected Neighbor that satisfies the previous point, then send it to that Neighbor. Nice and easy, job done. This is Classic-LFA.
  4. If, however, the Release Point is not directly connected, traffic will need to be tunnelled to it somehow – this is R-LFA and TI-LFA.

Now that we’ve introduced the basic mechanism, let’s start with Classic-LFA.

Classic-LFA

Here is our starting topology:

The link we’re looking to protect is the link between R6 and R7. The prefix we will be protecting is the loopback of R3 (10.1.1.3/32). Traffic will be coming from R8. This makes R7 the PLR. We haven’t implemented SR yet so we’re just working with a standard IGP + LDP model.

Download the Classic-LFA configs here.

So, if you are protecting a single link Classic-LFA (also called local-LFA) can help. The rule for Classic-LFA is this (where Dist(x,y) is the IGP cost from x to y before the failure):

Dist(N, D) < Dist (PLR, N) + Dist (PLR, D)

If the cost for the Neighbor to reach the Destination is less than the cost of the PLR to reach the Neighbor plus the cost of the PLR to get to the Destination, the Neighbor is a valid Release Point. In short, this means that traffic won’t loop back or try to use the Protected Element.

Here’s the idea using our topology:

Prior to the failure traffic takes the R8-R7-R6-R2-R3 path.

When the R6-R7 link fails R7 must figure out where to send the traffic.

It can't send it to R11. Remember, R11 isn't aware of the outage yet and its best path to R3 is back via R7, so it will simply loop it back. Or to put this in the formula:

Dist(N, D) < Dist (PLR, N) + Dist (PLR, D)
Dist(R11, R3) < Dist (R7, R11) + Dist (R7, R3)
40 < 10 + 30 FALSE!

But it can send it to R2! R2’s best path to R3’s loopback doesn’t cross the R6-R7 link:

Dist(N, D) < Dist (PLR, N) + Dist (PLR, D)
Dist(R2, R3) < Dist (R7, R2) + Dist (R7, R3)
10 < 100 + 30 TRUE!

So R2 is the valid Release Point and if R6-R7 fails R7 can forward traffic immediately over to R2.

Configuration and Verification of Classic-LFA

The below IOS-XR config shows the basic IS-IS config from R7’s point of view (I’ve left out the config for IPv6 for brevity, but all attached configs and the downloadable lab contain IPv6).
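The key line is fast-reroute per-prefix on the interface we want to protect – a sketch, with the ISIS instance and interface names being my assumptions:

  router isis LAB
   interface GigabitEthernet0/0/0/0
    address-family ipv4 unicast
     fast-reroute per-prefix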

Shortcomings of Classic-LFA

There are two common shortcomings with Classic-LFA:

• The backup path is sub-optimal. The cost for R7 to reach R3 is now R7-R2-R3 = 110. It would be more efficient to go R7-R11-R10-R9-R6-R2-R3 = 70. This is indeed the Post Convergence Path shown in the diagram above.
• Coverage is not 100%. If the link to R2 was not present, there would be no Classic-LFA backup path, since nothing satisfies the formula. If there's no directly connected neighbor that satisfies the formula, nothing can be done!

Remote-LFA can help with some of these problems. To demonstrate this let’s remove the R2-R7 link…

Remote-LFA

Our topology now looks like this:

You can see there is no Classic-LFA path here if link C goes down. But there obviously is a backup path (namely R11-R10-R9-R6-R2-R3); if all we had was Classic-LFA we'd have to wait for the IGP to converge, which in most networks takes too long. R-LFA can step in and help, but in order to explain how it does so, we first need to define a couple of terms to describe the network from the point of view of the PLR: P and Q space…

P and Q Space

The P-space and the Q-space are a collection of nodes within the network that have a specific relationship to the PLR or Destination, with respect to the Protected Element. This sounds complicated but I’ll walk through it. P and Q don’t stand for anything – they’re just arbitrary letters. Let’s start with the P-space

P-space: In our context, the definition of P-space is “The set of nodes such that, the shortest-path from the PLR to them does not cross the Protected Element.”

This basically represents the set of devices such that R7’s best path to them, doesn’t cross the R6-R7 link.

So to figure this out…

Start at R7 and for every other router in the network figure out R7’s best path to reach it.
Does its best path cross the R6-R7 link? (this includes all ECMP paths too!)
• yes? – then it is not in the P-space
• no? – then it is in the P-space

In our network the P-space contains these routers:

Note that even though R7 could reach R10 via R11 with a total cost of 30, it also has an equal-cost (ECMP) path of cost 30 via R6 and R9, which disqualifies it.

So that’s the P-space, but what about the Q-space…?

Q-space: The formal definition of Q-space is “the set of nodes such that their shortest-path to the Destination does not cross the Protected Element.”

This basically represents the set of devices that can get to R3 (the Destination) without worrying about whether or not the R6-R7 link has failed. They are basically on the other side of the failure with respect to the Destination – or in other words, they are candidate Release Points.

So to figure this out…

Go to each router in the network and figure out its best path to reach R3.
Does its best path cross the R6-R7 link? (again, this includes all ECMP paths too!)
• yes? – then it is not in the Q-space
• no? – then it is in the Q-space
So in our network the Q-space contains these routers:

What we want is a place where P and Q overlap. If we can get it to that router we can avoid the downed Protected Element and get the traffic to the Destination.

But in our setup they don’t overlap!

However, we can use something called the extended P-space to increase our reach. So what is the extended P-space?

Think about the network from R7's point of view. R7 can't control what other routers do once it sends a packet on its way. But it can decide which of its interfaces it sends the packet out of. This allows us to consider not just our own P-space, but also the P-space of any directly connected Neighbors that exist in our own P-space. Adding all these together forms what we call the extended P-space.

In short, the extended-P space from the point of view of any node (in our case R7), is its own P-space + the P-space of all of its directly connected P-space Neighbors.

So for R7, we calculate the P-space for R4, R11 and R8 (we don’t calculate the P-space of R6, since R6 is not in R7’s P-space).

The P-space for R4 and R8 are identical:

The P-space of R11 is a little bigger:

Again, to reiterate how we calculate R11’s P-space, look at each device in the network and include it if R11’s best path to reach it doesn’t cross C.

If we combine these, we get our extended P-space:

Now if we combine the extended P-space and the Q-space we have an overlap at R10!

Any nodes in the overlap are called PQ nodes – these will be valid Release Points. If there is more than one, R7 will select the nearest. But how do we get the traffic there? If the R6-R7 link failed and the PLR simply sent traffic to R11, it would send it straight back (remember, R11 isn't aware of the failure yet). Here's where R-LFA and its tunnelling kick in…

Remote-LFA Tunnelling

Download the R-LFA configs here.

Once the PQ node is found, the PLR will prepare a backup path whereby it puts the protected traffic in an LSP that ends on the PQ node. This is done by pushing a label on top of the stack.

But before it does that it must do the regular LDP swap operation for the original LSP to the Destination (R3). Under normal LDP conditions, it would swap the incoming label with the local label of its downstream neighbor (learned via LDP). But in this case R7 doesn't have an LDP session to the PQ node (R10)… so it builds a targeted one!

Over this targeted LDP session the PQ node tells the PLR what its local label is for the destination prefix. It is this label that the PLR swaps the transport label for, before it pushes on the label that will forward traffic to the PQ node.

To put this in diagram form for our example:

In our example, R10 tells R7 what its local LDP label is for R3's loopback. R7 swaps the transport label for this tLDP-learned label and then pushes R11's label for R10 on top and forwards it to R11.

All of these tLDP sessions and calculations are done ahead of time, so that switching to the backup tunnel is as fast as possible.

Configuration and Verification for R-LFA

Here’s the IOS-XR configuration and verification output for R-LFA:
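The R-LFA part boils down to one extra line on the protected interface, telling the router it may build a tLDP tunnel to a remote PQ node – again a sketch with placeholder instance and interface names:

  router isis LAB
   interface GigabitEthernet0/0/0/0
    address-family ipv4 unicast
     fast-reroute per-prefix
     fast-reroute per-prefix remote-lfa tunnel mpls-ldp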

So that’s R-LFA. It can help to reach a Release Point if it isn’t a directly connected neighbor. It’s also worth noting that R-LFA can use SR labels if they are available, rather than using tLDP.

Shortcomings of R-LFA

But there are shortcomings with R-LFA too:

• There is increased complexity with all of the tLDP sessions running everywhere.
• The backup path still might not be the post-convergence path – meaning that traffic will be forwarded in a suboptimal manner while the network converges and will then need to switch to the new best path once convergence is complete.
• Coverage is still not 100% – there might not be a PQ overlap!

We’ll look at just such a case with no PQ overlap next…

TI-LFA and SR

First off, let's assume that we have removed LDP from our network and configured SR instead. I won't go through the process of turning LDP off and turning SR on beyond briefly showing this configuration:
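In sketch form, for R7 (instance name and index assumed; sr-prefer is what makes the router install the SR label rather than the LDP one when both exist):

  router isis LAB
   address-family ipv4 unicast
    segment-routing mpls sr-prefer
   !
   interface Loopback0
    address-family ipv4 unicast
     prefix-sid index 7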

The downloadable lab for this blog has both SR and LDP configured but SR is preferred.

Now we’ll make another change to our topology by increasing the R9-R10 metric as follows:

This is a subtle change. But if we go through the process of calculating the P and Q spaces we get the following:

You can see there is no PQ overlap (this includes R7’s extended P-space). Let’s try to fix this using R-LFA…

We can't do what we did last time and use R10 as the node to build our tLDP session to. If we tunnel traffic to R10, what label do we put at the bottom of the stack?

If we use R10's local label for R3, it will forward the traffic straight back to R11, trying to use the R7-R6 link. This is precisely what it means to not be in the Q-space! It's worth noting here that since we're now using SR, R10's local label for R3 will be the globally recognised 17003 label – but the problem will be the same either way, since R10's best path to R3 crosses the R7-R6 link.

Ok, but what about R9…? If we try to tunnel traffic to R9, R7 will have to send traffic to R11 with a top label of R11's local label for R9 (again an SR label, namely 17009). But what will R11 do when it gets this packet? It will send the traffic straight back to R7 (again trying to use the R7-R6 link) – if this wasn't the case, R11 would be in the Q-space!

So what do we do? Well, here's where the power of Segment Routing steps in. Topology Independent Loop Free Alternate (or TI-LFA) utilizes Adj-SIDs to bridge the PQ space gap. Remember, Adj-SIDs are locally generated labels communicated via the IGP TLVs that act as instructions to forward traffic to their local neighbors.

R7 does the following:
• Calculates what the best path to R3 would be if the link from R6-R7 were to go down (or in other words, it calculates the post-convergence path) – in this case it sees the best path is R7-R11-R10-R9-R6-R2-R3.
• Calculates the Segment List needed to forward traffic along this path – assuming other nodes in the network will not yet be aware of the R6-R7 failure.
• Installs this Segment List as the backup path.

The details of the algorithm used to calculate the Segment List are not publicised by Cisco, but in our case the general principle is straightforward to grasp.

The topmost label (or segment) gets the traffic to the border P node – R10.

The next label is the Adj-SID that R10 has for R9. It basically instructs R10 to pop the Adj-SID label and forward the packet out of its adjacency to R9 – this is the PQ bridging in action.

The bottom of stack label is simply the Node-SID for R3. When R9 gets it, we know it will forward it on to R3 without crossing the protected link, because it is in the Q-space.

To put this in diagram form, we get the following:

Once TI-LFA is enabled all of this is calculated and installed automatically. A key thing to highlight here is that the backup path is the same as the post-convergence path. This means that traffic will not have to change its path through the network again when the IGP converges. The only thing that will change is the label stack.

Configuration and Verification of TI-LFA

TI-LFA is pretty easy to configure, and verification is straightforward when you know what to look for…
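The configuration is literally one line on top of what we had for Classic-LFA – a sketch for R7's protected interface (interface and instance names assumed):

  router isis LAB
   interface GigabitEthernet0/0/0/0
    address-family ipv4 unicast
     fast-reroute per-prefix
     fast-reroute per-prefix ti-lfa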

Demonstration

To close off this blog I'll give a packet capture demonstration of TI-LFA in action. In my EVE-NG lab environment, if I set a constant ping from R8 to R3 before shutting down the R6-R7 link, the IGP actually converges too fast for me to capture any TI-LFA encapsulated packets. To fix this problem, I updated the PLR as follows:
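The update was along these lines (a sketch; the 10000 is in milliseconds):

  router isis LAB
   address-family ipv4 unicast
    microloop avoidance segment-routing
    microloop avoidance rib-update-delay 10000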

I'm not going to go into detail on what microloop avoidance is here. But put briefly, a microloop is, as the name suggests, a very short-term routing loop caused by the fact that different routers will update their forwarding tables at different rates after a network change. Microloop avoidance is a mechanism that uses Segment Routing to detect and avoid such conditions. The main takeaway here, though, is the rib-update-delay command. This instructs the router to hold the LFA path in the RIB for a certain period regardless of whether or not it could converge quicker. In our case we're instructing R7 to keep forwarding traffic along the TI-LFA backup path for 10 seconds (10,000 milliseconds) after the R6-R7 failure.

Once this was sorted, I started a packet capture on R11’s interface facing R7 and repeated the test…

If we look at the PCAP on R11 we can see the correct labels are being used on the ICMP packets.

Checking R7 before the 10 seconds are up shows a backup TI-LFA tunnel in use:

The Prefix field shows all of the IGP destinations for which R7 will use this tunnel. Note that 10.1.1.3/32 is in that list. You can try this yourself in the downloadable lab.

Conclusions

So that’s TI-LFA using SR! I’ve tried to present this blog as a basic introduction to LFA types as well as a demonstration of how powerful SR can be. There are more nuanced scenarios involving a mix of SR-capable and LDP-only nodes, but LDP/SR interoperation is another topic entirely. We’ve seen how traditional technologies like Classic LFA and R-LFA are adequate in most circumstances, but TI-LFA with the power of SR can provide complete coverage. Thank you for reading.

It’s not easy building GRE

The importance of having backup paths in a network isn’t a revelation to anyone. From HSRP on a humble pair of Cisco 887s to TI-LFA integration on an ASR9k, having a reliable backup path is a staple for all modern networks.

This quirk looks at the need for a backup path on a grand scale. We’ll look at a hypothetical scenario of a multi-national ISP losing a backup path to a whole region and how, as a rapid response solution, it builds a redundant path over a Transit Provider…

Scenario

So here is our hypothetical Tier 2 Service Provider network. It is spread across three cities in three different countries and has various Peering and Transit connections throughout:

blog12_image1_scenario

IS-IS and LDP are run internally. This includes the international links, resulting in one contiguous IGP domain. What’s important to note here is that New York only has a single link to the other countries and only has one Tier 1 Transit Provider.

The quirk

To setup this quirk, we need a link failure to take place. Let’s say a deep-sea dredger rips up the cable in the Atlantic going from New York to London.

blog12_image2_failure

New York doesn’t lose access – it still has a link to the rest of the network via Paris. However, that link to Paris is now its sole connection to the rest of the network. In other words, New York no longer has that all-important backup path. The situation is exacerbated when you learn that a repair boat won’t be sent to fix the undersea cable for weeks!

So what do you do?

You could invest in more fibre and undersea cabling to connect your infrastructure – arguably you should’ve already done this! But placing an order for a Layer 2 Service or contacting a Tier 1 Provider to set up CsC takes time. By all means, place the order. But in the meantime, you’ll need to set something up quickly in case the New York to Paris link fails and New York becomes completely isolated.

One option, and indeed the one we’ll explore in this blog, is to reconnect New York to London through your transit provider without waiting for an order or even involving them at all…

I should preface this by stating that this solution is neither scalable nor sustainable. But it is most definitely an interesting and… well… quirky workaround that can be deployed at a pinch.

With that said, how do we actually do this?

In order to connect New York to London over another network, we’ll need to implement tunnelling of some kind. Specifically, we’ll look at creating a GRE tunnel between the MSEs in New York and London using the Tier 1 Transit Provider as the underlay network.

To put it in diagram form, our goal is to have something like this:

blog12_image3_plan

To guide us through this setup, I’ll tackle the process step by step using the following sections:

  • Tunnel-end point Reachability
    • Control Plane: The IPs of endpoints of the GRE tunnel will need reachability over the Tier 1 Transit Provider.
    • Data Plane: Traffic between the endpoints must be able to flow. This section will examine what packet filtering might need adjusting.
  • GRE tunnel configuration
    • Control Plane: This covers the configuration and signalling of the GRE tunnel
    • Data Plane: We’ll need to look at MTU and account for the additional overhead added by the GRE headers.
  • Tunnel overlay protocols: Making sure IS-IS and LDP can be run over the GRE tunnel, including the proper transport addresses and metrics.
  • Link Capacity: This new tunnel will need to be able to take the same amount of traffic that typically flows to and from New York. Given that our control over this is limited, we’ll assume that there is sufficient bandwidth on these links. 
  • RTBH: Any Remotely Triggered Black Hole mechanisms that have been applied to your Transit ports may need to have exceptions made so they do not mistake your own traffic for a DDoS attack.
  • Security: You could optionally encrypt the traffic transiting the Transit Providers network.

The goal of this quirk is to explore the routing and reachability side of the scenario so I will discuss the first 3 of the above points in detail and assume that link capacity, RTBH and security are already accounted for.

Downloadable Lab

This quirk considers the point of view of a large Service Provider with potentially hundreds of routers. However, in order to demonstrate the configuration specifics and allow you to try the setup, I’ve built a small-scale lab to emulate the solution. I’ve altered some of the output shown in this blog to make it appear more realistic. As such, the lab and output shown in this post don’t match each other verbatim. But I’ll turn to the lab towards the end in order to do a couple of traceroutes, and for the most part the IP addressing and configuration match closely enough for you to follow along.

I built the lab in EVE-NG so it can be downloaded as a zipped UNL file. I’ve also provided a zip file containing the configuration for each node, in case you’re using a different lab emulation program.

blog12_image4_evenglab

With that said, let’s take a look at how we’d set this up…

Tunnel-end point Reachability – Control Plane

We’ll start by putting some IP addressing on the topology:

blog12_image5_ipaddressing

(the addressing used throughout will be private, but in a real world scenario it would be public)

Now we might first try to build the tunnel directly between our routers, using 10.1.1.1 and 10.2.2.1 as the respective endpoints. But if we try to ping and trace from one to the other we see it fails:

From the traceroute we can see that traffic is entering an LSP in the Tier 1 MPLS core. The IPs we see are likely the loopback addresses of the P routers along the path. In spite of this limited visibility we can see the traffic isn’t reaching its destination – it is stopping at 10.117.23.23. But why?

Well, we don’t have full visibility of the Tier 1 Provider’s network, but they are likely restricting access to the transport subnets used to connect to their downstream customers. This is a common practice and is designed to, among other things, prevent customers from having access to networks that their Tier 1 edge devices exist on. Traffic should never go to these transport addresses; it should only go through them.

This means that when we see the traceroute stop at 10.117.23.23, this may very well be an ACL or filter on T1-MSE2 blocking traffic to 10.2.2.0/30 from an unauthorised source.

blog12_image6_t1filter

As a result of this, we’ll have to advertise a subnet to the Tier 1 Provider from each side and use addresses from those ranges as the tunnel endpoints. Tier 1 Providers typically don’t accept any prefix advertisement more specific than a /24, so in this case we’ll have no choice but to sacrifice two such ranges – one for each site. See why I mentioned this is not scalable?

Before we allocate these ranges, I’m going to assume the following points have already been fulfilled:

  • The /24 address ranges are available and unused: let’s say we have a well documented IPAM (IP Address Management) system and can find a couple of /24s.
  • The Transit Provider will accept the prefixes we advertise – meaning our RIR records are up to date and any RPKI ROAs are correctly configured so that the Transit Provider will not have any problems accepting the /24s advertised over BGP.

With these assumptions in place, we’ll start by allocating a subnet for each site (again, pretend these are public):

  • New York subnet: 172.16.10.0/24
  • London subnet: 172.16.20.0/24

At this point we need to see what the Transit Provider sees in order to make sure our routes are being received correctly. For this we’ll use a Looking Glass. To illustrate this, I’ve invented a hypothetical Transit Provider called TEAR1 Limited (a bad pun I know, but fit for our purposes 😛 ). I’ve put together images demonstrating what a Looking Glass website for TEAR1 might look like. First, we’ll specify that we want to find BGP routing information about 172.16.10.0:

blog12_image7_LGform

After clicking submit, we might get a response similar to what you see below:

blog12_image8_LGprechange

So what can we make out from the above output…? We can see, for example, that T1-LG1 is receiving the prefix from what looks like a Route Reflector (evidenced by the presence of a Cluster List in the BGP output) and from what is probably the T1-MSE1 Edge Router that receives the prefix from our ISP. The path via the Edge Router is being preferred, since it has a Cluster List length of zero. T1-LG1 could itself be a Reflector within TEAR1. It’s difficult to tell much more without knowing the full internal topology of TEAR1, but the main takeaway here is that it sees a /16, rather than the more specific /24s. This is fine at this point – we haven’t configured anything yet. And indeed if we check NY-MSE1 we can see that we are originating a /16 as part of our normal prefix advertisements to Transit Providers:

Here we’re using a null route for the subnet. This is common within Service Providers. The null route is redistributed into BGP and advertised to TEAR1. This isn’t a problem since within our Service Provider there will be plenty of subnets within the 172.16.0.0/16 supernet – meaning there will always be more specific routes to follow. We could have alternatively used BGP aggregation.

Regardless of how this is done, we need to configure NY-MSE1 to advertise 172.16.10.0/24 to TEAR1 and LON-MSE1 to advertise 172.16.20.0/24. We could use static null routes here too, but remember the goal is to have these IPs be the endpoints of the GRE tunnel. With this in mind, we’ll use loopbacks and advertise them into BGP using the network statement. The overall configuration is as follows:

(I’ve only shown the config for the New York side but the London side is analogous – obviously replacing 172.16.10.0 with 172.16.20.0. To save showing duplicate output, I will sometimes only show the New York side, but know that in those cases, the equivalent substituted config is on the London side.)
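As a rough sketch – the exact loopback address and our own ASN aren’t shown in this post, so treat both as placeholders – the New York side could look something like this:

interface Loopback10
 description GRE tunnel endpoint within the range advertised to TEAR1
 ipv4 address 172.16.10.1 255.255.255.0
!
router bgp 64500
 address-family ipv4 unicast
  ! originate the /24 using the connected loopback route
  network 172.16.10.0/24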

To advertise these ranges to TEAR1 we’ll need to adjust our outbound BGP policies.

I’ll pause here to note that filtering on both the control plane and data planes at a Service Provider edge is a complex subject. The CLI I’ve shown here is grossly oversimplified just to demonstrate the parts relevant to this quirk (including a couple of hypothetical communities that could be used for various routing policies). The ACL and route policies in a real network will be more complicated and cover more aspects of routing security, including anti-spoofing and BGP hi-jack prevention. That being said, here is the config we need to apply:
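To give an idea of the shape of the change, here is a heavily simplified sketch of the outbound policy on NY-MSE1. All of the names are hypothetical, and the second prefix-set simply stands in for whatever is already being advertised (such as the /16):

prefix-set GRE-ENDPOINT-RANGE
  172.16.10.0/24
end-set
!
! tag the new /24 with no-export so TEAR1 does not propagate it further
route-policy TEAR1-OUT
  if destination in GRE-ENDPOINT-RANGE then
    set community (no-export) additive
    pass
  elseif destination in EXISTING-ADVERTISEMENTS then
    pass
  endif
end-policy

The policy would then be applied outbound on the TEAR1 neighbor in the usual way.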

And indeed if we soft clear the BGP sessions and check the Looking Glass again, we can see that TEAR1 now sees both /24 subnets.

blog12_image9_LGpostchange

You might’ve noticed in the above output the inclusion of the no-export community. This is done to make sure that TEAR1 does not advertise these /24s to any of its fellow Tier 1 Providers and pollute the internet further. By “polluting the internet” I mean introducing unnecessary prefixes into the global internet routing table. In this case, we are adding two /24s which, from the point of view of the rest of the internet, aren’t needed since we’re already advertising the /16. We can’t make TEAR1 honour the no-export community, but it is a reasonable precaution to put in place nonetheless.

This covers control plane advertisements to TEAR1. But we also need to think about how LON-MSE1 sees NY-MSE1’s loopback and vice versa. Each MSE should see the GRE tunnel endpoint of the other over the TEAR1 connection. Depending on how we perform redistribution, we might see these tunnel endpoints in iBGP or in our IGP. But we don’t want them to see each other over our own core, or else that is the path the tunnel will take!

In addition to this, we don’t want TEAR1 to advertise our own IP addresses back to us. Most ISPs filter against this anyway, and even if we didn’t, BGP loop prevention (seeing our own ASN in the AS_PATH attribute) would prevent our MSEs from accepting them.

The cleanest way to ensure NY-MSE1 has a best path for 172.16.20.0 via TEAR1 (and vice versa) is a static route:
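On IOS-XR this could be as simple as the following on NY-MSE1 – the next hop assumes T1-MSE1’s side of the transit /30 is 10.1.1.2, and LON-MSE1 would have the mirror image for 172.16.10.0/24:

router static
 address-family ipv4 unicast
  ! reach the remote tunnel endpoint range via the TEAR1 transit link only
  172.16.20.0/24 10.1.1.2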

This will make sure that the tunnel is built over the Transit Provider. The data plane needs to be adjusted next, to allow the actual traffic to flow…

Tunnel-end point Reachability – Data Plane

Many Service Providers will apply inbound filters on Peering and Transit ports both on the control plane and data plane. This is done to, among other things, prevent IP spoofing. For example, any outbound traffic should have a source address that is part of the Service Provider’s address space (or from the PI space of any of its customers). Similarly, any inbound traffic should have a source address that is not part of their customer address space. These are the types of things we’ll need to consider when opening up the data plane for our GRE tunnel.

For this scenario, let’s assume we block inbound traffic that is sourced from our own address space. This would prevent spoofed traffic crossing our network, but it would also block traffic from one of our GRE tunnel endpoints to the other. We’ll need to adjust the inbound ACL and, to make this as secure as possible, we should only allow GRE traffic from and to the relevant /32 endpoints.

I’ve omitted the full details of this ACL, since in real life it would be too long to show. However here is the addition we’d need to make (assuming no reject statements before index 10):
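Sticking with the New York side, and assuming the tunnel endpoints sit at .1 of each /24 and the inbound ACL is called something like TRANSIT-IN, the addition could look like this:

ipv4 access-list TRANSIT-IN
 ! permit GRE from the London endpoint to the New York endpoint only
 10 permit gre host 172.16.20.1 host 172.16.10.1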

We’re now in a position to test connectivity – remembering to source from loopback10 so that LON-MSE1 has a return route:

Success, it works! We have loopback to loopback reachability across TEAR1 and we didn’t even need to give them so much as a phone call. To put this in diagram format, here is where we are at:

blog12_image10_beforeGRE

We’re now ready to configure the GRE tunnel.

Configuring the Tunnel – Control Plane

The GRE configuration itself is fairly straight forward. We’ll configure the tunnel endpoint on each router, specifying the loopbacks as the sources. We also need to allocate IPs on the tunnel interfaces themselves. These will form an internal point-to-point subnet that will be used to establish the IS-IS and LDP neighborships we need. Let’s allocate 192.168.1.0/30, with NY-MSE1 being .1 and LON-MSE1 being .2.

The config looks like this:
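In case you’re following along without the lab files, here is a sketch of the NY-MSE1 side – the destination assumes London’s endpoint is 172.16.20.1, and LON-MSE1 would mirror this with .2 on the tunnel subnet:

interface tunnel-ip1
 description GRE to LON-MSE1 over TEAR1
 ipv4 address 192.168.1.1 255.255.255.252
 tunnel source Loopback10
 tunnel destination 172.16.20.1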

Checking the GRE tunnel shows that it is up:

So now we have the GRE tunnel up and running. Before we look at the MTU changes, I want to demonstrate how the MTU issue manifests itself by configuring the overlay protocols first…

Tunnel overlay protocols – LDP

The LDP configuration is, on the face of it, quite simple.
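On IOS-XR it amounts to little more than adding the tunnel interface under the LDP process – something along these lines:

mpls ldp
 ! run LDP over the GRE tunnel as well as the usual core interfaces
 interface tunnel-ip1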

This will actually cause the session to come up; however, on closer inspection the setup is not a typical one (in this output NY-MSE and LON-MSE have loopback0 addresses of 2.2.2.2/32 and 22.22.22.22/32 respectively).

To explain the above output it’s worth quickly reviewing LDP (see here for a cheat sheet). LDP Hellos, with a TTL of 1, are sent to 224.0.0.2 (the all-routers multicast address) out of all interfaces with LDP enabled. This includes the GRE tunnel. This means that LON-MSE1 and NY-MSE1 establish a Hello Adjacency over the tunnel. Once this Adjacency is up, the router with the lower LDP Router ID will establish a TCP session to the transport address of the other (from its own transport address). The transport address is included in the LDP Hellos. This address defaults to the LDP Router ID, which in turn defaults to the highest numbered loopback. This allocation of the Router ID only occurs once (on IOS-XR) when LDP is first initialised, and from then on only when the existing Router ID is changed. As a result, depending on the order in which loopbacks and LDP are introduced, the Router ID might not necessarily be the current highest loopback address. In the output above we can see that the TCP session is established between the loopback0s of each router. This is the loopback used to identify the node and is used for things like the source address of iBGP sessions. What’s key here is that each router’s path to the other router’s loopback0 address is internal – over the IGP. This means the TCP session is established over our own core.

blog12_image11_LDPthoughParis

This isn’t a problem initially – LDP would come up and labels would be exchanged. But if our last link to New York via Paris goes down, the TCP session will drop. It should come back up, provided IS-IS is configured over the GRE tunnel, but this kind of disruption, combined with delays associated with LDP-IGP synchronisation, could result in significant downtime.

The wiser option is to configure the transport addresses to be the GRE tunnel endpoints. This will ensure the TCP session is established over the GRE tunnel from the start…
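A sketch of what that could look like on NY-MSE1 is below, setting the transport address for the tunnel interface only (the endpoint address is assumed, as before):

mpls ldp
 interface tunnel-ip1
  address-family ipv4
   ! advertise the GRE endpoint as the transport address on this interface
   discovery transport-address 172.16.10.1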

Once this is done we can see the transport address changes accordingly:

It’s also a good idea to manually configure the LDP Router ID to ensure that transport address connectivity is not reliant on an automatic process. (I found when labbing this scenario that the LDP Router ID was, at times, defaulting to the loopback10 used for the GRE tunnel. Since I was not redistributing this loopback into IS-IS, the neighboring P router had no route to this address and could not establish the TCP session. Hardcoding the LDP Router ID to loopback0’s address solved this.)
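That hardcoding is a one-liner under the LDP process, using NY-MSE1’s loopback0 address from the earlier output:

mpls ldp
 router-id 2.2.2.2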

Now that LDP is up, we can move on to the IGP configuration.

Tunnel overlay protocols – IS-IS

The IS-IS configuration across the GRE tunnel is done just like on any other interface. The only thing to remember here is to set the metric high enough that, under normal circumstances, traffic will go via Paris (output from here on is taken from the downloadable lab, in case you notice subtle differences from the output given so far).
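A sketch of the tunnel interface under IS-IS is shown below. The instance name and the exact metric are assumptions – the point is simply that the metric must be worse than the normal path via Paris:

router isis CORE
 interface tunnel-ip1
  point-to-point
  address-family ipv4 unicast
   ! deliberately poor metric so the tunnel only carries traffic when Paris is unavailable
   metric 1000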

Once IS-IS comes up, you might notice the following log message:

This leads us to the last topic we need to explore, MTU…

MTU

To consider MTU, let’s first see what the MTU currently is:

In IOS-XR the 14 bytes of the Layer 2 header need to be accounted for (6 bytes for the Source MAC + 6 bytes for the Destination MAC + 2 bytes for the Ethertype), so the 1514 bytes in the above output equates to a Layer 3 MTU of 1500.

If we look at the GRE tunnel MTU it too shows 1500, and unlike the physical interface MTU it doesn’t need to account for the Layer 2 header.

But what the MTU in the above output doesn’t show is that the tunnel has to add a 24 byte GRE header, reducing the effective MTU of the link. If we use the following command we can see that the Layer 3 IPv4 MTU is 1476 (1500 - 24):

This explains the IS-IS log message we were seeing:

IS-IS is trying to send a packet that is bigger than 1476 bytes and so it must reduce the MTU size. This doesn’t have an impact on the IS-IS session in our case, but is worth noting.

To better understand the exact MTU breakdown, let’s visualise what happens when a packet arrives at NY-MSE1 headed for the GRE tunnel.

The packet arrives looking like this:

blog12_image12_incomingPacket

The MSE will then add the 24 bytes of GRE headers before sending it to the Transit Provider, making the packet look like this:

blog12_image13_greFormat

So what is the impact of this? Will traffic be able to flow over the link?

Well, in short, yes. But hosts will tend to send packets of 1500 bytes (which includes the IP header). With the additional label and GRE headers in place, the packet will be fragmented as it is sent over TEAR1.

To illustrate this, I’ll turn to a pcap from the EVE lab used to simulate this scenario. We’ll look at what happens if the link to Paris actually fails and our GRE tunnel is brought into action!

The lab contains a customer VRF with two sites, A and B. Loopback1 on the CE at each site is used to represent a LAN range. We can ping or trace from one site to another and watch its behaviour across the core. The lab includes a single link between the London and New York parts of the network used to represent the path via Paris. If we bring this link down, traffic will start to flow over the GRE tunnel. We can then do a pcap on the NY-MSE1 to T1-MSE1 link to see the fragmentation. Here’s the setup:

blog12_image14_labCapture

First bring down our link to London:

Then do a trace and ping while pcap’ing the outbound interface to T1-MSE1:

The PCAP shows fragmentation taking place on the ICMP packets:

blog12_image15_pcap

Indeed, if we ping with the DF bit set we see it doesn’t get through:

Fragmentation should generally be avoided where possible. To that end, we’ll need to adjust the MTU to allow our end hosts to send at the usual 1500 bytes without fragmentation.

We will need to add 32 bytes to the existing 1514 MTU. The breakdown is as follows:

  • Original IP Header and Data – 1500
  • VPN Label – 4
  • Transport Label – 4
  • GRE Header – 24
  • Layer 2 Headers – 14
    • TOTAL: 1546

So if we set the MTU to 1546 we’ll be able to send a packet of 1500 bytes across our core. Remember, for the GRE tunnel we don’t need to bother accounting for the 14 bytes of Layer 2 overhead:
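The changes themselves are small – something along these lines, with the physical interface name obviously depending on your own topology:

interface GigabitEthernet0/0/0/0
 ! 1500 payload + 4 VPN label + 4 transport label + 24 GRE + 14 Layer 2 = 1546
 mtu 1546
!
interface tunnel-ip1
 ! 1500 payload + 4 + 4 + 24 = 1532 (no Layer 2 header counted on the tunnel)
 mtu 1532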

Once this is done we can see that sending exactly 1500 bytes from our customer site works:

One final point to note here is that the Layer 3 MTU of the interface on the other end of the Transit link must be at least 1532, otherwise IS-IS will not come up across the GRE tunnel. This is unfortunately something that, as the Tier 2 Provider, we don’t control. In fact, any lower MTU along the parts of the path that we don’t control could result in fragmentation. The best we can do in this case is to configure it and see. If IS-IS isn’t up after the MTU changes, we would have to revert and simply put up with the fragmentation.

So that’s it! An interesting solution to a somewhat rare and bespoke problem. Hopefully this blog has provided some insight into how Service Providers operate and how various technologies interrelate with one another to reach an end goal of maintaining redundancy. Again, I stress that this solution is not scalable, but I think it’s an entertaining look into what can be accomplished if you think outside the box. Feel free to download the lab and have a play around. Maybe you can see a different way to approach the scenario? Thoughts and feedback are welcome as always.

Routing loop shambles

Hey everyone! It’s been a while since I posted anything, but I’ve come across this interesting quirk in my studies which I think would be of interest for anyone studying OSPF, BGP and how they work together. Comments and thoughts are welcome as always.

This blog introduces the concept of OSPF sham-links and how they can be used to influence OSPF routes across an MPLS core. It also explores how, if not used carefully, routing loops could occur with disastrous effects. 

As a reminder, once I’ve set up the scenario, I’ll go through the quirk (explaining the problem), the search (finding a solution) and the work (implementing the solution) as usual.

Scenario

This scenario looks at a standard MPLS customer with two sites. These sites use OSPF as the PE-CE routing protocol and have a backdoor link between them over which OSPF is run – joining both sites into area 0.

The diagram looks like this:

blog11_image1_initial_scenario

I’ve labbed this in GNS3 and all routers are IOS-XE devices except for XR1 and XR2 which, as the names suggest, are IOS-XR boxes.

LAN ranges have been simulated using loopbacks. Each PE is doing redistribution from OSPF into MP-BGP (internal, external 1 and external 2) and from MP-BGP into OSPF.

The design goal here is to have both sites connected in OSPF area 0 using the backdoor link as a backup – with traffic normally preferring to go over the MPLS network (or OSPF super backbone). XR1 and R1 should back each other up. Only if both of these are down should traffic traverse the backdoor link.

I’ll first introduce the problems inherent in the default behaviour as shown in the diagram above – focusing on how R4 and R5 would reach LAN1 (192.168.70.0/24) on R7. I’ll then go into how a sham-link can help solve these problems. However, as we will see in the quirk, if sham-links aren’t applied correctly some problems could appear.

OSPF and MPLS

We’ll start by looking at how OSPF and MPLS interact. For now, let’s assume the backdoor link is shutdown.

OSPF is being used between the PEs and CEs. So the PEs find themselves redistributing from OSPF into MP-BGP. When this is done, MP-BGP will set these OSPF specific community/values into the resulting VPNv4 prefix:

  • The domain ID – this is an extended community taken from the process ID on the router and is considered when redistributing back into OSPF (more on that below).
  • The route-type – an extended community broken up into 3 parts: the area, the LSA type and an additional option.
  • The OSPF router id – another extended community representing the router sourcing this VPNv4 prefix.
  • The OSPF cost is copied to the MED value.

Here we can see the output from R3 as it has redistributed the OSPF route for LAN 1 into BGP:

You can see the Domain ID field is set to 0x0005:0x000000010200. The 00000001 section represents process ID 1. MED is 2 – this represents the OSPF cost of 2 to reach LAN1. The RT is  0.0.0.0:2:0 and router-ID is 3.3.3.3:0.

NB. IOS-XR doesn’t encode the domain ID by default. For this scenario we will assume it has been configured on XR1 using the following commands:
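The commands in question have roughly this shape – the value mirrors the extended community shown above, but treat the exact syntax as an approximation rather than gospel:

router ospf 1
 vrf A
  domain-id type 0005 value 000000010200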

What’s important to consider here is how the PEs on the other end of the MPLS network redistribute this back into OSPF on the other side.

When the MP-BGP prefix is redistributed back into OSPF by either R1 or XR1, it uses the domain ID to determine if the route should appear as inter-area or external (I’m using colour coding here to help with differentiating between area descriptions… and because trying to read inter and intra when they occur in the same sentence makes my head hurt). If the Process ID section of the Domain ID in the VPNv4 prefix matches the local OSPF process ID on the PE doing the redistribution, then the prefix will be sent into OSPF using an inter-area Type 3 LSA. If it doesn’t, it will be an external Type 5 LSA.

In our setup, the Domain ID and Process ID all match – so when R4 and R5 receive the Type 3 LSA they see it as inter-area:

This all looks well and good. It’s worth pointing out here, that OSPF has a preference for which path to select based on the route types. The order of preference is as follows*:

  • Intra-Area (O)
  • Inter-Area (O IA)
  • External Type 1 (E1)
  • NSSA Type 1 (N1)
  • External Type 2 (E2)
  • NSSA Type 2 (N2)
(* This is for Cisco IOS software older than 15.1(2)S. During and after 15.1(2)S sees the E and N orders reversed. This isn’t relevant to this blog but worth noting)

It doesn’t matter what the OSPF cost is. If OSPF has the option of an intra-area route over an inter-area or external route, it will pick the intra-area option every time. Keeping that in mind, let’s bring up the backdoor link and see what happens…

The backdoor link

You might already be able to predict that as soon as we bring up the backdoor link, R4 and R5 will immediately see LAN1 as an intra-area route:

You may also have spotted that the previous Type 3 LSA is no longer present. This is because the PE routers that were doing the redistribution from MP-BGP now prefer the local OSPF path. MP-BGP (iBGP from the reflectors in this case) has an administrative distance of 200. OSPF has an administrative distance of 110. OSPF wins and since redistribution takes place from the RIB, there are no MP-BGP routes to redistribute into OSPF:

Now you might be asking why I bothered to outline the difference between the PE redistributing the BGP prefix as inter-area versus external, if the R4 and R5 are just going to pick the intra-area route regardless. Well this becomes relevant when we consider how we are going to make the MPLS core the preferred path to reach LAN1.

As it stands at the moment, no matter how high we set the metric on the link between R5 and R7, traffic from Site 2 to LAN1 will always go over the backdoor link. In short, we need a way to make an intra-area route appear over the MPLS core. Here’s where sham-links come in.

Sham-Links

A sham-link is similar to an OSPF Virtual-Link, but it can be run in any area and is designed for just these types of scenarios. Essentially, the PEs at either end establish an OSPF neighborship and consider themselves to be directly connected within the same area. This will allow Type 1 and Type 2 LSAs to appear over MPLS – simulating a point-to-point connection between PEs. Let’s look at how this is set up…

Each PE creates a new loopback and puts it into vrf A. The sham-link is configured between these loopbacks.

Here’s the diagram and config for the setup:

blog11_image2_sham_link_initial
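As a rough sketch of the two ends (R3’s loopback33 address isn’t given in this post, so 33.33.33.33 is a stand-in; 111.11.11.11 is XR1’s loopback111, discussed just below):

! R3 (IOS-XE)
interface Loopback33
 vrf forwarding A
 ip address 33.33.33.33 255.255.255.255
!
router ospf 1 vrf A
 area 0 sham-link 33.33.33.33 111.11.11.11
!
! XR1 (IOS-XR)
interface Loopback111
 vrf A
 ipv4 address 111.11.11.11 255.255.255.255
!
router ospf 1
 vrf A
  area 0
   sham-link 111.11.11.11 33.33.33.33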

Now it’s important to pause here and highlight a key requirement: we need to make sure that each PE has reachability to the other’s sham-link loopback over MPLS but not over OSPF. To that end, we should not enable OSPF on the PEs’ new loopbacks.

But why is this?

To answer this, consider how R3 learns about 111.11.11.11/32. If XR1 were to enable OSPF on this loopback, it would include it as a connected network in its Type 1 LSA. This would then be communicated throughout the OSPF area, across the backdoor link, and arrive at R3. All devices are in the same area so their view of the LSDB would be the same. Assuming loopback111 is also redistributed into BGP, R3 would now have two options to reach it – one via OSPF with an administrative distance of 110 and one via iBGP with an administrative distance of 200.

blog11_image3_redistributing_loopbacks

OSPF would naturally win and the sham-link would be built over the backdoor link, which defeats the very goal we are trying to achieve! As such, we have to make sure that OSPF is not enabled on loopback 111 or loopback 33.

But, I hear you ask, what if we are still redistributing from MP-BGP into OSPF? Won’t R3 still see the path to loopback 111 via an external Type 5 LSA, which will still have a lower AD than iBGP’s 200?

Well, yes, but OSPF has a loop prevention mechanism built into it to prevent just such a thing…

When an LSA is created by redistributing from MP-BGP to OSPF, an OSPF feature called the down-bit is set in the resulting LSA. The down-bit ensures that any prefixes that are redistributed from MP-BGP into OSPF are not then redistributed back into MP-BGP. So whilst R3 will see the Type 5 LSA in its LSDB, it will not consider it a valid route, since it is already getting the prefix via MP-BGP and the down-bit indicates that it came from MP-BGP.

blog11_image4_down_bit

Here is the LSA as seen in the LSDB.

And if we check, we find that R3’s best path is via MP-BGP.

This loop prevention mechanism isn’t crucial to understanding the operation of the sham-link but it will come into play later on when we look at a potential routing loop.

Getting back to the sham-link, once we configure everything as outlined above the link comes up:

Both routers establish an OSPF adjacency and see each other as connected over a point-to-point link:

What’s interesting here is how XR1 sees the path to LAN1 over the sham-link:

It sees it as a BGP route and not an OSPF route! If we look at its BGP entry we see this:

It is clearly an OSPF based route. The OSPF attributes are all present. But how can an OSPF path over the sham-link appear as a BGP route?

Remember that in order to send traffic across the MPLS core two labels will be needed. The top label represents the next-hop PE. This will typically be repeatedly swapped as the packet crosses the core (unless we’re using segment routing, but that’s a whole other story). The second, bottom-of-stack label is the VPN label used to represent this customer’s prefix or VRF. This label is needed since the core P routers won’t know anything of the customer subnets. This label is communicated in the VPNv4 update from R3 as it redistributes LAN1 into MP-BGP.

Here is the logical process that XR1 follows:

  • XR1 runs the Dijkstra algorithm to find LAN1, taking the sham-link into account as a point-to-point link.
  • If the sham-link wins, XR1 will then use a VPNv4 route for LAN1, which in this case is being redistributed by R3. The best VPNv4 route will be used and placed in the BGP RIB instead of an OSPF route.

This logic is due to the recursion that is taking place over the sham-link:

So R3’s redistribution of LAN1 is needed so that XR1 has a VPN label to send traffic across the MPLS core. Here label 24 is the VPN label assigned by R3 and 16 and 24000 are the transport labels for the next hop of R3 via ECMP through Gi0/0/0/0.211 and Gi0/0/0/0.1112 respectively.

If we verify the source of the VPN label we can see that R3 is indeed assigning label 24:

As a side note, remember that the MP-BGP prefix that XR1 recursively uses is still in competition with any other VPNv4 route to the same destination (this becomes important later).

As a result of all of this, XR1 will not redistribute any OSPF routes into MP-BGP that it prefers over the sham-link. Redistribution takes place from the global RIB (or vrf RIB in this case) and there is no OSPF prefix in the RIB for LAN1 due to this recursive process.

Looking back at our communication between sites, we can now see that if the OSPF cost is lower across this sham-link when R4 and R5 run their Dijkstra algorithms, they will prefer this path as an intra-area link.

The below output shows that after increasing the metric on the backdoor link, a trace from the loopback of R5 to LAN1 goes via R4 to XR1 and over the MPLS core:

Success! You can even see the correct label stack in the trace. Traffic will now traverse the MPLS core as its primary path. Now let’s take a look at how, if you’re not careful how you add new subnets into OSPF, connectivity problems can pop up…

The quirk

Let’s pretend an engineer is tasked with configuring a new interface on R7 to be in LAN2 with a subnet of 192.168.71.0/24. Now let’s suppose that instead of enabling OSPF on the interface, the engineer uses the redistribute connected subnets command under the OSPF process:
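In other words, something like this on R7 (process ID assumed):

router ospf 1
 ! LAN2’s interface is left out of OSPF and injected as an external route instead
 redistribute connected subnets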

blog11_image5_adding_second_lan

Site 2 immediately reports issues reaching this new subnet and if we repeat a traceroute from R5 we can confirm it:

Visually it looks like this:

blog11_image6_looping_trace

It looks to be headed in the right direction to begin with, but XR1 is sending it over to R1 for some reason.  LAN1 still seems to work though:

Let’s start by looking at how R5 sees the path to LAN2 compared to LAN1:

The main difference here is that R5 sees this as an external E2 route. There is an external Type 5 LSA referencing LAN2 due to it being redistributed rather than having OSPF enabled on it:

The metric is 20 and the type is E2. This is the default for OSPF when redistributing connected routes. When an E2 route is used, the intra-area cost to the ASBR that originated the LSA (which in this case is R7) is not taken into consideration (outside of a tie-breaker scenario between two E2 routes). So, the metric is 20 and will stay 20. Also, note the down-bit is not set…

Looking at the next hop, R4, we see it has the same preference for an E2 route and it is still sending traffic in the right direction:

The point where the loop seems to start is XR1. Again, let’s compare how it reaches LAN2 compared to LAN1:

Both are preferring MP-BGP but LAN2 is unexpectedly advertised and preferred via R1….

Both paths from the reflectors are pointing to R1. Let’s take a look at R1 and see what’s going on.

Looks like R1 is using OSPF to reach LAN2.

This is simply an administrative distance decision from R1’s point of view. One path from iBGP, one from OSPF. OSPF wins. The Type 5 LSA is being seen over the backdoor link or over the sham-link. It hasn’t been through any redistribution. As such, no down-bit is being set and R1 has no reason not to redistribute it into MP-BGP as normal.

Now we are in a position to look at why XR1 sends the traffic to R1. Remember when the sham-link is the best OSPF path, the resulting route is a VPNv4 MP-BGP route to that destination, with the sham-link destination as the next-hop. This MP-BGP route must compete with all other MP-BGP routes using the best path selection algorithm.

To look at this process we can turn to one of the reflectors:

R2 is choosing the prefix advertised by R1 as the best path. It will then reflect this on and at the same time withdraw any previous best paths – this includes the path via 3.3.3.3 which XR1 should be using to reach the other end of the sham-link. XR1, still needing to use a VPNv4 prefix, falls back to its only available option, namely the VPNv4 prefix via R1.

You might think that it would fall back to another OSPF prefix, but remember, OSPF will simply run Dijkstra’s algorithm again and see the sham-link as the best path. The sham-link would still recurse to an MP-BGP VPNv4 prefix – and the R3-originated one has lost out to the R1-originated one. The sham-link can’t detect whether the VPNv4 prefix it recurses to will loop traffic back into the same site. It just tells OSPF to use a VPNv4 prefix. It’s simulating running OSPF over the MPLS core – hence the term sham.

So now we know why XR1 is looping the traffic… but why are the reflectors preferring the path that R1 advertises? For that, we can run through the BGP best path selection algorithm:

blog11_image7_BGP_analysis1

The BGP Router ID is determining the best path! This is far from ideal. We can test this by actually changing R1’s Router ID and clearing BGP (obviously never do this in a live environment):

It’s not a good thing if the communication between sites depends on the luck of the draw on how Router IDs are assigned. For consistency I’ll move the Router ID back to its default (in this case it will just use the highest numbered loopback).

You might also ask at this stage why LAN1 doesn’t suffer from this same problem. If we take a quick look at the reflectors, we can see that R1 is redistributing LAN1 just like LAN2 but the VPNv4 route from R3 is being preferred:

If we do the BGP best path calculation again we can see why:

blog11_image8_BGP_analysis1 2

The reason why LAN1 doesn’t loop is because of the MED (the cluster list might be the ultimate reason but the prefix from R1 is eliminated due to MED).

Remember when OSPF is redistributed into MP-BGP the OSPF cost is set to the MED value. When LAN2 was redistributed into MP-BGP by R1, it was an E2 route, meaning the intra-area cost to the ASBR was not taken into consideration. It stayed as 20 and thus MED was not a tie breaker.

LAN1 however is learned via R7’s intra-area Type1 LSA. When R1 redistributes this into MP-BGP it will take into consideration the cost to the ASBR. In this case it is 6 (assuming each OSPF link is cost 1 since the reference-bandwidth hasn’t been changed):

  1. Link to R5
  2. Link to R4
  3. Link to XR1
  4. Cost of the sham-link
  5. Link to R7
  6. Link to the loopback

R3 will redistribute it into MP-BGP after only two of those hops, hence the lower MED.

Whilst this technically does work for LAN1, it is arguably not the wisest solution to the problem. Even if the engineer had enabled OSPF on the interface rather than using redistribution we could have run into problems. Maybe there’s a better solution…

The Search

When it comes to searching for a solution to this quirk we have to keep in mind what we are trying to achieve as an end goal.

Perhaps one of the simplest solutions, on the face of it, is to make sure that the PE for the site that the network in question comes from sets a higher local preference when redistributing into MP-BGP:

blog11_image9_redist
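A sketch of what that could look like on R3 is below – the ASN and local-preference value are invented, and the match options simply mirror the internal/external redistribution mentioned earlier:

route-map SITE1-PREFER permit 10
 set local-preference 200
!
router bgp 65000
 address-family ipv4 vrf A
  redistribute ospf 1 match internal external 1 external 2 route-map SITE1-PREFER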

This would ensure that the reflectors would pick the correct VPNv4 route. And indeed if we configure it like that, it does appear to work:

It’s worth pointing out here that even though the backdoor link is also advertising an E2 Type 5 LSA, for which the intra-area cost is not taken into consideration, if two E2 routes have the same lowest cost, the intra-area cost to the ASBR is taken into consideration as a tie breaker. In this case, it is quicker to get to R7 going over the sham-link.

However we have to think about how this design is intended to work. On the one hand we want the backdoor link to be used as a backup link, but we also want Site 2 to be dual-homed. This means that if XR1 somehow becomes unavailable (perhaps because R4 or its uplink to XR1 goes down) we want R1 to be the primary path out of the site. But as things stand, if XR1 goes down we will end up using the backdoor link. This is because R1 doesn’t have a sham-link. It will prefer its local OSPF route over MP-BGP as we saw earlier.

We can simulate just such a scenario by shutting down R4’s uplink and tracing to LAN2, before bringing it back up so traffic goes back over the sham-link.

You could potentially run a different protocol across the backdoor link and rely on redistribution manipulation, but that could introduce more issues – I will leave those options open to discussion.

Possibly the best solution, in order to maintain OSPF as a contiguous area 0 running between both sites, is to give R1 a sham-link as well. This will allow R1 to form an adjacency with R3 and will prevent the redistribution of any OSPF routes into MP-BGP that would be preferred over the sham-link.

The Work

The work involved in configuration of the sham-link from R1 to R3 is analogous to what we saw on the R3 to XR1 link – the only difference being that both ends are IOS-XE routers.

blog11_image10_dual sham links
We can now test to see that if XR1 is lost, traffic will still follow the same path.

R1 is now acting as a redundant link out of Site 2. Depending on the LSA types, you could even adjust which of XR1 or R1 is the primary exit for Site 2 by adjusting the costs of the sham-links! As with nearly anything that requires a full mesh, scalability could become an issue, but for our purposes here it works well.

Sham-links aren’t the most widely used tools across Service Providers, but hopefully this blog has given some insight into how they work and what to consider to avoid some possible pitfalls. Are there any alternate solutions you can see that might work? I’m always keen to hear alternate ideas or comments. I came across this scenario whilst working through an INE lab, so if you haven’t seen ine.com you should definitely check them out! Thank you for reading and until next time.

IVE ARP’d on for too long

The purpose of this blog is to highlight how different platforms respond to ARP requests and to explore some strange default operations on Juniper IVE VPN platforms. This quirk was found during a datacentre migration, during which the top-of-rack/first-hop device changed from a Cisco IOS 6500 environment to a Nexus Switching environment. The general setup looks like this and follows an example customer with a Shared IVS setup:

blog10_diagram1_setup

In order to understand this scenario, it’s important to know what the Juniper IVE platform is and how it provides its VPN services.  To that end, I’ll give a brief overview of the platform before looking at the quirk.

IVE Platform

The Juniper 6500 IVE (Instant Virtual Extranet) platform, is a physical appliance that offers customers a unique VPN solution linking to their MPLS network. Once connected, a home worker will be connected to their corporate MPLS network just as if they were at a Branch Office.

(In order to avoid confusion between the Juniper 6500 IVE and the Cisco 6500 L3 switch -which also plays an important role in this setup but is a very different kind of device – I will just use the term IVE to refer to the Juniper platform)

IVE Ports

As you can see from the digram above, an IVE appliance has an external port and an internal port.

The external port, as its name implies, is typically assigned a public IP address. It also has virtual ports, which are analogous to sub-interfaces, each with their own IPs. Each of these virtual ports links to an individual customer’s VPN platform, or a shared VPN platform that holds multiple customer solutions. A common design involves placing a firewall in between the external interface and the internet. This allows the virtual interfaces to share the same subnet as the main external interface. Customer public IPs are destination NAT’d inbound (or MIP’d if you’re using a Juniper firewall) to their corresponding virtual IPs.

The internal port similarly services multiple customers. This port can be thought of as a trunk port, whereby each VLAN links to an individual customer’s VRF, typically with an SVI as the gateway – sometimes combined with HSRP or another FHRP.

Shared or Dedicated

Customers can have either a Shared or Dedicated VPN solution. These solutions are called IVS’s (or Instant Virtual Systems). You can have multiple IVS’s on a single IVE appliance.

Shared IVS Solutions represent a single multi-tenant IVS. Basically, multiple customers connect to the same IVS and are segmented by allocating them different sign-in pages and connection policies. Options are more limited than having a Dedicated IVS but can be more cost effective.

Dedicated IVS solutions give customers more flexibility. They can have more connected users and added customisation such as 2FA and multiple realms.

When an IVS is created it needs to link to the internal port. To do this, one or more VLANs can be assigned. If the platform is Dedicated, only a single VLAN needs to be assigned – namely that of the customer. This VLAN will link to an SVI in the customer’s VRF. If the platform is Shared, multiple VLANs are assigned – one per customer. However, in this case a default VLAN will also need to be assigned for when the IVS needs to communicate on a network that is independent of any of its individual customers. Typically the Shared Authentication VLAN is used for this.

But what is the Shared Authentication VLAN? This leads to the next part of the setup… how users authenticate.

Authentication

When a VPN user logs in from home and authenticates, the credentials they enter on the sign-in page will need to be… well… authenticated. Much like the IVS solutions themselves, there are both Shared and Dedicated options.

Customers can have their own LDAP or RADIUS servers within their MPLS networks. In this case the IVE will make a request to this LDAP when a user connects. This is called Dedicated Authentication.

Alternatively, the Service Provider can offer a Shared Authentication solution. This alleviates the customer from having to build and maintain their own LDAP servers by utilising a multi-tenant platform managed by the Provider. The customer supplies the user details, and the Service Provider handles the rest. 

Shared Authentication is typically used for Shared IVS’s. In order to connect to the Shared Authentication Server, a Shared IVS will allocate a VLAN – alongside all of its customer VLANs – on the internal trunk port. This links to the Provider’s network (for example an internal VRF or VLAN) where the Shared Authentication servers reside. It is this VLAN that is assigned as the default VLAN for the Shared IVS.

The below screenshot is taken from the Web UI of the IVE platform. It shows some of the configuration for a Shared IVS (namely IVS123).  It uses a default VLAN called Shared_Auth_Network as noted by the asterisk in the bottom right table:

blog10_image1_default_vlan

We’re nearly ready to look at the quirk. There is just one last thing to note regarding how a Shared IVS Platform, like IVS123, communicates with one of its customers Authentication Servers.

Here is the key sentence to remember: When a Shared IVS platform communicates with any authentication server (shared or dedicated), it will use its Shared Auth VLAN IP as the source address in the IP packet.

This behaviour seems very counterintuitive and I’m not sure why the IVS wouldn’t use the source IP of the VLAN for that customer IVS.

Whatever the reason for this behaviour, the result is that a Shared IVS Platform communicating with one of its customers’ Dedicated authentication servers will send packets with a source IP from the Shared Auth VLAN. But such a customer isn’t using Shared Auth. Their network doesn’t know or care about the Shared Auth environment. So when their Dedicated LDAP server receives an authentication request from the IVE, it sees the source IP address as being from this Shared Auth VLAN.

The solution, however, is easy enough (barring any IP overlaps)… The customer simply places a redistributed static route into its VRF pointing any traffic to this Shared Auth subnet back to their internal port of the IVE.
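Purely as an illustration – the Shared Auth subnet, VRF name, next hop and ASN below are all invented for the example – the route on the device holding the customer VRF might look something like this on classic IOS:

ip route vrf CUST-A 10.10.10.0/24 172.16.20.34
!
router bgp 64500
 address-family ipv4 vrf CUST-A
  ! make the Shared Auth subnet reachable from the rest of the customer VRF
  redistribute static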

To understand this better, let’s take a look at a diagram of the setup as a user attempts to connect:

blog10_diagram2_authentication

Now we are equipped to investigate the quirk, which looks at a customer on a Shared IVS platform, but with Dedicated LDAP Authentication Servers.

The quirk

As mentioned earlier, this quirk follows a migration of an IVE platform from an environment using Cisco IOS 6500s switches to an environment using Cisco Nexus switches.

In both environments, trunk ports connect to the internal IVE ports with SVIs acting as gateways. The difference comes in the control plane and data plane being used. The original IOS environment was a standard MPLS L3VPN network. The Nexus environment was part of a hierarchical VxLAN DC Fabric. Leaf switches connected directly to the IVEs and implemented anycast gateway on the SVIs. Prefix and MAC information was communicated over the EVPN BGP address family, and ASR9k DCIs acted as border leaves terminating the VTEPs, which were then stitched into the MPLS core.

The key difference however, isn’t in the overlays or dataplane protocols being used. The key is how each ToR device responds to ARP…

Once the move was completed and the IVE was connected to the Nexus switches everything seemed fine at first glance. Users with Dedicated IVS’s worked. Users on Shared IVS’s who utilised the Shared Auth server could also login and authenticate correctly. However a problem was found when checking any customer who had a VPN solution configured on a Shared IVS platform with Dedicated Authentication. Despite the customer login page showing up (implying that the public facing external side was working), authentication requests to their Dedicated Auth Servers were failing.

Below is the Web UI output of a test connection to our example customer’s LDAP server at 192.168.10.10.

blog10_image2_ldap_failure

As we searched for a solution to this problem, we had to keep in mind how a Shared IVS Platform makes Auth Server requests…

The search

Focusing on just one of the customers on the Shared platform, we first checked how far a trace would get from the IVE to the Dedicated Auth Server. We found pretty quickly that the trace would not even reach the first hop – that is, the anycast gateway IP that was on the SVI of the Nexus leaf switch.

blog10_image3_trace_fail

However, when checking from the Nexus, both routing and tracing showed that we could reach the Dedicated Auth Server fine – as long as we sourced from the right VRF.

This led us to check the Layer 2 between the switch and the IVE. We did this by checking the ARP table entries on the IVE. We immediately found that there were no ARP entries to be found for the ToR SVI for any customer on a Shared Platform with a Dedicated Authentication setup.

The output below shows the ARP table as seen from the console of the IVE. Note the incomplete ARP entry for 172.16.20.33, the SVI on the Nexus for our example customer.

(As a quick aside, you may notice that the HWAddress of the Nexus is showing as 11:11:22:22:33:33. This is due to the fabric forwarding anycast-gateway-mac 1111.2222.3333 command being configured.)

So there is no ARP entry. But logically this appears to be more or less the same Layer 2 segment as when it connected to the 6500. So what gives?

It turns out that 6500s and Nexus switches respond to ARP requests in different ways. The process on the 6500 is fairly standard and works as follows:

blog10_diagram3_6500_arp

But a Nexus will not respond to an ARP request if the source IP is from a subnet that it doesn’t recognise:

blog10_diagram4_nexus_arp

In our example case, the Nexus switch does not recognise 10.10.10.10 as a valid source IP for the receiving interface (which has IP 172.16.20.33). It sees it as off-net. We could also see the ARP check failing by using debug ip arp packet on the switch.

So what’s the solution? There are a couple of ways to tackle this. We could add a static ARP entry on the IVE, but this could be cumbersome if we needed to add it for each Shared IVS. Alternatively, we could add a secondary IP from the Shared Auth subnet to the SVI…

The Work

Adding a secondary IP is fairly straightforward. The config would be as follows:
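Something along these lines on the leaf switch SVI – the VLAN number is a placeholder, and the /31 covers the IVE’s 10.10.10.10 plus the new secondary 10.10.10.11:

interface Vlan2020
 ! secondary address so the Nexus sees the IVE's Shared Auth source IP as on-net
 ip address 10.10.10.11/31 secondary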

A /31 works well in this case, encompassing only the IPs that are needed (namely 10.10.10.10 and 10.10.10.11). This allows the ARP request to pass the aforementioned check that the Nexus performs. From here the ARP entries began to show up and connectivity to the customer’s Dedicated Auth Servers began to work.

blog10_image4_ldap_success

So this raises the question of whether or not this behaviour is desirable. Should a device responding to an ARP request check the source IP? I’d tend to lean in favour of this type of behaviour. It adds extra security, and besides, it’s actually the behaviour of the IVE that is strange in this case. One would think that the IVS would use the source IP of the connecting customer’s subnet, instead of that of the Shared Auth VLAN. The behaviour certainly is unorthodox, but finding a solution to this problem highlights some of the interesting scenarios that can arise when working with different vendors and operating systems.

I hope you’ve enjoyed the read. I’m always open to alternate ideas or general discussion so if you have any thoughts, let me know.

Peering into the Future

Network automation is becoming more and more ubiquitous these days. Configuration generation is a good example of this – why spend time copying and pasting from prepared templates if a script can do it for you?

This small blog introduces the first Python script to be released on netquirks. The script is called PeerPal and it automates the creation of Cisco eBGP peering configuration by referencing input from both a config file and details gathered via the peeringdb.com API. This serves as a good example of how network automation can make performing regular tasks faster, with fewer errors and more consistency.

The GitHub repo can be found here.

It works by taking in a potential peer’s autonomous system number and checking with PeeringDB to find the Internet Exchanges at which both your ASN and theirs have a common presence. A list is then presented, first for IPv4 then for IPv6, allowing you to select which locations to generate the peering config for. It can do this in either IOS or XR format. It reads the neighbor’s IPs, prefix limits and even IRR descriptions from PeeringDB and integrates them into the final output.

Other specifics of the peering, like your ASN, neighbor groups, MD5 passwords, ttl-security or what the operating system format should be, are all stored in a local config file. This can be customised per Internet Exchange.

The best way to demonstrate the script is to give a quick example. Let’s say the ISP netquirks (ASN 1234) wants to peer with ACME (ASN 5678). The script is run like this:

myhost:peerpal Steve$ python3 ./peerpal.py -p 5678
The following are the locations where Netquirks and ACME have 
common IPv4 presence:
(IPs for ACME are displayed)
1: LINX LON1 - 192.168.101.1
2: CATNIX - 10.10.1.50
3: DE-CIX Frankfurt - 172.16.1.90,172.16.1.95
4: IXManchester - 10.11.11.25
5: France-IX Paris - 172.16.31.1,172.16.31.2
6: DE-CIX_Madrid - 192.168.7.7
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv4: 

The script first lists the Exchange names and their IPv4 IPs. Enter the Exchanges you want to peer at, and then do the same for IPv6…

myhost:peerpal Steve$ python3 ./peerpal.py -p 5678
The following are the locations where Netquirks and ACME have 
common IPv4 presence:
(IPs for ACME are displayed)
1: LINX LON1 - 192.168.101.1
2: CATNIX - 10.10.1.50
3: DE-CIX Frankfurt - 172.16.1.90,172.16.1.95
4: IXManchester - 10.11.11.25
5: France-IX Paris - 172.16.31.1,172.16.31.2
6: DE-CIX_Madrid - 192.168.7.7
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv4: 2,4

The following are the locations where Netquirks and ACME have 
common IPv6 presence:
(IPs for ACME are displayed)
1: LINX LON1 - 2001:1111:1::50
2: CATNIX - 2001:2345:6789::ca7
3: DE-CIX Frankfurt - 2001:abc:123::1,2001:abc:123::2
4: IXManchester - 2001:7ff:2:2::ea:1
5: France-IX Paris - 2001:abab:1aaa::60,2001:abab:1aaa::61
6: DE-CIX_Madrid - 2001:7f9:e12::fa:0:1
Please enter comma-separated list of desired peerings (e.g. 1,3,5) 
or enter 'n' not to peer over IPv6: 6

The output produced looks like this:

IPv4 Peerings:
****************
The CATNIX IPv4 peerings are as follows:
=============================================================
Enter the following config onto these routers:
cat-rtr1.netquirks.co.uk

IOS CONFIG
----------
router bgp 1234
 neighbor 10.10.1.50 remote-as 5678
 neighbor 10.10.1.50 description AS-ACME
 neighbor 10.10.1.50 inherit peer-session EXTERNAL
 address-family ipv4 unicast
  neighbor 10.10.1.50 activate
  neighbor 10.10.1.50 maximum-prefix 800 90 restart 60
  neighbor 10.10.1.50 inherit peer-policy CATNIX

The IXManchester IPv4 peerings are as follows:
=============================================================
Enter the following config onto these routers:
mchr-rtr1.netquirks.co.uk
mchr-rtr3.netquirks.co.uk

XR CONFIG
----------
router bgp 1234
 neighbor 10.11.11.25
  remote-as 5678
  use neighbor-group default_v4_neigh_group
  ttl-security
  description AS-ACME
  address-family ipv4 unicast
   maximum-prefix 800 90 restart 60

IOS CONFIG
----------
router bgp 1234
 neighbor 10.11.11.25 remote-as 5678
 neighbor 10.11.11.25 description AS-ACME
 neighbor 10.11.11.25 inherit peer-session peer-sess-mchr4
 neighbor 10.11.11.25 ttl-security hops 1
 address-family ipv4 unicast
  neighbor 10.11.11.25 activate
  neighbor 10.11.11.25 maximum-prefix 800 90 restart 60
  neighbor 10.11.11.25 inherit peer-policy peer-pol-mchr4

IPv6 Peerings:
****************

The DE-CIX_Madrid IPv6 peerings are as follows:
=============================================================

IOS CONFIG
----------
router bgp 1042
 neighbor 2001:7f9:e12::fa:0:1 remote-as 5678
 neighbor 2001:7f9:e12::fa:0:1 description AS-ACME
 neighbor 2001:7f9:e12::fa:0:1 peer-group Mad1-6
 neighbor 2001:7f9:e12::fa:0:1 ttl-security hops 1
 address-family ipv6 unicast
  neighbor 2001:7f9:e12::fa:0:1 activate
  neighbor 2001:7f9:e12::fa:0:1 maximum-prefix 40 90 restart 60

From the output you can see that there are different specifics based on the internet exchange. Madrid uses ttl-security and peer-groups, whereas CATNIX doesn’t have ttl-security and uses peer session and policy templates. All of these specifics are stored in a local config file:

[DEFAULT]
as = 1234
op_sys = xr
ttl_sec = true
xr_neigh_grp_v4 = default_v4_neigh_group
xr_neigh_grp_v6 = default_v6_neigh_group
ios_neigh_grp_v4 = default_v4_peer_group
ios_neigh_grp_v6 = default_v6_peer_group

[CATNIX]
routers = cat-rtr1.netquirks.co.uk
op_sys = ios
ios_neigh_grp_v4 = EXTERNAL,CATNIX
ios_neigh_grp_v6 = EXTERNAL,CATNIX6
ttl_sec = false
                     
[IXManchester]
routers = mchr-rtr1.netquirks.co.uk,mchr-rtr3.netquirks.co.uk
op_sys = both
ios_neigh_grp_v4 = peer-sess-mchr4,peer-pol-mchr4
ios_neigh_grp_v6 = peer-sess-mchr6,peer-pol-mchr6

[France-IX Paris]
xr_neigh_grp_v4 = FRANCE-NEIGH-IX
xr_neigh_grp_v6 = FRANCE-NEIGH-IXv6
ttl_sec = false

[Exchange_Number_1250]
as = 1042
op_sys = ios
ios_neigh_grp_v4 = Mad1-4
ios_neigh_grp_v6 = Mad1-6
correction = DE-CIX_Madrid

The script generally follows the structure of reading from the more specific sections first. If an IX section contains a characteristic like ttl-security, the config for that exchange will use that characteristic. If it is absent, the config will fall back on the DEFAULT section. There are a couple of exceptions to this and full details can be found in the README file on the repo. The config file can also specify which routers the config should be applied to, and can supply the name of an Internet Exchange if Peering DB doesn’t have one set (DE-CIX_Madrid is an example of this, as shown above). Again, full details are in the README.

This gives a brief introduction to PeerPal. It’s not a revolutionary script by any means but will hopefully come in handy for anyone working on peering or BGP configurations on a regular basis. Future planned features include pushing the actual config to the routers and conducting automated checks to make sure that prefixes and traffic levels adhere to your peering policy – watch this space.

So feel free to clone the repo and give it a go. Thoughts and comments welcome as always.

The A to Zabbix of Trapping & Polling

Monitoring is one of the most crucial parts of running any network. There are many tools available to perform network monitoring, some of which are more flexible than others. This quirk looks at the Zabbix monitoring platform – more specifically, how you can combine SNMP polling and trapping in triggers to monitor an IP network, based on Zabbix version 3.2.

The blog assumes you’re already familiar with the workings of Zabbix. However, if you aren’t, the following section gives a whistle-stop tour, from the perspective of discovering and monitoring network devices using SNMP. If you are already familiar with Zabbix, skip to The quirk section below.

Zabbix –  SNMP Monitoring Overview

Zabbix can do much (much) more than I’ll outline here, but if you’re not familiar with it, I’ll describe roughly how it works in relation to this quirk.

The Zabbix application is installed on a central server with the option of having one or more proxy servers that relay information back to the central server. Zabbix has the capability to monitor a wide range of environments from cloud storage platforms to LAN switching. It uses a variety of tools to accomplish this, but here I’ll focus on its use of SNMP.

Anything that can be exposed in an SNMP MIB can be detected and monitored by Zabbix. Examples of metrics or values that you might want to monitor in a networking environment include:

  • Interface states
  • Memory and CPU levels
  • Protocol information (neighbor IPs, neighborship status, etc.)
  • System uptime
  • Spanning-Tree events
  • HA failover events

In Zabbix these metrics/values are called items. A device that is being monitored is referred to as a host.

Zabbix monitors items on hosts by both SNMP polling and trapping. It can, for example, poll a switch’s interfaces every 5 minutes and alert if a poll response comes back stating the interface is down (the ifOperStatus OID is good for this). Alternatively an item can be configured to listen for traps. If a switch interface drops, and that switch sends an SNMP trap (either to the central server or one of its proxies), Zabbix can pick this up and trigger an alert.

So how is it actually configured and setup?

The configuration of Zabbix to monitor SNMP follows these basic steps. Zabbix specific terms have been coloured red:

  • Add a new host into Zabbix – including its IP, SNMP community and name. The device in question will need to have the appropriate read-only SNMP community configured and have trapping/polling allowed to/from the Zabbix address.
blog8_image1_hostconfig

  • Configure items for that host – An item can reference a poll (e.g. poll this device for its CPU usage) or a trap (e.g. listen for an ‘interface up/down’ trap).
blog8_image2_itemconfig

  • Configure triggers that match particular expressions relating to one or more items. For example, a trigger could be configured to match against the ‘CPU usage’ item receiving a value (through polling) of 90 or more (e.g. 90% CPU). The trigger will then move from an OK state to a PROBLEM state. When the trigger clears (more on that below) it will move from a PROBLEM state back to an OK state.
blog8_image3_triggerconfig

  • Configure actions that correspond to triggers moving to a PROBLEM state – options depend on the severity level of the trigger but could be something like sending an email or integrating with the API of something like PagerDuty to send an SMS.

This process is pretty simple on the face of things, but what happens if you have 30 switches with 48 interfaces each? You couldn’t very well configure 30×48 items that monitor interface states. That’s a lot of copy and pasting!

Thankfully, Zabbix has two features that allow for large scale deployments like this: 

Templates – Templates allow you to configure what are called prototype items and triggers. These prototypes are all bundled into one common template. You can then apply that template to multiple devices and they will all inherit the items and triggers without them needing to be configured individually.

Low Level Discovery – LLD allows you to discover multiple items based on SNMP tables. For example, if you create an LLD rule with the SNMP OID ifIndex (1.3.6.1.2.1.2.2.1.1) as the Key, Zabbix will walk that table and discover all of the host’s interfaces. You can then take the index of each row in the table and use it to create items and triggers based on other SNMP tables. For example, after discovering all the rows of the ifIndex table you could use the SNMP Index in each row to find the ifOperStatus of each of those interfaces. It doesn’t matter if the host has 48 or 8 interfaces, they will all be added using this LLD. Here’s an example of the principle using snmpwalk:

blog8_image4_snmpsample
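For reference, the underlying commands look something like this (the host address and community string are illustrative; the OIDs are the standard IF-MIB ones):

snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.2.2.1.1
(walks ifIndex and returns one row per interface)

snmpget -v2c -c public 192.0.2.10 1.3.6.1.2.1.2.2.1.8.3
(fetches ifOperStatus for the interface whose index is 3)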

Now this is a very high level overview of Zabbix. I’m just giving a brief snapshot for those who haven’t worked with Zabbix.

Before mentioning the specifics of this quirk, I’ll go into a little more detail on how triggers work, since they play a crucial role…

A trigger is an expression that is applied to an item and, as you might expect, is used to detect when a problem occurs. A trigger has two states: OK or PROBLEM. To detect when a problem occurs, a trigger uses an aptly named problem expression. The problem expression is basically a statement that describes the conditions under which the trigger should go off (e.g. move from OK to PROBLEM).

Examples of a problem expression could be “the last poll of interface x on switch y indicates it is down” or “the last trap received from switch y indicates interface x is down”.

Triggers also have a recovery expression. This is sort of the opposite of a problem expression. Once a trigger goes off, it will remain in the PROBLEM state until such time as the problem expression is no longer true. If the problem expression suddenly evaluates to false, the trigger will move to looking at the recovery expression (if one exists). At this point, the trigger will stay in a PROBLEM state until the recovery expression becomes true. The distinction to pay attention to here is that even though the original condition that caused the trigger to go off is no longer true, the trigger remains in a PROBLEM state until the recovery expression is true. Most importantly, the recovery expression is not evaluated until the problem expression is false. Remember this for later.

So, with all of that said, let’s take a look at the quirk.

The quirk

This quirk explores how to configure triggers within Zabbix to use both polling and trapping to monitor a network device such as a router or switch.

To illustrate the idea I will keep it simple – interface states. Imagine a template applied to a switch that uses LLD to discover all of the interfaces using the ifIndex table.

Two item prototypes are created:

One that polls the interface state (ifOperStatus) every 5 minutes

and

One that listens for traps about interface states – either going down (for example listening for 1.3.6.1.6.3.1.1.5.3 linkDown traps) or coming up (for example listening for 1.3.6.1.6.3.1.1.5.4 linkUp traps)
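To make the trigger sketches later in this post concrete, assume two illustrative prototype keys (the naming is my own – the exact keys depend on how you build the items):

ifOperStatus[{#SNMPINDEX}]
(an SNMP agent item polling OID 1.3.6.1.2.1.2.2.1.8.{#SNMPINDEX} every 300 seconds)

snmptrap["Link (up|down) on interface {#SNMPINDEX}"]
(an SNMP trap item matching the log lines written by the trap handling process for linkUp/linkDown traps)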

The question is, how should the trigger be configured? We do not want to miss an interface that flaps. If an interface drops, we want the trigger to move to a PROBLEM state. But if our trigger is just monitoring the polling item and the interface goes down and comes back up within a polling cycle then Zabbix won’t see the flap.

To illustrate these concepts, I’ll use a diagram that shows a timeline together with what polling and trapping information is received by Zabbix. It uses the following legend:

blog8_image5_legend

This first diagram illustrates how Zabbix could “miss” an interface flap, if it occurs between polling responses:

blog8_image6_diagram1

You can see here, that without trapping, as far as Zabbix is concerned the interface never drops.

So what if we just make our trigger monitor traps?

This also runs into trouble when you consider that SNMP runs over UDP and there is no guarantee that a trap will get through (especially if the interface drop affects routing or forwarding). Worse still, if the trap stating that the interface is down (the DOWN trap) makes it to Zabbix but the recovery trap (the UP trap) doesn’t make it to Zabbix then the trigger will never recover!

blog8_image7_diagram2

It appears that both approaches on their own have drawbacks. The logical next step would be to look at combining the best of both worlds – i.e. configure a trigger that will move to a PROBLEM state if it receives a DOWN trap or a poll sees the interface as down. That way, one backs the other up. The idea looks like this:

blog8_image8_diagram3

Seems simple enough. However, the quirk arises when you realise there is still a problem with this approach… namely, if the UP trap is missed, the trigger will still not recover.

To understand why, we’ll look at the logic of the trigger expression. The trigger expression is a disjunction – an or statement. The two parts of this or statement are:

The last poll states the interface is down

OR

The last trap received indicates the interface is down

A disjunction only requires one of the parts to be true for the whole expression to be true.

Consider this scenario: A DOWN trap is received, making the second part of the expression true. The trigger moves to a PROBLEM state. So far so good. Now imagine a few minutes later the interface comes back up but the UP trap is never received by Zabbix. Because this is a disjunction, even if the last poll shows the interface as up, the second half of the expression is still true – as far as Zabbix is concerned, the last trap it received showed the interface as down. As a result the alert will never clear (meaning the trigger will never move from PROBLEM back to OK).

blog8_image9_diagram4

There needs to be some way to combine the two that doesn’t leave the trigger stuck in a PROBLEM state. When searching for a solution, the recovery expression comes into play…

The search

To focus on finding a solution we will first look at solving the missing UP trap problem. For now, don’t worry about polling.

Let’s say we have a trigger with the following trigger expression:

The last trap received indicates the interface is down

Then clearly, if the trigger has gone off and we miss the UP trap when the interface recovers, this alert will never clear. So what if we combine this, using an and statement, with something else – something that will, no matter what, eventually become false. Since an and statement is a conjunction, both parts will need to be true. We can then use the recovery expression to control when the trigger moves back to an OK state.

We can leverage polling for this since, if the interface is down, polling will eventually detect it. So our trigger expression changes to this:

The last trap received indicates the interface is down

AND

The last poll states the interface is up

At first this might seem counter-intuitive given what we looked at above, but consider that when an interface drops and the switch sends a trap to Zabbix stating that the interface is down, the last poll that Zabbix made to the switch should have shown the interface as up – hence both statements are true and the trigger correctly moves to a PROBLEM state.

But as soon as polling catches up and detects that the interface is down, the second part of our trigger expression will become false. This makes the whole trigger expression false (since it is a conjunction) and the trigger will recover and move back to an OK state.

blog8_image10_diagram5

Now this is obviously not good. The interface is, after all, still down! But we can use the recovery expression to control when the trigger recovers.

Remember from earlier that if a recovery expression exists, it will be looked at once the problem expression becomes false.

We couldn’t just configure a recovery expression on its own without the above tweak, since as long as the problem expression stays true the recovery expression will be ignored.

From here the solution is simple. Our recovery expression simply states

The last two polls that we received stated the interface was up.

This means that as soon as polling detects that the interface is down, the problem statement becomes false and the recovery expression is looked at. Now, until two polls in a row detect that the interface is up, the trigger will stay in a PROBLEM state.

blog8_image11_diagram6

Interestingly, what we’ve essentially done is solve the missing UP trap problem by removing the need to rely on UP traps at all! After two UP polls the trigger recovers (note the blue line of the timeline in the above diagram). You could optionally add an ‘…or an UP trap is received’ clause to the recovery statement to make the recovery quicker.

But there is a caveat to this case…

Consider what happens if an interface flaps within a polling cycle, meaning as far as polling is concerned, the interface never goes down. This would mean that, in the event that the UP trap is missed, the problem statement will never become false. This means the trigger will never recover and we’re back to square one…

blog8_image12_diagram7

What we need is something that will inevitably cause the trigger statement to become false. Using polling doesn’t work because as we have seen, it can “miss” an interface flap.

Fortunately Zabbix has a function called nodata which can help us. The function can be found here and works as follows:

nodata(x) returns 1 (true) if the referenced item has received no data in the last x seconds, and 0 (false) if it has received data.

To better understand this, let’s see what happens if we remove the statement The last poll states the interface is up and replace it with one that implements this function. Our trigger statement would then become the following:

The last trap received indicates the interface is down

AND

There has been some trap data received in the last x seconds (where x is bigger than the polling interval)

The second part of this conjunction is represented by trap.nodata(350) = 0 (e.g. “It is false that there has been no trap information received in the last 350 seconds” which basically means “you have received some trap information in the last 350 seconds”).
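Using the illustrative item keys from earlier, the problem expression could look roughly like this in Zabbix 3.x trigger syntax (the template name is a placeholder, and the exact function usage should be checked against the Zabbix documentation):

{Template SNMP Interfaces:snmptrap["Link (up|down) on interface {#SNMPINDEX}"].regexp("Link down")}=1
and
{Template SNMP Interfaces:snmptrap["Link (up|down) on interface {#SNMPINDEX}"].nodata(350)}=0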

Once the 350 seconds expires that statement becomes false and the trigger moves to looking at the recovery expression. Remember our polling interval was 5 minutes, or 300 seconds. 

The value x must be at least as long as a polling interval; this gives the polling a chance to catch up, as it were. Consider a scenario where x is less than a single polling interval and the interface drops just after the last poll. The nodata(x) expression will expire before the next poll comes through. When this happens, the trigger statement is false, so Zabbix will move to look at the Recovery Expression (which states that the last two polls are up). Zabbix will see the last two polls as up and the trigger will recover while the interface is still down!

blog8_image13_diagram8

If x is bigger than the polling interval, polling can catch up and the trigger behaves correctly.

blog8_image14_diagram9

Now that we have solved this we can reintroduce polling into the trigger. Remember that the initial DOWN trap could still be missed. We saw that there were problems when trying to integrate polling and trapping together into a trigger’s Problem Expression, but we can easily create a single poll-based trigger.

This trigger can be relatively simple. The Problem Expression simply states that the last two polls show the interface as down. There doesn’t need to be a Recovery Expression, since when the trigger sees two UP polls it can recover without problems.

Now we’ve got another problem though. We don’t want two triggers to go off for just one event. Thankfully, Zabbix supports trigger dependencies. If we configure the poll-based trigger to only move to a PROBLEM state if the trap-based trigger is not in a PROBLEM state, then the poll-based trigger effectively acts as a backup to the trap-based one. I’ll explore the exact configuration of this in the work section.

Once this has been configured you’ll have a working solution that supports both polling and trapping, without having to worry about alerts not triggering or clearing when they should. Let’s take a look at how this is configured in the Zabbix UI.

The Work

In this section I will show screenshots of the triggers that are used in the aforementioned solution. I haven’t shown the configuration of the LLD or of any corresponding Actions (that will result in email or text messages being sent), but Zabbix has excellent documentation on how to configure these features.

First we’ll look at the trapping configuration:

blog8_image15_traptrigger

The Name field can use variables based on OIDs (like ifDescr and ifAlias) that are defined in the Low Level Discovery rule, to make the trigger contain meaningful information about the affected interface. The trigger expression references the trap item that listens for interface down traps.

The trap item itself looks at the log output produced when snmptrapd passes incoming traps through an SNMPTT config file. This process parses incoming traps and creates log entries. Trap items can then match against these logs.

In this case, the item matches against log entries containing the string

“Link up on interface {#SNMPINDEX}” – which is produced when a linkup trap is received

Or

“Link down on interface {#SNMPINDEX}” – which is produced when a linkdown trap is received

where {#SNMPINDEX} is the index of the table entry for the ifIndex table.

In this trigger expression the trap item is referenced twice. Firstly, it matches a trap item that has the “link down” substring in it (i.e. a down trap has been received for that ifIndex). Secondly, it uses the nodata(350) = 0 (false) function – this means that “some trap data has been received in the past 350 seconds”.

This matches the pseudo-expression we have above:

The last trap received indicates the interface is down

AND

There has been some trap data received in the last x seconds (where x is bigger than the polling interval).

If a trap is received stating the interface is up, the trap item will no longer contain the string “link down” – rather it will contain “link up”, so the first part will become false.

Alternatively, if no trap is received in 350 seconds (either UP or DOWN) the second half of the AND statement will become false. The polling interval is less than 350 seconds, so if the UP trap is missed polling will have the chance to catch up.

Either way, the trigger will eventually look at the recovery expression. The recovery expression references the ifOperStatus item and the ifAdminStatus item.

The recovery expression basically states:

IF

The last two polls of the interface operational state is up

OR

The last poll of the administrative state of the interface is down (i.e. someone has issued ‘shutdown’ on the interface, if it’s an interface on a Cisco device)

THEN recover.

The second half of the disjunction is used to account for scenarios where an engineer deliberately shut down an interface – in which case you would not want the alert to persist.
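As a sketch, using the same illustrative keys from earlier (ifAdminStatus would be another polled item, on OID 1.3.6.1.2.1.2.2.1.7; for both OIDs a value of 1 means up and 2 means down):

{Template SNMP Interfaces:ifOperStatus[{#SNMPINDEX}].count(#2,1,"eq")}=2
or
{Template SNMP Interfaces:ifAdminStatus[{#SNMPINDEX}].last()}=2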

Next we’ll look at the polling trigger:

blog8_image16_polltrigger

This one is much simpler. The trigger will go off if the last two polls of the interface indicate that the operational state is down (2) AND the admin state is up (1) – meaning that it hasn’t been manually shut down by an engineer.
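A sketch of that problem expression, with the same illustrative keys:

{Template SNMP Interfaces:ifOperStatus[{#SNMPINDEX}].count(#2,2,"eq")}=2
and
{Template SNMP Interfaces:ifAdminStatus[{#SNMPINDEX}].last()}=1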

Finally, the last trick to making this solution work is in the dependencies tab of this trigger prototype:

blog8_image17_dependency

In this screen, the trap-based trigger has been selected as a dependency for the poll-based trigger. This means that the poll-based trigger will only go off if the trap-based trigger hasn’t gone off.

So that’s the work involved in configuring the actual triggers and it brings us to the end of this quirk. It demonstrates how to combine polling and trapping into Zabbix triggers to allow for consistent and correct alerting.

Zabbix has a wide range of functions and capabilities – far more than what I’ve outlined here. There may very well be another way to accomplish the same goal, so as usual, any thoughts or ideas are welcome.

The Friend of my Friend is my Enemy

Imagine you’re a provider routing a PI space prefix for one of your customers. Now imagine that one of your IX peers starts to advertise a more specific subnet of that customer network to you. How would, and how should, you forward traffic destined for that prefix? This quirk looks at just such a scenario from the point of view of an ISP that adheres to BCP38 best practice filtering policies…

The quirk

So here’s the scenario:

Blog7_image1_setup

In this setup Xellent IT Ltd is both a customer and a provider. It provides transit for ACME Consulting but it is a customer of Provider A. ACME owns PI space and chooses to implement some traffic engineering. It advertises a /23 to Xellent IT and a /24 to Provider B.

Now Provider B just happens to peer with Provider A over a public internet exchange. The quirk appears when traffic from the internet, destined to 1.1.1.1/32, enters Provider A’s network, especially when you consider that Provider A implements routing policies that adhere to BCP38.

But first, what is BCP38?

You can read it yourself here, but in short, it is a Best Current Practice document that advocates prefix filtering to minimise threats like DDoS attacks. It does this by proposing inbound PE filtering on customer connections that blocks traffic whose source address does not match that of a known downstream customer network. DDoS attacks commonly use spoofed source addresses, so if every provider filtered traffic from its customers to make sure that the source address was from the right subnet (and not spoofed), these kinds of DoS attacks would largely disappear overnight.

To quote the BCP directly:

In other words, if an ISP is aggregating routing announcements for multiple downstream networks, strict traffic filtering should be used to prohibit traffic which claims to have originated from outside of these aggregated announcements.
BCP38 – P. Ferguson, D. Senie

To put it in diagram form, the basic idea is as follows:

Blog7_image3_BCP38_inbound

A provider can also implement outbound filtering to achieve the same result. That is to say, outbound filters can be applied at peering and transit points to ensure that the source addresses of any packets sent out come from within the customer cone of the provider (a customer cone is the set of prefixes sourced by a provider, either as PI or PA space, that makes up the address space for its customer base). This can be done in conjunction with, or instead of, the inbound filtering approach.

Blog7_image4_BCP38_outbound

There are multiple ways a provider can build their network to adhere to BCP38. As an example, an automated tool could be built that references an RIR database like RIPE. This tool could perform recursive route object lookups on all autonomous systems listed in the provider’s AS-SET and build an ACL that blocks all outbound border traffic whose source address is not in that list.
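To make that concrete, a generated outbound filter on a Cisco border router might look something like this sketch (the prefixes are invented stand-ins for a customer cone, and the interface name is illustrative):

ip access-list extended BCP38-CUSTOMER-CONE-OUT
 permit ip 198.51.100.0 0.0.0.255 any
 permit ip 203.0.113.0 0.0.0.255 any
 deny   ip any any
!
interface TenGigabitEthernet0/0/0
 description Peering / transit facing interface
 ip access-group BCP38-CUSTOMER-CONE-OUT out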

Regardless of the method used, this quirk assumes that Provider A is using both inbound and outbound filtering. But as we’ll see, it is the outbound filtering that causes all the trouble… here’s the traffic flow:

Blog7_image2_traffic_blackholing

Now you might ask why the packet would follow this particular path. Isn’t Provider B advertising the more specific /24 it receives from ACME? How come the router that sent the packet to Provider A over the transit link can’t see the /24?

There are a number of reasons for this and it depends on how the network of each Autonomous System along the way is designed. However, one common reason could be a traffic engineering service offered by Internet Providers called prefix scoping.


Prefix scoping allows a customer to essentially tell its provider how to advertise its prefix to the rest of the internet. This is done by including predetermined BGP communities in the prefix advertisements. The provider will recognise these communities and alter how they advertise that prefix to the wider internet. This could be done through something like route-map filtering on these communities.

In this scenario, perhaps Provider B is offering such a service. ACME may have chosen to attach the ‘do not advertise this prefix to your transit provider x’ community to its BGP advertisement to Provider B. As a result, the /24 prefix doesn’t reach the router connecting to Provider A over its transit link, so it forwards according to the /23.

This is just one example of how traffic can end up at Provider A. For now, let’s get back to the life of this packet as it enters Provider A.

Upon receipt of the packet destined for 1.1.1.1/32, Provider A’s border router will look in its routing table to determine the next hop. Because it is more specific, the 1.1.1.0/24 learned over peering will be seen in the RIB as the best path, not the /23 from the Xellent IT link. The packet is placed in an LSP (assuming an MPLS core) with a next hop of the border router that peers with Provider B at the Internet Exchange.

You can probably see what’s going to happen. When Provider A’s border router at the Internet Exchange tries to forward the packet to Provider B, it has to pass through an outbound ACL. This ACL has been built in accordance with BCP38. The ACL simply checks the source address to make sure it is from within the customer cone of Provider A. Since the source address is an unknown public address sourced from off-net, the packet is dropped.

Now this is inherently a good thing, isn’t it? Without this filtering, Provider A would be providing transit for free! However, it does pose a problem after all, since traffic for one of its customers’ subnets is being blackholed.

From here, ACME Consulting gets complaints from its customers that they can’t access their webserver. ACME contacts its transit providers and before you know it, an engineer at Provider B has done a traceroute and calls Provider A to ask why the final hop in the failed trace ends in Provider A’s network.

So where to from here? What should Provider A do? It doesn’t want to provide transit for free, and its policy states that BCP38 filtering must be in place. Let’s explore the options.

The Search

Before I look at the options available, it’s worth pausing here to reference an excellent paper by Pierre Francois of the Université catholique de Louvain entitled Exploiting BGP Scoping Services to Violate Internet Transit Policies. It can be read here and describes the principles underlying what is happening in this quirk at a higher level, shedding light on why it happens. I won’t go into exhaustive detail – I highly recommend reading the paper yourself – but to summarise, there are 3 conditions that come together to cause this problem.

  1. The victim Provider whose policy is violated (Provider A) receives the more specific prefix from only peers or transit providers.
  2. The victim Provider also has a customer path towards the less specific prefix.
  3. Some of the victim Provider’s peers or transit providers did not receive the more specific path.

This is certainly what is happening here. Provider A sees a /24 from its peer (condition 1), a /23 from its customer (condition 2) and the Transit router that forwards the packet to Provider A cannot see the /24 (condition 3). The result of these conditions is that the packet is being forwarded from AS to AS based on a combination of the more specific route and the less specific route. To quote directly from Francois’ paper:

The scoping being performed on a more specific prefix might no longer let routing information for the specific prefix be spread to all ASes of the routing system. In such cases, some ASes will route traffic falling to the range of the more specific prefix, p, according to the routing information obtained for the larger range covering it, P.
Exploiting BGP Scoping Services to Violate
Internet Transit Policies – Pierre Francois

So what options does Provider A have? How can it ensure that traffic isn’t dropped, but at the same time, make sure it can’t be abused into providing free transit for off-net traffic? Well there’s no easy answer but there are several solutions that I’ll consider:

  • Blocking the more specific route from the peer
  • Asking Xellent IT Ltd to advertise the more specific
  • Allowing the transit traffic, but with some conditions

I’ll try to argue that allowing the transit traffic, but only as an exception, is the best course of action. But before that, let’s look at the first two options.

Let’s say Provider A applies an inbound route-map on its peering with Provider B (and all other peers and transits for that matter) to block any advertised prefixes that come from its own customer cone (basically, stopping its own prefixes being advertised towards itself from a non-customer). So Provider A would see Provider B advertising 1.1.1.0/24, recognise it as part of Xellent IT’s supernet and block it.

This would certainly solve the problem of attempting to forward the traffic out of the Internet Exchange. Unfortunately, there are two crushing flaws with this approach.

Firstly, it undermines the intended traffic engineering employed by ACME and comes with all the inherent problems that asymmetric routing holds. For example, traffic ingressing back into ACME via Xellent IT could get dropped by a session-based firewall that it didn’t go through on its way out. Asymmetric routing is a perfect example of the problems that can result from some ASes forwarding on the more specific route and others forwarding on the less specific route.

Second, consider what happens if the link to Xellent IT goes down, or if Xellent IT stops advertising the /23. Suddenly Provider A has no route to the /24 network. Provider A is, in essence, relying on a customer to access part of the internet (this is of course assuming Provider A is not relying on any default routing). This would not only undermine ACME’s dual homing, but would also stop Provider A’s other customers reaching ACME’s services.

Blog7_image5_block_24

Clearly, forwarding the traffic based on the less specific by blocking the more specific from the peer doesn’t solve anything. It might get through Provider A, but traffic is still being forwarded on a combination of prefix lengths, and Provider A could end up denying traffic from its other customers reaching a part of the internet. Not a good look for an internet provider.

What about asking Xellent IT to advertise the more specific? Provider A could then simply prefer the /24 from Xellent IT using local preference. This approach has problems too. ACME isn’t actually advertising the /24 to Xellent IT. Xellent IT would need to ask ACME to do so, however they may not wish to impose such a restriction on their customer. The question then becomes, does Provider A have the right to make such a request? They certainly can’t enforce it.

There is perhaps a legal argument to be made that by not advertising the more specific, Provider A is losing revenue. This will be illustrated when we look at the third option of allowing off-net traffic. I won’t broach the topic of whether or not Provider A could approach Xellent IT and ask for advertisement of the more specific due to revenue loss, but it is certainly food for thought. For now though, asking Xellent IT to advertise the more specific is perhaps not the preferred approach.

Let’s turn to the third option, which sees Provider A adjust its border policies by adding to its BCP38 ACL. Not only should this ACL permit traffic with source addresses from its customer cone, it should also permit traffic that is destined to prefixes in its customer cone. The idea looks like this:

Blog7_image6_allow_offnet

Now this might look OK. Off-net transit traffic to random public addresses (outside of Provider A’s customer cone) is still blocked, and ACME’s traffic isn’t. But this special case of off-net transit opens the door for abuse in a way that could cause Provider A to lose money.

Here’s how it works. For the sake of this explanation, I’ve removed Xellent IT and made ACME a direct customer of Provider A. I’ve also introduced a third service provider.

Blog7_image7_abuse_potential
  • ACME dual homes itself by buying transit from Providers A and B. Provider A happens to charge more.
  • ACME advertises its /23 PI space to Provider A
  • Its /24 is then advertised to Provider B, with a prefix scoping community that tells Provider B not to advertise the /24 on to any transit providers.
  • As a result of this, Provider C cannot see the more specific /24. Traffic from Provider C traverses Provider A, then Provider B before arriving at ACME.
Blog7_image7_abuse_potential_2

As we’ve already discussed, this violates BCP38 principles and turns Provider A into free transit for off-net traffic. But of perhaps greater importance is the loss of revenue that Provider A experiences. No one is paying for the increased traffic volume across Provider A’s core and Provider A gains no revenue from the increase – since it only crosses free peering boundaries. Provider B benefits as it sees more chargeable bandwidth used on its downstream link to ACME. ACME benefits since it can use the cheaper connection and utilize Provider A’s peering and transit relationships for free. If ACME had a remote site connecting to Provider C, GRE tunnels across Provider A’s core could further complicate things.

If ACME was clever enough and used looking glasses and other tools to discover the forwarding path, then there clearly is potential for abuse.

Having said all of that, I would argue that if this is done on a case-by-case basis, in a reactive way, it would be an acceptable solution.

For example, in this scenario, as long as traffic flows don’t reach too high a volume (something that can be monitored using something like NetFlow) and only this single subnet is permitted, then for the sake of maintaining network reachability, this is a reasonable exception. It is not likely that ACME is being deliberately malicious, and as long as this exception is monitored, the revenue loss would be minuscule and allowing a one-off policy violation would seem to be acceptable.

Rather than try and account for these scenarios beforehand, the goal would be to add exceptions and monitor them as they crop up. There are a number of ways to detect when these policy violations occur. In this case, the phone call and traceroute from Provider B is a good way to spot the problem. Regrettably that does require something to go wrong for it to be found and fixed (meaning a disrupted service for the customer). There are ways to detect these violations a priori, but I won’t detail them here. Francois’ paper presents the option of using an open-source IP accounting tool like pmacct, which is worth reading about.

If off-net transit traffic levels increase, or more policy violations start to appear, more aggressive tactics might need to be looked at. Though for this particular quirk, allowing the transit traffic as an exception and monitoring its throughput seems to me to be a prudent approach.

Because I’ve spoken about this at a very high level, I won’t include a work section with CLI output. I could show an ACL permitting 1.1.1.0/24 outbound but this quirk doesn’t need that level of detail to understand the concepts.

So that’s it! A really fascinating conundrum that is as interesting to figure out as it is to troubleshoot. I’d love to hear if anyone has any thoughts or possible alternatives. I toyed with the idea of using static routing at the PE facing the customer, or assigning a community to routes received from peering that are in your customer cone and reacting to that somehow, but both those ideas ran into similar problems to the ones I’ve outlined above. Let me know if you have any other ideas. Thanks for reading.

From MPLS L3VPN to PBB-EVPN

This blog introduces PBB-EVPN over an MPLS network. But rather than just describe the technology from scratch, I have tried to structure the explanation assuming the reader is familiar with plain old MPLS L3VPN and is new to PBB and/or EVPN. This was certainly the case with me when I first studied this topic and I’m hoping others in a similar position will find this approach insightful.

I won’t be exploring a specific quirk or scenario – rather I will look at EVPN followed by PBB, giving analogies and comparisons to MPLS L3VPN as I go, before combining them into PBB-EVPN. I will focus on how traffic is identified, learned and forwarded in each section.

So what is PBB-EVPN? Well, besides being hard to say 3 times fast, it is essentially an L2VPN technology. It enables a Layer 2 bridge domain to be stretched across a Service Provider core while utilizing MAC aggregation to deal with scaling issues.

Let’s look at EVPN first.

EVPN

EVPN, or Ethernet VPN, over an MPLS network works on a similar principle to MPLS L3VPN. The best way to conceptualize the difference is to draw an analogy (colour coded to highlight points of comparison)…

MPLS L3VPN assigns PE interfaces to VRFs. It then uses MP-BGP (with the vpnv4 unicast address family) to advertise customer IP Subnets as VPNv4 routes to Route Reflectors or other PEs. Remote PEs that have a VRF configured to import the correct route targets, accept the MP-BGP update and install an ipv4 route into the routing table for that VRF.

EVPN uses PE interfaces linked to bridge-domains with an EVI. It then uses MP-BGP (with the l2vpn evpn address family) to advertise customer MAC addresses as EVPN routes to Route Reflectors or other PEs. Remote PEs that have an EVI configured to import the correct route target, accept the MP-BGP update and install a MAC address into the bridge domain for that EVI.

This analogy is a little crude, but in both cases packets or frames destined for a given subnet or MAC will be imposed with two labels – an inner VPN label and an outer Transport label. The Transport label is typically communicated via something like LDP and will correspond to the next-hop loopback of the egress PE. The VPN label is communicated in the MP-BGP updates.

These diagrams illustrate the comparison:

Blog6_image1a_and_b

In EVPN, customer devices tend to be switches rather than routers. PE-CE routing protocols, like eBGP, aren’t used since it operates over layer 2. The Service Provider appears as one big switch. In this sense, it accomplishes the same as VPLS but (among other differences) uses BGP to distribute MAC address information, rather than using a full mesh of pseudowires.

EVPN uses an EVI, or EVPN Instance, to identify a specific instance of EVPN as it maps to a bridge domain. For the purposes of this overview, you can think of an EVI as being quasi-equivalent to a VRF. A customer facing interface will be put into a bridge domain (a layer 2 broadcast domain), which will have an EVI identifier associated with it.

The MAC address learning that EVPN utilizes is what is called control-plane learning, since it is BGP (a control-plane routing protocol) that distributes the MAC address information. This is in contrast to data-plane learning, which is how a standard switch learns MAC addresses – by associating the source MAC address of a frame with the receiving interface.

The following Cisco IOS-XR config shows an EVPN bridge domain and edge interface setup, side by side with a MPLS L3VPN setup for comparison:

Blog6_output1a_and_b

NB. For the MPLS L3VPN config, the RD config (which is usually configured under the BGP VRF config alongside the CE-PE eBGP session) is not shown. PBB config is shown in the EVPN bridge domain; this will be explained further into the blog.

EVPN seems simple enough at first glance, but it has a scaling problem, which PBB can ultimately help with…

Any given customer site can have hundreds or even thousands of MAC addresses, as opposed to just one subnet (as in an MPLS L3VPN environment). The number of updates and withdrawals that BGP would have to send could be overwhelming if it needed to make adjustments for MAC addresses appearing and disappearing – not to mention the memory requirements. And you can’t summarise MAC addresses like you can IP ranges. It would be like an MPLS L3VPN environment advertising /32 prefixes for every host rather than just one prefix for the subnet. We need a way to summarise or aggregate the MAC addresses.

Here’s where PBB comes in…

PBB – Provider Backbone Bridging (802.1ah)

PBB can help solve the EVPN scaling issue by performing one key function – it maps each customer MAC address to the MAC address of the attaching PE. Customer MAC addresses are called C-MACs. The PE MAC addresses are called B-MACs (or Backbone MACs).

This works by adding an extra layer 2 header to the frame as it is forwarded from one site to another across the provider core. The outer layer 2 header has a destination B-MAC address of the PE device that the inner frame’s destination C-MAC is associated with. As a result, PBB is often called MAC-in-MAC. This diagram illustrates the concept:

Blog6_image2_pbb

NB. In PBB terminology the provider devices are called Bridges. So a BEB (Backbone Edge Bridge) is a PE and a BCB (Backbone Core Bridge) is a P. For the sake of simplicity, I will continue to use PE/P terminology. Also worth noting is that PBB diagrams often show service provider devices as switches, to illustrate the layer 2 nature of the technology – which I’ve done above.

In the above diagram the SID (or Service ID) represents a layer 2 broadcast domain similar to what an EVI represents in EVPN.

Frames arriving on a PE interface will be inspected and, based on certain characteristics, mapped or assigned to a particular Service ID (SID).

The characteristics that determine what SID a frame belongs to can be a number of things:

  • The customer assigned VLAN
  • The Service Provider assigned VLAN
  • Existing SID identifiers
  • The interface it arrives on
  • A combination of the above or other factors

To draw an analogy to MPLS L3VPN – the VRF that an incoming packet is assigned to is determined by whatever VRF is configured on the receiving interface (using ip vrf forwarding CUST_1 in Cisco IOS interface CLI).

Once the SID has been allocated, the entire frame is then encapsulated in the outer layer 2 header with destination MAC of the egress PE.

In this way C-MACs are mapped to either B-MACs or local attachment circuits. Most importantly however the core P routers do not need to learn all of the MAC addresses of the customers. They only deal with the MAC addresses of the PEs. This allows a PE to aggregate all of the attached C-MACs for a given customer behind its own B-MAC.

But how does a remote PE learn which C-MAC maps to which B-MAC?

In PBB learning is done in the data-plane, much like a regular layer 2 switch. When a PE receives a frame from the PBB core, it will strip off the outer layer 2 header and make a note of the source B-MAC (the ingress PE). It will map this source B-MAC to the source C-MAC found on the inner layer 2 header. When a frame arrives on a local attachment circuit, the PE will map the source C-MAC to the attachment circuit in the usual way.

PBB must deal with BUM traffic too. BUM traffic is Broadcast, Unknown unicast or Multicast traffic. An example of BUM traffic is the arrival of a frame for which the destination MAC address is unknown. Rather than broadcast like a regular layer 2 switch would, a PBB PE will set the destination MAC address of the outer layer 2 header to a special multicast MAC address that is built based on the SID and includes all the egress PEs that are part of the same bridge domain. EVPN uses a different method of handling BUM traffic but I will go into that later in the blog.

Overall, PBB is more complicated than the explanation given here, but this is the general principle (if you’re interested, see this doc that details how PBB can be combined with 802.1ad to add an aggregation layer to a provider network).

Now that we have the MAC-in-MAC features of PBB at our disposal, we can use it to solve the EVPN scaling problem and combine the two…

PBB-EVPN

With the help of PBB, EVPN can be adapted so that it deals with only the B-MACs.

To accomplish this, each EVPN EVI is linked to two bridge domains. One bridge domain is dedicated to customer MAC addresses and connected to the local attachment circuits. The other is dedicated to the PE routers’ B-MAC addresses. Both of these bridge domains are combined under the same bridge group.

Blog6_image3_bridge_domains
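On IOS-XR the stitching of the two bridge domains looks roughly like the sketch below (from memory – the group/domain names, interface, EVI and I-SID values are all illustrative, and the BGP route-target configuration under the global evpn section is not shown; check the platform’s PBB-EVPN configuration guide before using it):

l2vpn
 bridge group ACME
  bridge-domain ACME-EDGE
   interface GigabitEthernet0/0/0/1.100
   !
   pbb edge i-sid 10100 core-bridge ACME-CORE
  !
  bridge-domain ACME-CORE
   pbb core
    evpn evi 100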

The PE devices will use data-plane learning to build a MAC database, mapping each C-MAC to either an attachment circuit or the B-MAC of an egress PE. Source C-MAC addresses are learned and associated as traffic flows through the network, just like PBB does.

The overall setup would look like this:

Blog6_image4_pbb_evpn_overview

The only thing EVPN needs to concern itself with is advertising the B-MACs of the PE devices. EVPN uses control-plane learning and includes the B-MACs in the MP-BGP l2vpn evpn updates. For example, if you were to look at the MAC addresses known to a particular EVI on a route reflector, you would only see MAC addresses for PE routers.

Looking again at the configuration output that we saw above, we can get a better idea of how PBB-EVPN works:

Blog6_output2_pbb_evpn_detail

NB. I have added the concept of a BVI, or Bridged Virtual Interface, to the above output. This can be used to provide a layer 3 breakout or gateway similar to how an SVI works on a L3 switch.

You can view the MAC addresses information using the following command:

Blog6_output3_macs

Now let’s look at how PBB-EVPN handles BUM traffic. Unlike PBB on its own, which just sends to a multicast MAC address, PBB-EVPN will use unicast replication and send copies of the frame to all of the remote PEs that are in the same EVI. This is an EVPN method, and the PE knows which remote PEs belong to the same EVI by looking in what is called a flood list.

But how does it build this flood list? To learn that, we need to look at EVPN route-types…

MPLS L3VPN sends VPNv4 routes in its updates. But EVPN sends more than one “type” of update. The type of update, or route-type as it is called, denotes what kind of information is carried in the update. The route-type is part of the EVPN NLRI.

For the purposes of this blog we will only look at two route-types.

  • Route-Type 2s, which carry MAC addresses (analogous to VPNv4 updates)
  • Route-Type 3s, which carry information on the egress PEs that belong to an EVI.

It is these Route-Type 3s (or RT-3s for short) that are used to build the flood list.

When BUM traffic is received by a PE, it will send copies of the frame to all of its attachment circuits (except the one it received the frame on) and all of the PEs for which it has received a Route-Type 3 update. In other words, it will send to everything in its flood-list.

So the overall process for a BUM packet being forwarded across a PBB-EVPN backbone will look as follows:

Blog6_image5_bum_traffic

So that’s it, in a nutshell. In this way PBB and EVPN can work together to create an L2VPN network across a Service Provider.

There are other aspects of both PBB and EVPN, such as EVPN multi-homing using Ethernet Segment Identifiers or PBB MAC clearing with MIRP to name just a couple, but the purpose of this blog was to provide an introductory overview – specifically for those used to dealing with MPLS L3VPN. Thoughts are welcome, and as always, thank you for reading.