00:00 Intro and music
00:36 Welcome
01:20 Why IPv4 conservation
04:00 Big flat network
06:46 Layer 3 everywhere
08:50 Layer 2 extension
09:33 VPLS
10:55 VXLAN
12:18 EVPN with MPLS or VXLAN
14:22 BGP Anycast Gateway
17:30 Outro and music
Or: how to deliver public IPv4 addressing without breaking the bank or your routing tables.
Years passed, technology became cheaper and more powerful, and the Internet grew bigger and bigger. The cloud emerged, and we saw the rise and fall of many service providers around the globe. Now your CPE had a public IP address, maybe your printer too; someone thought it was a good idea for your fridge to have a public address too, and maybe your home cameras, your microwave, your mobile, your cat, your toilet, and so on.
The network became so big that we saw an enormous increase in the number of service providers and AS numbers. Everyone needed public addresses: maybe a /18, or a /20. Some poor bastards were only delivered a /22.
At this point, people started talking about IPv4 exhaustion.
Maybe you had a small network, emailed ARIN/LACNIC asking for moar addresses, and they told you no.
Maybe you asked your upstream for a /26 in your DIA, and they told you no.
Maybe your customers asked for public addressing, and you realized that it was just wasteful to assign a /30 to anyone. Who can afford such a waste!?
IP address brokers are making a shitload of money out of this. Small operators need more addresses in order to achieve sustainable growth. Most RIRs will just tell you “Dude, there is nothing left!”
In this post we are not considering standard routed customers, where we can provision a /30 or similar on the PE equipment and let a routing protocol do its thing.
The question is: how do you provision an interface to your residential customers so that they can have a routable public IP address on their CPE, while keeping separate broadcast domains or VLANs for customer blocks, and without wasting addresses in the process?
You probably have a router somewhere, with an IP address which serves as the default gateway for the entire segment. Maybe this router also serves DHCP, or acts as a PPPoE server, or uses some other IP address provisioning method. How do you achieve an efficient IP addressing schema, efficient route aggregation, and efficient layer 2 segmentation?
Yeah, it would be easy to take a /21 of public addresses and put them all on a single VLAN. We’ll cover that option in upcoming articles.
Many roads to the same place
For us small and medium operators, the typical efforts in IP address saving involve some sort of layer 2 extension, or subnetting into smaller blocks. Let’s look at some of these alternatives.
Big subnet, VLAN your way out, single access router
Simple enough. Put an entire /24 on a single VLAN; you will lose 3 addresses: network, broadcast, and default gateway. Extend your customer VLAN over all the required switches in between.
The good part about this is that you will make efficient usage of your addresses by only losing 3 out of 256, which I guess is a decent tradeoff.
Of course, this is an administration nightmare. A huge broadcast domain, VLANs that sprawl over multiple switches in several locations, a strong requirement for DHCP snooping to prevent rogue servers, and a big STP tree to take care of.
Small subnets, VLANs everywhere, multiple access routers
The obvious alternative is the exact opposite of the one we just saw. Let’s take a /24, split it into decent-sized chunks, and put each into its own VLAN, terminated on a separate access router.
This approach allows us to segment the huge broadcast domain into several smaller VLANs, keeping broadcast traffic isolated in its own domain. You can also run MST (or maybe PVST) on top of it, to isolate loops into single instances of STP, instead of having one big spanning tree covering everything.
Even if this looks better, there is an obvious tradeoff. For every subnet, we still lose 3 addresses.
3 in 256 is not much. But if you split that /24 into, say, /26s, you will lose 3 addresses (network, broadcast, and gateway) in every subnet. 3 addresses for each of the 4 /26 subnets is 12 wasted addresses, and 12 in 256 is not a minor thing, especially once you remember that you pay your RIR a fee for every address, every year.
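The arithmetic above is trivial, but it scales up fast; a minimal Python sketch makes the pattern explicit (nothing network-specific is assumed here, just the 3-addresses-per-subnet overhead):

```python
# Addresses lost to network, broadcast, and gateway when a /24 block
# is split into smaller equal-size subnets: 3 per subnet.
def wasted_addresses(subnet_prefix: int, block_prefix: int = 24) -> int:
    subnets = 2 ** (subnet_prefix - block_prefix)  # e.g. a /24 holds 4 /26s
    return 3 * subnets

print(wasted_addresses(24))  # one /24   -> 3 wasted
print(wasted_addresses(26))  # four /26s -> 12 wasted
print(wasted_addresses(28))  # 16 /28s   -> 48 wasted, almost a fifth of the block
```

The deeper you subnet, the worse the overhead gets, which is exactly the tradeoff the rest of this post tries to escape.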
Big subnet, VPLS your way out, single access router
This is a similar approach to the first one, where we used one big layer 2 domain. We can build this layer 2 segment over VPLS tunnels, which extend layer 2 using an MPLS overlay.
The IP addressing here can be the same as in the first scenario: a big /24 with a default gateway, thus losing 3 addresses out of 256.
This is a commonly used alternative and it works, although I am not a fan of it. On this topology we can run everything over single (and different) transit VLANs between routers, but there are some requirements.
You will need layer 3 capable CPEs at every customer. Those CPEs will need to handle extended MTUs, run MPLS/VPLS, and run a routing protocol to advertise their loopbacks to the aggregation router, in order to terminate VPLS tunnels. And of course, since we are building a big layer 2 domain, you still have to consider DHCP snooping and possible loops.
An alternative without reinventing the wheel
The main focus is to avoid wasting IP addresses. Readers of this humble blog are medium-sized operators, and every penny spent on IP addressing can make a huge difference in the long term. Hopefully, proving that we are capable of making smart use of our addresses will help us present a business case to ARIN/LACNIC/whoever, to successfully request and obtain additional blocks of public addresses.
MPLS/VPLS solutions can work on top of this, but many operators do not have gear capable of talking such protocols, either because they are offloading their access layer into feature-limited software solutions, or because their hardware needs to be licensed to be MPLS capable. Also, many operators don’t have skilled MPLS engineers to design and support such network topologies.
Is there a way we can accomplish this in an easier way?
Anycast to the rescue
Anycast is a network addressing and routing methodology in which a single destination IP address is shared by devices in multiple locations. Anycast usually comes to mind when we think about CDNs, DNS servers, and any destination that has to be present on multiple locations at the same time, on a same IP address.
For our purposes, our destination will be our default gateway.
An anycast IP address is not tied at all to the kind of service present on the upper layers, so we can easily provide network services over an anycast address.
Surely you are familiar with Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8, which are anycast DNS services. The 8.8.8.8 I ping from home is probably not the same 8.8.8.8 you can ping from your end.
Think twice about this. Anycast means we will have duplicate IP addresses in our network, by design. This is not VRRP or any kind of HSRP where you have active/passive addresses. This is, in fact, the same IP address on many devices, on active interfaces.
Our new topology
In this scenario, we will share a default gateway across multiple devices.
The 10.10.10.254/24 address will be present and active on two routers, which face different layer 2 segments on the inside. For this example, I’m using a PE simulating DIA services on one side, and a BNG or PPPoE concentrator, to simulate residential services like FTTH or unlicensed wireless access.
Our objective here is to be able to hand off /24 addressing to customers, so they can use 10.10.10.254 as their default gateway.
This address won’t be installed in any routing protocol. Instead, we will advertise /32 (or larger summary) v4 prefixes over BGP, install them on the core router, and advertise a single /24 summary from the core to the rest of the network. This core will, in fact, be an aggregator too.
By doing this, the entire /24 can be subnetted into /32 advertisements for single hosts, or into any other subnet like a /25 or /26 for access subnets, while all the addresses inside those subnets remain usable. We can think of them as pools of addresses instead of subnets.
For example, 10.10.10.0-10.10.10.63 is the first /26, but the customers inside will use a /24 subnet mask, to be able to reach 10.10.10.254 on the AGG/BNG.
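On the core/aggregator side, generating that single /24 summary could look roughly like the following IOS sketch (an assumption on my part, not shown later in the lab config; `summary-only` suppresses the component /32s so only the aggregate is advertised onward):

```
router bgp 65000
 ! advertise the covering /24, suppress the more-specific /32s
 aggregate-address 10.10.10.0 255.255.255.0 summary-only
```

The aggregate is only generated while at least one component route (any of the /32s learned from the AGG/BNG) is present in the BGP table.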
I will illustrate examples on both IOS and MikroTik platforms in the following steps.
GNS3 Topology
Our lab topology looks as follows. CORE, AGG and PE are all Cisco CSR1000v 16.12.03, and the BNG is a MikroTik CHR 6.48.6.
Core router
This will act as the BGP RR for AS 65000. For the sake of simplicity, all interfaces are in OSPF area 0 to redistribute loopbacks.
interface Loopback0
ip address 10.1.1.1 255.255.255.255
!
interface GigabitEthernet1
ip address 10.255.255.5 255.255.255.252
negotiation auto
no mop enabled
no mop sysid
!
interface GigabitEthernet2
ip address 10.255.255.1 255.255.255.252
negotiation auto
no mop enabled
no mop sysid
!
router ospf 1
router-id 10.1.1.1
passive-interface default
no passive-interface GigabitEthernet1
no passive-interface GigabitEthernet2
network 10.1.1.1 0.0.0.0 area 0
network 10.255.255.0 0.0.0.3 area 0
network 10.255.255.4 0.0.0.3 area 0
!
router bgp 65000
bgp router-id 10.1.1.1
bgp log-neighbor-changes
neighbor 10.10.1.2 remote-as 65000
neighbor 10.10.1.2 update-source Loopback0
neighbor 10.10.1.3 remote-as 65000
neighbor 10.10.1.3 update-source Loopback0
!
BNG Aggregation Router
This node will act as a BGP RR client for AS 65000. As with the core, we are peering between loopbacks. I highlighted the default gateway 10.10.10.254 on ether2, which faces the VPCS host behind it.
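In RouterOS 6.x terms, the BNG side could look roughly like this sketch. The addressing comes from the lab, but the interface names (ether1/ether2) and the bridge-as-loopback trick are my assumptions:

```
# loopback-style interface for iBGP peering
/interface bridge add name=lo0
/ip address add address=10.10.1.3/32 interface=lo0
# uplink toward the CORE
/ip address add address=10.255.255.2/30 interface=ether1
# anycast default gateway facing the access segment
/ip address add address=10.10.10.254/24 interface=ether2
# OSPF area 0 on the uplink, plus the loopback as a host route
/routing ospf instance set default router-id=10.10.1.3
/routing ospf network add network=10.255.255.0/30 area=backbone
/routing ospf network add network=10.10.1.3/32 area=backbone
# iBGP to the CORE RR, sourced from the loopback
/routing bgp instance set default as=65000 router-id=10.10.1.3
/routing bgp peer add name=CORE remote-address=10.1.1.1 remote-as=65000 \
    update-source=lo0
```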
At this point, we should have full OSPF neighbor relationships, successful loopback redistribution, and established peerings between the CORE and the BNG.
CORE#sh ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface
10.10.1.3 1 FULL/BDR 00:00:34 10.255.255.2 GigabitEthernet2
10.10.1.2 1 FULL/BDR 00:00:37 10.255.255.6 GigabitEthernet1
CORE#sh ip bgp summary
BGP router identifier 10.1.1.1, local AS number 65000
BGP table version is 2, main routing table version 2
1 network entries using 248 bytes of memory
1 path entries using 136 bytes of memory
1/1 BGP path/bestpath attribute entries using 288 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 672 total bytes of memory
BGP activity 1/0 prefixes, 1/0 paths, scan interval 60 secs
1 networks peaked at 23:11:52 Aug 20 2022 UTC (00:09:10.360 ago)
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.10.1.2 4 65000 107 108 2 0 0 01:33:52 0
10.10.1.3 4 65000 100 96 2 0 0 01:25:23 1
CORE#sh ip ro
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, m - OMP
n - NAT, Ni - NAT inside, No - NAT outside, Nd - NAT DIA
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
H - NHRP, G - NHRP registered, g - NHRP registration summary
o - ODR, P - periodic downloaded static route, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is not set
10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
C 10.1.1.1/32 is directly connected, Loopback0
O 10.10.1.2/32 [110/2] via 10.255.255.6, 01:34:00, GigabitEthernet1
O 10.10.1.3/32 [110/11] via 10.255.255.2, 01:25:28, GigabitEthernet2
B 10.10.10.2/32 [200/0] via 10.10.1.3, 00:09:13
C 10.255.255.0/30 is directly connected, GigabitEthernet2
L 10.255.255.1/32 is directly connected, GigabitEthernet2
C 10.255.255.4/30 is directly connected, GigabitEthernet1
L 10.255.255.5/32 is directly connected, GigabitEthernet1
BNG Route Advertisements
We will handle advertisements with static routes, using the interface as the gateway for the desired hosts. Other methods are valid, such as redistributing connected routes in the case of PPPoE interface sessions.
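On the BNG (RouterOS 6.x) that static-route approach could be sketched like this; the host address and ether2 come from the lab, the rest is assumed:

```
# host route for the customer, pointing at the access-facing interface
/ip route add dst-address=10.10.10.2/32 gateway=ether2
# push statics into BGP so the CORE learns the /32
/routing bgp instance set default redistribute-static=yes
```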
On the CORE side, this static route advertisement will look like this.
CORE#sh ip bgp
BGP table version is 4, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path, L long-lived-stale,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*>i 10.10.10.2/32 10.10.1.3 100 0 ?
PC2 VPCS Config
I’ll assign 10.10.10.2/24 to this host.
PC2> show ip
NAME : PC2[1]
IP/MASK : 10.10.10.2/24
GATEWAY : 10.10.10.254
DNS :
MAC : 00:50:79:66:68:01
LPORT : 20088
RHOST:PORT : 127.0.0.1:20089
MTU : 1500
Do we have successful routing to the outside at this point? Let’s run a traceroute.
PC2> trace 10.1.1.1 -P 1
trace to 10.1.1.1, 8 hops max (ICMP), press Ctrl+C to stop
1 10.10.10.254 0.941 ms 0.750 ms 0.828 ms
2 10.1.1.1 2.662 ms 1.719 ms 1.811 ms
Awesome! At this point we have built a successfully routed network – although this is nothing out of the ordinary.
Reusing default gateways
The VPCS host used 10.10.10.2/24 on a single layer 2 segment. Let’s consider the PE scenario, where we will assign 10.10.10.1/24 to another host, behind a PE, behind a totally different aggregation router. We want to keep 10.10.10.254/24 as the default gateway here.
AGG Config
The AGG router config is almost the same as the BNG’s. We are adding vlan77 to manage the CPE behind the AGG.
interface Loopback0
ip address 10.10.1.2 255.255.255.255
!
interface GigabitEthernet1
ip address 10.255.255.6 255.255.255.252
negotiation auto
!
router ospf 1
router-id 10.10.1.2
passive-interface default
no passive-interface GigabitEthernet1
network 10.10.1.2 0.0.0.0 area 0
network 10.255.255.4 0.0.0.3 area 0
!
router bgp 65000
bgp router-id 10.10.1.2
bgp log-neighbor-changes
neighbor 10.1.1.1 remote-as 65000
neighbor 10.1.1.1 update-source Loopback0
!
PE Config
PE configuration is dead simple: an address on vlan16 for management, and a bridge domain joining Gi1 and Gi4.
bridge-domain 1
member GigabitEthernet1 service-instance 1
member GigabitEthernet4 service-instance 1
!
interface GigabitEthernet1
no ip address
service instance 1 ethernet
encapsulation untagged
!
!
interface GigabitEthernet1.16
encapsulation dot1Q 16
ip address 172.16.100.100 255.255.0.0
!
interface GigabitEthernet4
no ip address
service instance 1 ethernet
encapsulation untagged
!
!
AGG BGP Advertisements
Here we are doing the same as we did on the BNG: adding a static route for the destination host, and redistributing statics under BGP.
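In IOS terms this could be something like the following sketch. The PE-facing interface name GigabitEthernet2 is an assumption, since it isn’t shown in the AGG config above:

```
! host route for the customer behind the PE, out the PE-facing interface
ip route 10.10.10.1 255.255.255.255 GigabitEthernet2
!
router bgp 65000
 ! redistribute statics -- these show up with origin "?" in the BGP table
 redistribute static
```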
At this point, we should see successful routing from the AGG to the CORE.
CORE#sh ip os neighbor
Neighbor ID Pri State Dead Time Address Interface
10.10.1.3 1 FULL/BDR 00:00:33 10.255.255.2 GigabitEthernet2
10.10.1.2 1 FULL/BDR 00:00:37 10.255.255.6 GigabitEthernet1
CORE#sh ip bgp summary
BGP router identifier 10.1.1.1, local AS number 65000
BGP table version is 5, main routing table version 5
2 network entries using 496 bytes of memory
2 path entries using 272 bytes of memory
2/2 BGP path/bestpath attribute entries using 576 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1344 total bytes of memory
BGP activity 2/0 prefixes, 2/0 paths, scan interval 60 secs
2 networks peaked at 19:14:11 Aug 21 2022 UTC (00:05:04.304 ago)
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.10.1.2 4 65000 1427 1425 5 0 0 21:32:05 1
10.10.1.3 4 65000 1465 1416 5 0 0 21:23:36 1
CORE#sh ip bgp neighbors 10.10.1.2 routes
BGP table version is 5, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path, L long-lived-stale,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*>i 10.10.10.1/32 10.10.1.2 0 100 0 ?
Total number of prefixes 1
CORE#sh ip ro
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, m - OMP
n - NAT, Ni - NAT inside, No - NAT outside, Nd - NAT DIA
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
H - NHRP, G - NHRP registered, g - NHRP registration summary
o - ODR, P - periodic downloaded static route, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is not set
10.0.0.0/8 is variably subnetted, 9 subnets, 2 masks
C 10.1.1.1/32 is directly connected, Loopback0
O 10.10.1.2/32 [110/2] via 10.255.255.6, 21:32:25, GigabitEthernet1
O 10.10.1.3/32 [110/11] via 10.255.255.2, 19:55:51, GigabitEthernet2
B 10.10.10.1/32 [200/0] via 10.10.1.2, 00:05:19
B 10.10.10.2/32 [200/0] via 10.10.1.3, 19:55:46
C 10.255.255.0/30 is directly connected, GigabitEthernet2
L 10.255.255.1/32 is directly connected, GigabitEthernet2
C 10.255.255.4/30 is directly connected, GigabitEthernet1
L 10.255.255.5/32 is directly connected, GigabitEthernet1
Ok, routes are there, how about reachability?
CORE#traceroute 10.10.10.1 source lo0
Type escape sequence to abort.
Tracing the route to 10.10.10.1
VRF info: (vrf in name/id, vrf out name/id)
1 10.255.255.6 4 msec 2 msec 1 msec
2 10.10.10.1 14 msec 4 msec 3 msec
CORE#
Looks good, and from the end host?
PC1> trace 10.1.1.1 -P 1
trace to 10.1.1.1, 8 hops max (ICMP), press Ctrl+C to stop
1 10.10.10.254 2.303 ms 1.750 ms 1.438 ms
2 10.1.1.1 2.495 ms 2.000 ms 1.838 ms
The first hop in the path is the same as before, 10.10.10.254. We are reusing the same address on different routers, while keeping routing intact.
Stay tuned for the upcoming post, where the BNG will act as a proper PPPoE termination point, handling all of this dynamically without operator intervention.