In other words,
PE / CE will essentially be the same equipment.
CE will be housed in its own VRF, and I would like that VRF to communicate with another VRF for the same customer in a different DC.
I am currently route-leaking all subnets into the main routing table and then route-leaking the other direction at the other data center.
As you can tell, this is…not scalable, and a PITA management-wise.
I thought I could use MPLS between my cores and get around any subnet overlap, etc., but now I’m realizing I might have a problem?
“PE / CE will essentially be the same equipment.”
“CE will be housed in its own VRF.”
These statements are throwing me a bit. Does the customer need configuration access to the VRF? If not, then it sounds like you’re talking about some sort of wires-only connection? Customer kit connects straight into a VRF at both ends of a DC.
MPLS and L3VPNs are absolutely the correct way to solve this. The reason you’ll likely want to use MPLS is that L3VPNs use an MPLS label anyway (often called a VPN label), so you need a way to transport that label across your network. You can either use GRE (don’t do this) or just run MPLS for transport too. If it’s just 2 PEs, then LDP will be totally fine.
With L3VPN, you only configure the PE device at each end and BGP + MPLS takes care of the rest.
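As a rough Junos sketch of the transport piece (interface names and addresses are placeholders, not from the OP’s setup):

```
# Enable MPLS forwarding and LDP on the core-facing link between the two PEs.
interfaces {
    xe-0/0/0 {
        unit 0 {
            family inet {
                address 10.0.0.0/31;   # example point-to-point addressing
            }
            family mpls;               # allow labelled packets on this link
        }
    }
}
protocols {
    mpls {
        interface xe-0/0/0.0;
    }
    ldp {
        interface xe-0/0/0.0;          # LDP distributes the transport labels
    }
}
```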
Happy to go into more detail or help you “PoC” it in EVE-NG or something. I work for an SP, so L3VPNs are my bread and butter; it’ll take me a few mins to configure something like this.
I’m a Juniper guy though, so you’ll have to translate into IOS/XR/NXOS/whatever you’re using.
EDIT: Being more specific regarding “VRF access”
Why two VRFs? Put this customer into an L3VPN (the same VRF) at all your DCs. If you’re not speaking BGP to the customer, or even managing the endpoint directly to their subnet, just drop an interface into the same VRF and number it as appropriate. It will work as you expect it would.
Don’t get too hung up on the definitions of PE & CE. These terms really come from service provider land, where there is always a provider-managed box to terminate the MPLS and hand off IP to the customer’s IP-speaking box. In real life, it’s not uncommon for there to ‘not be a CE’ (ie. the PE and CE roles are collapsed) if you are providing managed services to the customer on top of the L3VPN, or using MPLS internally. As long as the label-popped traffic ends up in the correct routing instance on the device for the routing you want, you’re fine.
The Customer Edge does not necessarily participate in the MPLS network; it can be a laptop. In practice, though, you will often find a BGP session between the PE and CE to exchange routes, mostly the ones needed to reach other CEs.
VRFs can be used to manage overlapping IP ranges only for separate customer networks that will not need to communicate with each other.
Having conflicting subnets within the same customer network communicate will require NAT, or changing the addressing scheme to remove the overlap.
MPLS is fancy switching for your core; I don’t think it will help you here.
PE by definition is there to connect multiple customers to your MPLS network. Why would you give someone else access to this device and other customers’ data?
If you’re using it for only one customer, then don’t extend the MPLS to it, and make it an actual CE.
VRFs are for isolating things.
Leaking is a sometimes-useful but messy and dangerous hack. And if you leak all routes then why even have VRFs??
MPLS (with BGP VPNv4/EVPN) is useful if you have multiple VRFs and need to exchange their routes across many devices. One BGP session carries all the VRFs; MPLS takes care of marking each VRF’s traffic on the wire.
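In Junos terms, that single session might look something like this (loopback addresses and the AS are made up for illustration):

```
protocols {
    bgp {
        group IBGP-PE {
            type internal;
            local-address 192.0.2.1;     # this PE's loopback (example)
            family inet-vpn unicast;     # one AFI/SAFI carries every VRF's routes
            neighbor 192.0.2.2;          # the other PE's loopback (example)
        }
    }
}
```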
I did this before. There was no P core, no CE. We just had a mesh of ASRs and ISRs running MPLS and VRFs for traffic segmentation.
That said, if you are using Cisco, I would avoid route-leaking if you can, depending on the model. For example, N7Ks will blow up your TCAM space with host routes in the RIB for each ARP entry. If you export/import into a lot of VRFs, this can grow exponentially.
ETA: I see you mention QFX; Juniper might be different. Engage your SE to find out how leaking is handled.
Sorry.
I have a hard time explaining things. Let me try again.
“CE in its own VRF” just means the IRB interface for customer A will be in its own routing-instance that lives on a QFX.
That QFX is also the equipment that connects our DCs to each other.
From your reply, it sounds like all I need is LDP?
OP hasn’t mentioned having customers with overlapping space interact with each other, unless the post was edited? What they have described is a perfect use case for L3VPN, which uses VRFs. I imagine the idea is to keep overlapping space separate from each customer and from the main routing table. Happy to be corrected.
It’s the same customer. They need to talk to each other.
We are a hosted solution. Sorry that probably makes more sense lol.
In other words, VRF A in DC1 needs to talk to VRF A in DC2.
Are you using virtual-router routing instances, or VRF routing instances? I imagine virtual routers. To do it with VRFs you’ll need iBGP + LDP.
Why IRB? Is it just to do SVIs? Or are you actually routing/bridging between things on the QFX? (Like L3VPN + EVPN, for example)
Ah, that’s how I read that last sentence in the OP.
Why not make it all 1 VRF? Like they need to talk so… they need to be in the same VRF??
My current config is on a different platform and this is my upgrade plan, so I’d like to greenfield.
Yep, SVIs. I am new to Junos, so I didn’t know there was another option for L3 on VLANs? Edit: looking into this, is it the only option to route VLANs?
If I can do this with virtual-routers, that’s perfectly fine, but I imagine I won’t be able to, as that’s just VRF-lite, if you will.
Well, why can’t the port be a routed port with a /31, /30, or /24 on it, rather than ethernet-switching and IRB? So customer-facing ports are routed and placed into a routing instance.
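As a sketch of that idea in Junos (interface name, addressing, and instance name are placeholders; the rest of the VRF config is omitted for brevity):

```
interfaces {
    xe-0/0/10 {
        unit 0 {
            family inet {
                address 203.0.113.0/31;   # routed P2P towards the customer kit (example)
            }
        }
    }
}
routing-instances {
    CUST-A {
        instance-type vrf;                # route-distinguisher / vrf-target still needed
        interface xe-0/0/10.0;            # the routed port lives inside the instance
    }
}
```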
You won’t be able to do L3VPN with virtual-routers; as you mentioned, VRs are basically VRF-lite. A proper VRF will require route targets (to import/export routes in/out of the VRF) and route distinguishers (which put something in front of the route so that overlapping space looks unique to BGP; this prevents BGP from performing path selection on overlapping routes and choosing only one). I also recommend vrf-table-label, which allocates one VPN label for the entire VRF (per next-hop is the Junos default).
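Putting those pieces together, a minimal Junos VRF definition might look like this (names and numbers are examples only):

```
routing-instances {
    CUST-A {
        instance-type vrf;
        interface xe-0/0/10.0;                # customer-facing interface (example)
        route-distinguisher 192.0.2.1:100;    # makes overlapping prefixes unique in BGP
        vrf-target target:65000:100;          # import/export route-target community
        vrf-table-label;                      # one VPN label for the whole VRF
    }
}
```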
You can do a poor man’s L3VPN with “back-to-back VRFs”. This means you’ll do virtual-routers on each PE, then have sub-interfaces on core-facing ports (ports that face other PEs) that run a routing protocol between them to advertise customer routes within each VRF. This really doesn’t scale beyond a few devices (and I really do mean 2-3).
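For contrast, a back-to-back VRF setup might look roughly like this per customer, on each box (again, names and interfaces are placeholders):

```
routing-instances {
    CUST-A {
        instance-type virtual-router;
        interface xe-0/0/10.0;        # customer-facing port
        interface xe-0/0/1.100;       # sub-interface towards the other PE, one per VRF
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface xe-0/0/1.100;   # per-VRF IGP session carries customer routes
                }
            }
        }
    }
}
```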
Doing L3VPN with MPLS also opens up other interesting possibilities for you/your customers. Some might want a layer-2 point-to-point circuit between DCs, and you can do that over MPLS. So your network stays routed, but the customer traffic is transported over the top at layer 2 using the same MPLS network.
On QFX I would say do BGP EVPN / VXLAN.
You need to think out your VRF strategy, I think.
Whether MPLS or VXLAN encap, you need to decide what goes in your underlay, what needs to talk to what, and thus what VRFs are required.
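For the VXLAN flavour, a Junos EVPN type-5 routing instance on QFX looks roughly like this (VNI, route-target, and names are invented for illustration):

```
routing-instances {
    CUST-A {
        instance-type vrf;
        interface irb.100;                     # example L3 interface in the VRF
        route-distinguisher 192.0.2.1:100;
        vrf-target target:65000:100;
        protocols {
            evpn {
                ip-prefix-routes {
                    advertise direct-nexthop;  # originate EVPN type-5 routes
                    encapsulation vxlan;
                    vni 5100;                  # example L3 VNI
                }
            }
        }
    }
}
```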
I thought you meant there was another way to do L3 virtually, haha.
There is no CE equipment that we will hand off to. It’s a hosted platform with virtualization. All traffic is north-south for the customer over the WAN. (It’s mainly phones.)
Sure, but _something_ is plugging into the QFX, right? So why can’t that port have multiple units on it, with layer-3 addressing on each and VLAN tags? Sub-interfaces, in Cisco terminology.
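Something like this on the customer-facing port, in Junos terms (VLAN IDs and addresses are examples only):

```
interfaces {
    xe-0/0/10 {
        vlan-tagging;                      # make this a tagged, routed port
        unit 100 {
            vlan-id 100;
            family inet {
                address 198.51.100.1/24;   # customer A handoff (example)
            }
        }
        unit 200 {
            vlan-id 200;
            family inet {
                address 192.0.2.1/30;      # customer B handoff (example)
            }
        }
    }
}
```

Each unit can then be placed into a different routing instance, keeping the customers separated.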
Ah.
I have that as routed ports, connecting to the other DC.
Other physical ports are just trunks to handoff switches for customers that buy their own DIA, etc.