Re: Linux BNG


From: Ahad Aboss <ahad () swiftelnetworks com>
Date: Mon, 16 Jul 2018 01:31:07 +1000

Hi Baldur,



Based on the information you provided, the CPE connects to the POI via a
different service provider (access network provider / middleman) before it
reaches your network/POP.



With this construct, you are typically responsible for IP allocation and
session authentication, either via DHCP (option 82) with AAA or via RADIUS
for PPPoE. You may also have to deal with the S-TAG and C-TAG at the BNG
level. Here are some options to consider:



*Option 1.*



Use RADIUS for session authentication and IP/DNS allocation to the CPE. You
can configure a BBA-GROUP on the BNG to overcome the 4095-VLAN limitation as
well as handle the S-TAG and C-TAG. A BBA-GROUP can handle multiple sessions
and is a well-supported feature.



Here is an example of the config for your BNG (Cisco router):

===============================================

bba-group pppoe NAME-1
 virtual-template 1
 sessions per-mac limit 2
!
bba-group pppoe NAME-2
 virtual-template 2
 sessions per-mac limit 2
!
interface GigabitEthernet1/3.100
 encapsulation dot1Q 100 second-dot1q 500-4094
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip flow ingress
 ip flow egress
 ip multicast boundary 30
 pppoe enable group NAME-1
 no cdp enable
!
interface GigabitEthernet1/3.200
 encapsulation dot1Q 200 second-dot1q 200-300
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip flow ingress
 ip flow egress
 ip multicast boundary 30
 pppoe enable group NAME-2
 no cdp enable



Configure the matching virtual templates too; a minimal sketch follows below.

===============================================
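For completeness, here is a minimal sketch of one matching virtual template.
The loopback, address pool and DNS server are placeholders; in a RADIUS-driven
setup the address and DNS would normally come back in the Access-Accept
instead of a local pool:

interface Virtual-Template1
 ip unnumbered Loopback0
 peer default ip address pool PPPOE-POOL-1
 ppp authentication pap chap
 ppp ipcp dns 203.0.113.53
!
ip local pool PPPOE-POOL-1 100.64.0.2 100.64.0.254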

*Option 2.*



You can deploy a DHCP server and use DHCP option 82 to handle all IP (IPoE)
sessions.

DHCP option 82 gives you additional flexibility that scales as your customer
base grows. You can perform authentication using a combination of Circuit-ID,
Remote-ID, CPE MAC address, etc.
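As a rough illustration only, the relay side on a Cisco BNG could look like
the sketch below. The DHCP server address, customer subnet and S-TAG/C-TAG
pair are placeholders, and in many deployments the access node inserts
option 82 before the frame ever reaches you:

ip dhcp relay information option
!
interface GigabitEthernet1/3.100500
 encapsulation dot1Q 100 second-dot1q 500
 ip address 100.64.0.1 255.255.255.192
 ip helper-address 10.0.0.10
 no ip proxy-arp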



I hope this information helps.



Cheers,

Ahad


On Sat, Jul 14, 2018 at 10:13 PM, Baldur Norddahl <baldur.norddahl () gmail com>
wrote:

Hello

I am investigating Linux as a BNG. The BNG (Broadband Network Gateway)
being the thing that acts as default gateway for our customers.

The setup is one VLAN per customer. Because 4095 VLANs are not enough, we
use QinQ with double VLAN tagging towards the customers. The customers can
use DHCP or static configuration. DHCP packets need to be option 82 tagged
and forwarded to a DHCP server. Every customer has one or more static IP
addresses.

IPv4 subnets need to be shared among multiple customers to conserve
address space. We are currently using /26 IPv4 subnets with 60 customers
sharing the same default gateway and netmask. In Linux terms this means 60
VLAN interfaces per bridge interface.

However, Linux is not quite ready for the task. The primary problem is that
the system does not scale to thousands of VLAN interfaces.

We do not want customers to be able to send non-routed packets directly to
each other (this needs proxy ARP). Also, customers should not be able to
steal another customer's IP address. We want to hard-code the relation
between IP address and VLAN tagging. This can be implemented using ebtables,
but we are unsure that it could scale to thousands of customers.
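For a single customer the rules might look something like this (interface
name and address are just examples):

# allow only the assigned address coming in from this customer's QinQ sub-interface
ebtables -A FORWARD -i eth0.100.501 -p IPv4 --ip-src 192.0.2.66 -j ACCEPT
ebtables -A FORWARD -i eth0.100.501 -p IPv4 -j DROP
# pin ARP to the same address to prevent spoofing
ebtables -A FORWARD -i eth0.100.501 -p ARP --arp-ip-src 192.0.2.66 -j ACCEPT
ebtables -A FORWARD -i eth0.100.501 -p ARP -j DROP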

I am considering writing a small program or kernel module. This would create
two TAP devices (tap0 and tap1). Traffic received on tap0 with VLAN tagging
will be stripped of the VLAN tags and delivered on tap1. Traffic received on
tap1 without VLAN tagging will be tagged according to a lookup table using
the destination IP address and then delivered on tap0. ARP and DHCP would
need some special handling.

This would be completely stateless for the IPv4 implementation. The IPv6
implementation would be harder, because link-local addressing needs to be
supported and that cannot be stateless. The customer CPE will make up its own
link-local address based on its MAC address, and we do not know what that is
in advance.

The goal is to support a minimum of 10 Gbit/s of traffic per server. Ideally
I would have a server with 4x 10 Gbit/s interfaces combined into two 20
Gbit/s channels using bonding (LACP), one channel each for upstream and
downstream (customer facing). The upstream would be layer 3 untagged and
routed traffic to our transit routers.

I am looking for comments, ideas or alternatives. Right now I am considering
what kind of CPU would be best for this. Unless I take steps to mitigate it,
the workload would probably go to one CPU core only and be limited by things
like CPU cache and PCI bus bandwidth.

Regards,

Baldur




-- 
Regards,

Ahad
Swiftel Networks
"*Where the best is good enough*"

