
Neutron/BGP MPLS VPN


Quantum BGP/MPLS IP Virtual Private Network (VPN) support on the Logical Router

Scope:

This blueprint implements partial RFC 4364 BGP/MPLS IP Virtual Private Networks (VPNs) support for interconnecting an existing network with an OpenStack cloud, or for interconnecting multiple OpenStack clouds.

Use Cases:

Use case 1: Connect a Quantum virtual network with an existing VPN site via a virtual router. In this case, we connect virtual networks in Quantum to the existing VPN site via a virtual router, and routes are exchanged dynamically using BGP.


Use case 2: Interconnect multiple OpenStack regions. In the diagram below there are three regions interconnected via some VPN method (e.g., IPsec). All regions are managed by a single operator. 10.0.0.0/24 is in Region 1, and 20.0.0.0/24 and 30.0.0.0/24 are in Regions 2 and 3 respectively. Using iBGP, we can connect virtual networks on any cluster in an efficient manner.

The diagram below shows the packet encapsulation for this use case. The regions are connected with encrypted IPsec tunnels, and each packet is labeled with MPLS.
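As a rough sketch, the header stack implied by this use case is shown below (the MPLS-labeled packet is carried inside the encrypted inter-region IPsec tunnel; the exact field order depends on the tunnel mode):

  | outer IP | ESP (IPsec) | MPLS VPN label | original IP packet |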

Data Model Changes:

Router (attribute extension)

Admin only parameter

  1. route_distinguisher = type:administrator:assigned
  • type=0: administrator = 16-bit AS number, assigned = 32-bit number
  • type=1: administrator = IPv4 address, assigned = 16-bit number
  • type=2: administrator = 4-octet AS number, assigned = 16-bit number

Note: the default value is automatically assigned using the type=0 format.
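As a minimal sketch (not part of the blueprint code), the three route_distinguisher formats pack into the 8-byte wire encoding defined by RFC 4364 as follows; the helper name is hypothetical:

  import socket
  import struct

  def encode_rd(rd):
      """Encode a 'type:administrator:assigned' string as an 8-byte RD."""
      rd_type, admin, assigned = rd.split(':')
      rd_type = int(rd_type)
      if rd_type == 0:   # administrator = 16-bit AS, assigned = 32-bit
          return struct.pack('!HHI', 0, int(admin), int(assigned))
      if rd_type == 1:   # administrator = IPv4 address, assigned = 16-bit
          return struct.pack('!H4sH', 1, socket.inet_aton(admin), int(assigned))
      if rd_type == 2:   # administrator = 4-octet AS, assigned = 16-bit
          return struct.pack('!HIH', 2, int(admin), int(assigned))
      raise ValueError('unknown RD type: %d' % rd_type)

  print(encode_rd('0:64512:100').hex())  # the default type=0 format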

Tenant parameter

  1. local_prefixes = [10.0.0.0/24]: list of prefixes which will be exposed
  2. remote_prefixes = [20.0.0.0/24]: list of imported prefixes (if this value is null, the l3-agent sets up all received routes)
  3. connect_from = [VPNG_ID1, VPNG_ID2, VPNG_ID3] (list of VPN groups)
  4. connect_to = [VPNG_ID1, VPNG_ID2] (list of VPN groups)

Note: on create, connect_from and connect_to default to all VPN groups in the list.
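For illustration only, updating a router with these tenant parameters might look like the request below; the URL and body layout are assumptions, since the blueprint does not fix the wire format:

  PUT /v2.0/routers/<router-id>
  {
    "router": {
      "local_prefixes": ["10.0.0.0/24"],
      "remote_prefixes": ["20.0.0.0/24"],
      "connect_from": ["VPNG_ID1", "VPNG_ID2", "VPNG_ID3"],
      "connect_to": ["VPNG_ID1", "VPNG_ID2"]
    }
  }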

Network (extended attribute)

  1. bgp_mpls_vpn=true|false
  2. pe_agent

A user can add a router interface on a network for which bgp_mpls_vpn is enabled.
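A hypothetical create request for such a network (body layout assumed from the attribute list above):

  POST /v2.0/networks
  {
    "network": {
      "name": "vpn-net",
      "bgp_mpls_vpn": true
    }
  }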

BGPVPNGroup (new resource)

Shows the list of route targets assigned to each tenant.

  1. id:
  2. name:
  3. tenant_id:
  4. route_targets (*): 16-bit ASN:32-bit integer, or IPv4 address:16-bit integer (admin only)

(*) Only an admin user can specify the route target; otherwise it is assigned automatically.
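A hypothetical create request (the URL and body keys are assumptions; route_targets may only be supplied by an admin, otherwise it is auto-assigned as noted above):

  POST /v2.0/bgpvpngroups
  {
    "bgpvpngroup": {
      "name": "vpn-group-1",
      "route_targets": ["64512:100"]
    }
  }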

BGP Routes (new resource) Admin Only

  1. prefix
  2. nexthop
  3. rt
  4. rd
  5. router_id
  6. paths (list of paths sorted by score)

Sub-attributes of paths

  1. nexthop
  2. as_path
  3. multi_exit_disc
  4. local_pref
  5. origin
  6. labels
  7. client
  8. local
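A hypothetical route entry with one path, using the attributes above (all values are illustrative only):

  {
    "prefix": "10.0.0.0/24",
    "nexthop": "192.0.2.1",
    "rt": "64512:100",
    "rd": "64512:100",
    "router_id": "<router-uuid>",
    "paths": [
      {
        "nexthop": "192.0.2.1",
        "as_path": [64512],
        "multi_exit_disc": 0,
        "local_pref": 100,
        "origin": "igp",
        "labels": [16001],
        "client": false,
        "local": true
      }
    ]
  }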

BGP Peers (new resource) Admin only

  1. id
  2. neighbor_ip
  3. remote_as
  4. status = (ACTIVE|DOWN)
  5. admin_status=True|False
  6. route_reflector
  7. pe_agent
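A hypothetical peer entry (values are illustrative; status is read-only and reported by the speaker):

  {
    "bgp_peer": {
      "id": "<peer-uuid>",
      "neighbor_ip": "198.51.100.1",
      "remote_as": 64513,
      "status": "ACTIVE",
      "admin_status": true,
      "route_reflector": false,
      "pe_agent": "<pe-agent-id>"
    }
  }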

Model of BGPVPNGroup

BGPVPNGroup corresponds to a route target internally (see RFC 4364, section 4.3.5, "Building VPNs Using Route Targets"). In this diagram, VPN Site 1 and Site 2 can talk to each other.


A hub-and-spoke model can also be supported, as shown in the diagram. In this case the Hub Site can reach Spoke Site 1 and Spoke Site 2, but Spoke Site 1 and Spoke Site 2 cannot reach each other.
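A minimal sketch of the route-target rule behind both models, assuming RFC 4364 section 4.3.5 semantics (a VRF imports a route only if the route carries one of the VRF's import RTs); the RT values are illustrative:

  # Hub exports its own RT and imports the spoke RT; spokes do the opposite.
  VRFS = {
      'hub':    {'export': {'rt:hub'},   'import': {'rt:spoke'}},
      'spoke1': {'export': {'rt:spoke'}, 'import': {'rt:hub'}},
      'spoke2': {'export': {'rt:spoke'}, 'import': {'rt:hub'}},
  }

  def can_reach(src, dst):
      # dst accepts src's routes only if their RT sets intersect
      return bool(VRFS[src]['export'] & VRFS[dst]['import'])

  print(can_reach('spoke1', 'hub'))     # True: spoke-to-hub traffic works
  print(can_reach('spoke1', 'spoke2'))  # False: spokes cannot reach each other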

BGPVPNGroups will be managed by the network provider. One BGPVPNGroup server will be shared across different data centers. In the diagram below, Region 2 and Region 3 use the BGPVPNGroup DB in Region 1, so BGPVPNGroups can be assigned in a consistent manner across several regions.


Note that BGPVPNGroup assignment has a driver architecture. In some cases the network operator already manages the mapping between BGPVPNGroups (RTs) and tenants; for those cases, the operator can implement a driver which provides their own BGPVPNGroup (RT) and tenant mappings.

Implementation Overview:

The diagram below shows an implementation overview. The Quantum server and the BGP peer are connected via a BGP speaker process. Since there is no open source project which supports BGP/MPLS VPNs yet, we will also open our Erlang implementation of a BGP speaker and PE router, which implements part of the RFC. This design is not limited to our implementation, however, and the API itself can be connected to any MPLS/PE router implementation. The BGP speaker sends router configuration updates to the BGP peer and receives routes from the peer. Our PE also sets up the VLAN-to-MPLS transformation, adding or removing MPLS labels on each outgoing packet. Note that we are not talking about transport labels here; the transport will be either p2p (option B ASBR) or GRE encapsulation.
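The label handling on the PE can be sketched as follows; this is illustrative pseudocode, not the actual agent, and the flat VLAN-to-label table is an assumption:

  # Tenant VLAN id -> MPLS VPN label (example values only).
  VLAN_TO_LABEL = {100: 16001, 101: 16002}
  LABEL_TO_VLAN = {label: vlan for vlan, label in VLAN_TO_LABEL.items()}

  def egress(vlan_id, packet):
      # Push the VPN label; no transport label is added here because the
      # transport is p2p (option B ASBR) or GRE encapsulation.
      return {'mpls_label': VLAN_TO_LABEL[vlan_id], 'payload': packet}

  def ingress(labeled_packet):
      # Pop the VPN label and deliver the packet on the matching tenant VLAN.
      return LABEL_TO_VLAN[labeled_packet['mpls_label']], labeled_packet['payload']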


The sequence below shows how the components talk to each other.


Scheduling / HA

The PE agent is scheduled by the PE manager. When an agent goes down, the PE agent scheduler reschedules its BGP networks onto another agent. The PE scheduler itself has built-in HA (active-standby) support for managing the PE schedule. The PE manager also dispatches messages from Quantum to the specific PE agent.
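The rescheduling logic might look like the sketch below; the data layout and the least-loaded placement policy are assumptions for illustration:

  def reschedule(networks, agents):
      """Move BGP networks bound to down agents onto live agents."""
      alive = [a for a in agents if a['alive']]
      if not alive:
          return networks
      alive_ids = {a['id'] for a in alive}
      for net in networks:
          if net['agent'] not in alive_ids:
              target = min(alive, key=lambda a: a['load'])  # least loaded
              net['agent'] = target['id']
              target['load'] += 1
      return networks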

Configuration variables:

quantum server configuration

  1. my_as
  2. bgp_speaker ip and port: specifies the bgp speaker to interconnect with
  3. route_target_range: route target range for RT assignment
  4. route_distinguisher_range: RD range for RD assignment
  5. bgpvpngroup_driver: driver for managing route targets
  6. bgpspeaker_driver: driver to configure the bgp speaker

You can use db or quantum as the value of bgpvpngroup_driver:

  • bgpvpngroup_driver=db: use the local DB
  • bgpvpngroup_driver=quantum: use another quantum server
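A hypothetical quantum.conf fragment with these options (the section name and value formats are assumptions; only the option names come from the list above):

  [bgp]
  my_as = 64512
  bgp_speaker = 192.0.2.10:179
  route_target_range = 100:199
  route_distinguisher_range = 100:199
  bgpvpngroup_driver = db
  bgpspeaker_driver = <driver entry point>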

bgpspeaker configuration

  1. my_as
  2. bgp_identifier
  3. neighbors
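A matching hypothetical bgpspeaker configuration (value formats assumed):

  my_as = 64512
  bgp_identifier = 192.0.2.10
  neighbors = 198.51.100.1, 198.51.100.2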

APIs:

CRUD REST API for each new resource

Plugin Interface: Implement above data model

Required Plugin support: yes

Dependencies: BGPSpeaker

CLI Requirements:

show dynamic routes

Horizon Requirements:

  1. Extended attribute configuration page
  2. show dynamic routes

Usage Example:

TBD

Test Cases:

Connect two different OpenStack clusters (devstack) with a BGP/MPLS VPN