
Neutron/BGP MPLS VPN

Revision as of 17:11, 11 April 2013 by Nachi (talk | contribs) (CLI Requirements:)

Scope:

This BP implements partial RFC 4364 BGP/MPLS IP Virtual Private Networks (VPNs) support, for interconnecting an existing network with an OpenStack cloud, or for interconnecting OpenStack clouds.

Use Cases:

Use case 1: Connect a Quantum virtual network with an existing VPN site via a virtual router. In this case, we connect virtual networks in Quantum to the existing VPN site via a virtual router. Routes are exchanged dynamically using BGP.

Usecase 1.png

Use case 2: Interconnect multiple OpenStack regions. In the diagram below there are three regions interconnected via some VPN method (e.g., IPsec). All regions are managed by a single operator. 10.0.0.0/24 is in Region 1, and 20.0.0.0/24 and 30.0.0.0/24 are in the other regions likewise. Using iBGP, we can connect virtual networks in any cluster in an efficient manner.

Usecase 2.png

The diagram below shows the packet encapsulation for this use case. Each region is connected with an encrypted IPsec tunnel, and each packet is labeled with MPLS.

Ip cap usecase2.png

Data Model Changes:

Router (attribute extension)

Admin only parameter

  1. bgpmplsvpn:route_distinguisher = type:administrator:assigned
  • type=0: administrator = 16-bit AS number, assigned = 32-bit value
  • type=1: administrator = IPv4 address, assigned = 16-bit value
  • type=2: administrator = 4-octet AS number, assigned = 16-bit value

Note: the default value is automatically assigned in the type=0 format.
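As a concrete illustration of the three route distinguisher formats, a minimal validation helper could look like the following sketch; the function name and error handling are assumptions for illustration, not part of this BP:

```python
import ipaddress

def format_rd(rd_type: int, administrator, assigned: int) -> str:
    """Validate field widths and format an RD as "administrator:assigned"."""
    if rd_type == 0:
        # administrator = 16-bit AS number, assigned = 32-bit value
        if not (0 <= administrator < 2**16 and 0 <= assigned < 2**32):
            raise ValueError("type 0 RD: administrator is 16 bit, assigned is 32 bit")
    elif rd_type == 1:
        # administrator = IPv4 address, assigned = 16-bit value
        ipaddress.IPv4Address(administrator)  # raises on a malformed address
        if not 0 <= assigned < 2**16:
            raise ValueError("type 1 RD: assigned is 16 bit")
    elif rd_type == 2:
        # administrator = 4-octet AS number, assigned = 16-bit value
        if not (0 <= administrator < 2**32 and 0 <= assigned < 2**16):
            raise ValueError("type 2 RD: administrator is 32 bit, assigned is 16 bit")
    else:
        raise ValueError("unknown RD type")
    return f"{administrator}:{assigned}"
```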

Tenant parameter

  1. bgpmplsvpn:local_prefixes = [10.0.0.0/24] list of prefixes which will be exposed
  2. bgpmplsvpn:remote_prefixes = [20.0.0.0/24] list of imported prefixes (if this value is null, the l3-agent sets up all received routes)
  3. bgpmplsvpn:connect_from = [VPNG_ID1,VPNG_ID2,VPNG_ID3] (list of VPN groups)
  4. bgpmplsvpn:connect_to = [VPNG_ID1,VPNG_ID2] (list of VPN groups)

Note: by default, the route targets for connect_from/connect_to are set to all VPN groups in the list on create.
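The remote_prefixes filtering described above can be sketched as follows. Treating a route that falls inside a listed prefix as a match is an assumption; the BP only specifies that a null list means all received routes are set up:

```python
import ipaddress

def accept_route(prefix: str, remote_prefixes):
    """Decide whether the l3-agent should install a received route.

    A null (None) remote_prefixes means every received route is set up.
    Matching routes contained within a listed prefix is an assumption.
    """
    if remote_prefixes is None:
        return True
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(ipaddress.ip_network(p)) for p in remote_prefixes)
```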

Network (extended attribute)

  1. bgpmplsvpn:enabled=true|false
  2. pe_agent

A user can add an interface to a network for which bgpmplsvpn is enabled.

BGPVPNGroup (new resource)

Shows the list of route targets assigned to each tenant.

  1. id:
  2. name:
  3. tenant_id:
  4. route_target (*): 16-bit ASN : 32-bit integer, or IPv4 address : 16-bit integer (admin only)

(*) Only an admin user can specify the route target; otherwise it is assigned automatically.

BGP Routes (new resource) Admin Only

  1. prefix
  2. nexthop
  3. rt
  4. rd
  5. router_id
  6. paths (list of paths sorted by score)

Sub attribute of paths

  1. nexthop,
  2. as_path,
  3. multi_exit_disc,
  4. local_pref,
  5. origin,
  6. labels,
  7. client,
  8. local
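The BP does not define the scoring used to sort paths. A plausible sketch, using the standard BGP decision steps that map onto the attributes above (higher local_pref preferred, then shorter as_path, then lower multi_exit_disc), could look like this; the actual scoring in the BGP speaker may differ:

```python
def path_sort_key(path: dict):
    """Sort key so that the most preferred path comes first."""
    return (
        -path.get("local_pref", 100),    # higher local_pref preferred
        len(path.get("as_path", [])),    # shorter AS path preferred
        path.get("multi_exit_disc", 0),  # lower MED preferred
    )

# Illustrative paths (nexthops and AS numbers are examples only).
paths = [
    {"nexthop": "192.0.2.1", "as_path": [6800, 6801], "local_pref": 100},
    {"nexthop": "192.0.2.2", "as_path": [6800], "local_pref": 100},
    {"nexthop": "192.0.2.3", "as_path": [6800, 6801, 6802], "local_pref": 200},
]
best_first = sorted(paths, key=path_sort_key)
```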

BGP Peers (new resource) Admin only

  1. id
  2. neighbor_ip
  3. remote_as
  4. status = (ACTIVE|DOWN)
  5. admin_status=True|False
  6. pe_agent

Model of BGPVPNGroup

A BGPVPNGroup corresponds to a route target internally (see RFC 4364, Section 4.3.5, "Building VPNs Using Route Targets"). In this diagram, VPN Site 1 and Site 2 can talk to each other.

Rt model1.png

A hub-and-spoke model is also supported, as shown in the diagram. In this case the Hub Site can reach Spoke Site 1 and Spoke Site 2, but the two spoke sites cannot reach each other.

Rt model2.png
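Both diagrams follow the same RFC 4364 rule: a site receives another site's routes when it imports at least one route target that the other site exports. A minimal sketch of that rule (RT values and site names are illustrative):

```python
def can_reach(importer: dict, exporter: dict) -> bool:
    """True if the importer installs routes exported by the exporter."""
    return bool(set(importer["import_rts"]) & set(exporter["export_rts"]))

# Full mesh (first diagram): every site imports and exports the same RT.
site1 = {"import_rts": {"6800:1"}, "export_rts": {"6800:1"}}
site2 = {"import_rts": {"6800:1"}, "export_rts": {"6800:1"}}

# Hub-and-spoke (second diagram): the hub exports 6800:1 and imports 6800:2,
# spokes do the reverse, so spokes only see routes via the hub.
hub    = {"import_rts": {"6800:2"}, "export_rts": {"6800:1"}}
spoke1 = {"import_rts": {"6800:1"}, "export_rts": {"6800:2"}}
spoke2 = {"import_rts": {"6800:1"}, "export_rts": {"6800:2"}}
```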

BGPVPNGroups will be managed by the network provider. One BGPVPNGroup server can be shared across data centers. In the diagram below, Region 2 and Region 3 use the BGPVPNGroup DB in Region 1, so BGPVPNGroups are assigned in a consistent manner across regions.

Rt model3.png

Note that BGPVPNGroup assignment uses a driver architecture. In some deployments the network operator already manages a mapping between BGPVPNGroups (RTs) and tenants; for that case, the operator can implement a driver that provides its own BGPVPNGroup (RT) to tenant mapping.

Implementation Overview:

The diagram below shows an implementation overview. The Quantum server and BGP peers are connected via a BGP speaker process. Since there is no open source project that supports BGP/MPLS VPN yet, we will also open-source our Erlang implementation of a BGP speaker and PE router, which implements a subset of the RFC. This design is not limited to our implementation, however; the API itself can be connected to any MPLS/PE router implementation. The BGP speaker sends router configuration updates to BGP peers and receives routes from them. Our PE also sets up VLAN-to-MPLS translation, adding or removing MPLS labels on each outgoing packet. Note that we are not discussing transport labels here; transport will be either p2p (option B ASBR) or GRE encapsulation.

Impl1.png

The sequence below shows how the components talk to each other.

Impl2.png

Scheduling / HA

PE Agents are scheduled by the PE Manager. When an agent goes down, the PE Agent scheduler reschedules its BGP networks onto another agent. The PE scheduler itself has built-in HA (active-standby) support for managing the PE schedule. The PE Manager also dispatches messages from Quantum to the specific PE Agent.

Configuration variables:

quantum server configuration

  1. my_as
  2. bgp_speaker: IP and port of the bgp speaker which will be interconnected
  3. route_target_range: route target range for RT assignment
  4. route_distinguisher_range: RD range for RD assignment
  5. bgpvpngroup_driver: driver for managing route targets
  6. bgpspeaker_driver: driver to configure the bgp speaker

bgpvpngroup_driver can be set to db or quantum:

  • bgpvpngroup_driver=db using local db
  • bgpvpngroup_driver=quantum use another quantum server
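Putting the variables above together, a hypothetical quantum server configuration fragment might look like the following; the section name, value formats, and driver path are assumptions for illustration only:

```ini
[bgpmplsvpn]
# Local autonomous system number
my_as = 6800
# BGP speaker which will be interconnected (assumed "ip:port" format)
bgp_speaker = 192.0.2.10:179
# Ranges used for automatic RT/RD assignment (assumed "start,end" format)
route_target_range = 6800:1000,6800:1999
route_distinguisher_range = 6800:2000,6800:2999
# Route target management: "db" (local DB) or "quantum" (another quantum server)
bgpvpngroup_driver = db
# Driver used to configure the bgp speaker (hypothetical module path)
bgpspeaker_driver = quantum.plugins.bgpmplsvpn.drivers.SpeakerDriver
```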

bgpspeaker configuration

  1. my_as
  2. bgp_identifier
  3. neighbors

API's:

CRUD REST API for each new resource

Plugin Interface: Implement above data model

Required Plugin support: yes

Dependencies: BGPSpeaker

CLI Requirements:

Create bgpvpngroup

  quantum bgpvpngroup-create --name test --route-target 6800:1

List bgpvpngroup

 quantum bgpvpngroup-list -c id -c name -c route_target

Delete bgpvpngroup

  quantum bgpvpngroup-delete <id>

Create bgpmplsvpn-enabled network

  quantum net-create mpls -- --bgpmplsvpn:enabled=True

Create router with bgpmplsvpn option

  quantum router-create mpls --bgpmplsvpn:route_distinguisher=6800:1 --bgpmplsvpn:local_prefixes list=true 10.0.0.0/24,20.0.0.0/24 --bgpmplsvpn:remote_prefixes list=true 30.0.0.0/24,40.0.0.0/24 --bgpmplsvpn:connect_to list=true $GROUP --bgpmplsvpn:connect_from list=true $GROUP

Horizon Requirements:

  1. Extended attribute configuration page
  2. show dynamic routes

Usage Example:

TBD

Test Cases:

Connect two different OpenStack clusters (devstack) with BGP/MPLS VPN.