Neutron/BGP MPLS VPN


Scope:

This blueprint implements partial support for RFC 4364 (BGP/MPLS IP Virtual Private Networks) in order to interconnect an existing network with an OpenStack cloud, or to interconnect multiple OpenStack clouds.

Use Cases:

Use case 1: Connect a Quantum virtual network with an existing VPN site via a virtual router

In this case, we connect virtual networks in Quantum to the existing VPN site via a virtual router. Routes are exchanged dynamically using BGP.

Usecase 1.png

Use case 2: Interconnect multiple OpenStack regions

The diagram below shows three regions interconnected via some VPN method (e.g., IPsec). All regions are managed by a single operator. 10.0.0.0/24 is in Region 1, 20.0.0.0/24 in Region 2, and 30.0.0.0/24 in Region 3. Using iBGP, we can connect virtual networks on any cluster in an efficient manner.

Usecase 2.png

The diagram below shows the packet encapsulation for this use case. The regions are connected with encrypted IPsec tunnels, and each packet is labeled with MPLS.

Ip cap usecase2.png

Data Model Changes:

BGPMPLSVPN (new resource)

Based on the service insertion model.

Admin-only parameters

  1. route_distinguisher = type:administrator:assigned
  • type=0: administrator = 16-bit AS number, assigned = 32-bit value
  • type=1: administrator = IPv4 address, assigned = 16-bit value
  • type=2: administrator = 4-octet AS number, assigned = 16-bit value

Note: if not specified, a default value is automatically assigned in the type=0 format.
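
A minimal sketch of validating the three formats above (illustrative only; the helper name is made up and not part of this blueprint):

  import socket

  def validate_rd(rd_type, administrator, assigned):
      # Field widths per RFC 4364, Section 4.2 and the list above.
      if rd_type == 0:
          # 16-bit AS number : 32-bit assigned value
          return 0 <= int(administrator) < 2 ** 16 and 0 <= assigned < 2 ** 32
      if rd_type == 1:
          # IPv4 address : 16-bit assigned value
          socket.inet_aton(administrator)  # raises on a malformed address
          return 0 <= assigned < 2 ** 16
      if rd_type == 2:
          # 4-octet AS number : 16-bit assigned value
          return 0 <= int(administrator) < 2 ** 32 and 0 <= assigned < 2 ** 16
      return False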

Tenant parameter

  1. router_id: ID of the router which will be connected with the VPN.
  2. subnet_id: ID of the subnet which will connect the virtual router and the PE router.
  3. local_prefixes = [10.0.0.0/24]: list of prefixes which will be exposed.
  4. remote_prefixes = [20.0.0.0/24]: list of imported prefixes (if this value is null, the l3-agent sets up all received routes).
  5. connect_from = [VPNG_ID1, VPNG_ID2, VPNG_ID3] (list of VPN groups)
  6. connect_to = [VPNG_ID1, VPNG_ID2] (list of VPN groups)

Note: if connect_from/connect_to are not specified on create, they default to all of the tenant's VPN groups.
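
To make the parameters concrete, a hypothetical request body for creating this resource (the IDs and prefix values below are illustrative assumptions, not taken from this blueprint):

  body = {
      'bgpmplsvpn': {
          'name': 'vpn1',
          'router_id': 'ROUTER_UUID',    # router connected with the VPN
          'subnet_id': 'SUBNET_UUID',    # subnet between virtual router and PE router
          'local_prefixes': ['10.0.0.0/24'],
          'remote_prefixes': ['20.0.0.0/24'],
          'connect_from': ['VPNG_ID1', 'VPNG_ID2'],
          'connect_to': ['VPNG_ID1'],
          # 'route_distinguisher': '6800:1',  # admin only; auto-assigned if omitted
      }
  }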

BGPMPLSVPNGroup (new resource)

Represents the list of route targets assigned to each tenant.

  1. id:
  2. name:
  3. tenant_id:
  4. route_target (*): 16-bit ASN : 32-bit integer, or IPv4 address : 16-bit integer (admin only)

(*) Only an admin user can specify the route target; otherwise it is assigned automatically.
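
A hypothetical representation of one BGPMPLSVPNGroup resource (values are illustrative; the name and route target mirror the CLI example later on this page):

  vpn_group = {
      'bgpmplsvpngroup': {
          'id': 'VPNG_ID1',
          'name': 'test',
          'tenant_id': 'TENANT_UUID',
          'route_target': '6800:1',  # admin only; auto-assigned otherwise
      }
  }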

Model of BGPMPLSVPNGroup

A BGPMPLSVPNGroup corresponds to a route target internally (see RFC 4364, Section 4.3.5, "Building VPNs Using Route Targets"). In the diagram below, VPN Site 1 and Site 2 can talk to each other.

Rt model1.png

A hub-and-spoke model, shown in the diagram below, is also supported. In this case the hub site can reach Spoke Site 1 and Spoke Site 2, but the two spoke sites cannot reach each other.

Rt model2.png
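
To connect the two models with the tenant parameters above, here is a sketch assuming connect_from maps to imported route targets and connect_to to exported ones (the group IDs MESH, HUB, and SPOKE are hypothetical):

  # Full mesh (first diagram): every site imports and exports the same
  # group, so Site 1 and Site 2 can talk to each other.
  site = {'connect_from': ['MESH'], 'connect_to': ['MESH']}

  # Hub-and-spoke (second diagram): spokes export SPOKE and import HUB,
  # so they reach the hub but not each other; the hub does the opposite.
  spoke = {'connect_from': ['HUB'], 'connect_to': ['SPOKE']}
  hub = {'connect_from': ['SPOKE'], 'connect_to': ['HUB']}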

BGPMPLSVPNGroups will be managed by the network provider. One BGPMPLSVPNGroup server can be shared across different data centers. In the diagram below, Region 2 and Region 3 use the BGPMPLSVPNGroup DB in Region 1, so BGPMPLSVPNGroups are assigned in a consistent manner across the regions.

Rt model3.png

Note that BGPMPLSVPNGroup assignment uses a driver architecture. In some cases, a network operator is already managing a mapping between BGPMPLSVPNGroups (route targets) and tenants; such an operator can implement a driver that provides its own BGPMPLSVPNGroup (RT) to tenant mapping.
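
A sketch of what such a driver interface might look like (the class and method names below are assumptions for illustration, not defined by this blueprint):

  import abc

  class BGPMPLSVPNGroupDriver(object):
      """Pluggable assignment of route targets to tenant VPN groups."""
      __metaclass__ = abc.ABCMeta

      @abc.abstractmethod
      def allocate_route_target(self, tenant_id, group_id):
          """Return the route target assigned to this group."""

      @abc.abstractmethod
      def release_route_target(self, tenant_id, group_id):
          """Release the route target when the group is deleted."""

  class DbDriver(BGPMPLSVPNGroupDriver):
      """Example: assign route targets from route_target_range in the local DB."""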

Implementation Overview:

The diagram below shows an implementation overview. The Quantum server and BGP peers are connected via a BGP speaker process. Quantum calls a BGPMplsVPNDriver to realize the BGP/MPLS service; different driver providers can implement this in different ways.

GeneralBGPMPLSImplementation.png
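
A corresponding sketch of the BGPMplsVPNDriver interface mentioned above (the method names are assumptions; a concrete provider would program its own BGP speaker here):

  class BGPMplsVPNDriver(object):
      """Base class for provider-specific BGP/MPLS VPN implementations."""

      def create_bgpmplsvpn(self, context, vpn):
          # e.g. advertise vpn['local_prefixes'] with the VPN's route
          # distinguisher and route targets via the configured BGP speaker
          raise NotImplementedError()

      def update_bgpmplsvpn(self, context, vpn):
          raise NotImplementedError()

      def delete_bgpmplsvpn(self, context, vpn):
          # withdraw the routes advertised for this VPN
          raise NotImplementedError()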

Configuration variables:

Quantum server configuration

  1. my_as: local autonomous system number
  2. bgp_speaker: IP and port of the BGP speaker which will be interconnected
  3. route_target_range: route target range for RT assignment
  4. route_distinguisher_range: RD range for RD assignment
  5. bgpvpngroup_driver: driver for managing route targets
  6. bgpspeaker_driver: driver used to configure the BGP speaker

bgpvpngroup_driver accepts the values db or quantum:

  • bgpvpngroup_driver=db: use the local DB
  • bgpvpngroup_driver=quantum: use another Quantum server
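
A sketch of how these options might be registered with oslo.config (the option group and default values are illustrative assumptions):

  from oslo.config import cfg

  bgp_opts = [
      cfg.StrOpt('my_as', help='Local autonomous system number'),
      cfg.StrOpt('bgp_speaker', default='127.0.0.1:179',
                 help='IP and port of the BGP speaker to interconnect'),
      cfg.StrOpt('route_target_range', default='6800:1000:6800:1999',
                 help='Route target range for RT assignment'),
      cfg.StrOpt('route_distinguisher_range', default='6800:1000:6800:1999',
                 help='RD range for RD assignment'),
      cfg.StrOpt('bgpvpngroup_driver', default='db',
                 help='Driver for managing route targets (db or quantum)'),
      cfg.StrOpt('bgpspeaker_driver',
                 help='Driver used to configure the BGP speaker'),
  ]
  cfg.CONF.register_opts(bgp_opts, group='bgpmplsvpn')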

BGP speaker configuration

  1. my_as
  2. bgp_identifier
  3. neighbors

APIs:

CRUD REST APIs for each new resource.

Plugin Interface: Implements the data model above

Required Plugin support: yes

Dependencies: BGPSpeaker

CLI Requirements:

Create bgpmplsvpngroup

  quantum bgpmplsvpngroup-create --name test --route-target 6800:1

List bgpmplsvpngroup

  quantum bgpmplsvpngroup-list -c id -c name -c route_target

Set bgpmplsvpngroup id to env variable

  GROUP=`quantum bgpmplsvpngroup-list -c id -c name -c route_target  | awk '/test/{print $2}'`

Delete bgpmplsvpngroup

  quantum bgpmplsvpngroup-delete <id>

Create virtual router

  quantum router-create router1

Create network which will connect virtual router and PE router

  quantum net-create net-connect

Create subnet in the above network

  quantum subnet-create net-connect 50.0.0.0/24 --name=subnet-connect

Create BGP MPLS VPN

  quantum bgpmplsvpn-create --name=vpn1 --route_distinguisher=6800:1 --connected_subnet=subnet-connect --local_prefixes list=true 10.0.0.0/24 20.0.0.0/24 --remote_prefixes list=true 30.0.0.0/24 40.0.0.0/24 --connect_to list=true $GROUP --connect_from list=true $GROUP

List BGP MPLS VPN

  quantum bgpmplsvpn-list

Delete BGP MPLS VPN

  quantum bgpmplsvpn-delete vpn1

Insert the VPN service into the router

  quantum router-service-insert router1 vpn1

Horizon Requirements:

  1. Extended attribute configuration page
  2. Display of dynamic routes

Usage Example:

TBD

Test Cases:

Connect two different OpenStack clusters (devstack) with a BGP/MPLS VPN.