


This BP implements partial RFC 4364 BGP/MPLS IP Virtual Private Networks (VPNs) support for interconnecting an existing network with an OpenStack cloud, or for interconnecting multiple OpenStack clouds.

Use Cases:

Use case 1: Connect a Neutron virtual network with an existing VPN site via a virtual router. In this case, we connect virtual networks in Neutron to the existing VPN site via a virtual router. Routes are exchanged dynamically using BGP.

Usecase 1.png

Use case 2: Interconnect multiple OpenStack regions. In the diagram below, there are three regions interconnected via some VPN method (e.g., IPsec). All regions are managed by a single operator. Using iBGP, we can connect virtual networks on any cluster in an efficient manner.

Usecase 2.png

The diagram below shows packet encapsulation for this use case. Each region is connected with an encrypted IPsec tunnel, and each packet is labeled with MPLS.

Ip cap usecase2.png

Data Model Changes:

BGPMPLSVPN (new resource)

based on service insertion model

Admin only parameter

  1. route_distinguisher = type:administrator:assigned
  • type=0: administrator = 16-bit AS number, assigned = 32-bit value
  • type=1: administrator = IPv4 address, assigned = 16-bit value
  • type=2: administrator = 4-octet AS number, assigned = 16-bit value

Note: the default value is automatically assigned using the type=0 format.
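The three route distinguisher formats above can be validated mechanically. The sketch below is illustrative only (the helper name `format_rd` is an assumption, not part of this BP); it checks the field widths for each RFC 4364 type and returns the canonical string form.

```python
# Sketch (not part of the BP): validating the three RFC 4364 route
# distinguisher formats described above. Field widths per type:
#   type 0: administrator = 16-bit ASN,          assigned = 32-bit
#   type 1: administrator = IPv4 address,        assigned = 16-bit
#   type 2: administrator = 4-octet (32-bit) ASN, assigned = 16-bit
import ipaddress

def format_rd(rd_type, administrator, assigned):
    """Return the 'administrator:assigned' string after width checks."""
    if rd_type == 0:
        if not (0 <= administrator < 2**16 and 0 <= assigned < 2**32):
            raise ValueError("type 0: admin is 16-bit, assigned is 32-bit")
    elif rd_type == 1:
        ipaddress.IPv4Address(administrator)  # raises if not valid IPv4
        if not 0 <= assigned < 2**16:
            raise ValueError("type 1: assigned is 16-bit")
    elif rd_type == 2:
        if not (0 <= administrator < 2**32 and 0 <= assigned < 2**16):
            raise ValueError("type 2: admin is 32-bit, assigned is 16-bit")
    else:
        raise ValueError("rd_type must be 0, 1 or 2")
    return "%s:%s" % (administrator, assigned)

# The BP's default (type=0) format:
print(format_rd(0, 6800, 1))  # 6800:1
```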

Tenant parameter

  1. subnet_id: ID of the subnet which will connect the virtual router and the PE router.
  2. routes: routes of the PE. If router_id is specified, the VPN service will also configure the routes in the virtual router automatically.

The format of routes is as follows.

   [ {"destination": ["cidr", ...],
      "nexthop": "IP" (optional),
      "router_id": "XXX" (optional)}, ... ]

  3. remote_prefixes = []: list of imported prefixes (if this value is null, the l3-agent sets up all received routes)
  4. connect_from = [VPNG_ID1, VPNG_ID2, VPNG_ID3] (list of VPN groups)
  5. connect_to = [VPNG_ID1, VPNG_ID2] (list of VPN groups)

Note: the default value of connect_from/connect_to is all VPN groups in the list at create time.
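To make the parameters above concrete, here is a hypothetical create request body combining them. All field values (IDs, prefixes, next hop) are illustrative placeholders, and the exact wire format is an assumption, not defined by this BP.

```python
import json

# Hypothetical BGPMPLSVPN create request body, combining the admin and
# tenant parameters listed above. Values are illustrative only.
bgpmplsvpn = {
    "bgpmplsvpn": {
        "name": "vpn1",
        "route_distinguisher": "6800:1",   # admin only; auto-assigned otherwise
        "subnet_id": "SUBNET_ID",          # subnet between virtual router and PE
        "routes": [
            {
                "destination": ["10.1.0.0/24", "10.2.0.0/24"],
                "nexthop": "192.168.0.1",  # optional
                "router_id": "ROUTER_ID",  # optional: also program the router
            }
        ],
        "remote_prefixes": [],             # empty: import all received routes
        "connect_from": ["VPNG_ID1"],
        "connect_to": ["VPNG_ID1"],
    }
}
print(json.dumps(bgpmplsvpn, indent=2))
```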

BGPMPLSVPNGroup (new resource)

Shows the list of route targets assigned to each tenant.

  1. id:
  2. name:
  3. tenant_id:
  4. route_target (*): asn(16-bit):32-bit integer, or ipv4:16-bit integer (admin only)

(*) Only an admin user can specify the route target; otherwise it is assigned automatically.

Model of BGPMPLSVPNGroup

BGPMPLSVPNGroup corresponds to a route target internally (see RFC 4364, section 4.3.5, "Building VPNs Using Route Targets"). In this diagram, VPN Site1 and Site2 can talk to each other.

Rt model1.png

A hub-and-spoke model, shown in the diagram below, is also supported. In this case the Hub Site can reach Spoke Site1 and Spoke Site2, but Spoke Site1 and Spoke Site2 cannot reach each other.

Rt model2.png
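The two diagrams above follow directly from RFC 4364's import/export rule: a site can reach another site's routes when it imports a route target that the other site exports. This sketch demonstrates both topologies; the RT values are illustrative assumptions.

```python
# Sketch: reachability from import/export route targets (RFC 4364, 4.3.5).
# Site A can reach site B's routes when A imports an RT that B exports.
def can_reach(importer, exporter):
    return bool(set(importer["import"]) & set(exporter["export"]))

# Any-to-any (first diagram): both sites import and export the same RT.
site1 = {"import": ["6800:100"], "export": ["6800:100"]}
site2 = {"import": ["6800:100"], "export": ["6800:100"]}
assert can_reach(site1, site2) and can_reach(site2, site1)

# Hub-and-spoke (second diagram): spokes export a "from-spoke" RT and
# import a "from-hub" RT; the hub does the opposite, so spokes can reach
# the hub but not each other.
hub    = {"import": ["6800:1"], "export": ["6800:2"]}
spoke1 = {"import": ["6800:2"], "export": ["6800:1"]}
spoke2 = {"import": ["6800:2"], "export": ["6800:1"]}
assert can_reach(hub, spoke1) and can_reach(spoke1, hub)
assert not can_reach(spoke1, spoke2)
```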

BGPMPLSVPNGroup will be managed by the network provider. One BGPMPLSVPNGroup server is shared across data centers. In the diagram below, Region2 and Region3 use the BGPMPLSVPNGroup DB in Region1, so they can assign BGPMPLSVPNGroups in a consistent manner across regions.

Rt model3.png

Note that BGPMPLSVPNGroup assignment has a driver architecture. In some cases, the network operator already manages a mapping between BGPMPLSVPNGroups (RTs) and tenants. In that case, the operator can implement a driver which provides their own BGPMPLSVPNGroup (RT) to tenant mapping.

Implementation Overview:

The diagram below shows an implementation overview. The Neutron server and the BGP peer are connected via a BGP speaker process. Neutron calls a BGPMplsVPNDriver to realize the BGP/MPLS service. Different driver providers will implement this in different ways.
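The BP names a BGPMplsVPNDriver but does not define its interface; the sketch below is a hypothetical plug-in point (method names and signatures are assumptions) showing how alternative providers could slot in.

```python
import abc

# Hypothetical interface for BGPMplsVPNDriver. The BP names the driver
# but does not define its methods; these are illustrative assumptions.
class BGPMplsVPNDriver(abc.ABC):
    @abc.abstractmethod
    def create_bgpmplsvpn(self, context, vpn):
        """Advertise the VPN's routes (with RD/RT) via the BGP speaker."""

    @abc.abstractmethod
    def delete_bgpmplsvpn(self, context, vpn):
        """Withdraw the VPN's routes from the BGP speaker."""

class LoggingDriver(BGPMplsVPNDriver):
    """Trivial driver used only to show the plug-in point."""
    def create_bgpmplsvpn(self, context, vpn):
        return "create %s" % vpn["name"]

    def delete_bgpmplsvpn(self, context, vpn):
        return "delete %s" % vpn["name"]

driver = LoggingDriver()
print(driver.create_bgpmplsvpn(None, {"name": "vpn1"}))  # create vpn1
```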


Configuration variables:

neutron server configuration

  1. my_as
  2. bgp_speaker: IP and port of the BGP speaker to interconnect with
  3. route_target_range: route target range for RT assignment
  4. route_distinguisher_range: RD range for RD assignment
  5. bgpvpngroup_driver: driver for managing route targets
  6. bgpspeaker_driver: driver to configure the BGP speaker

You can use "db" or "neutron" as the value of bgpvpngroup_driver:

  • bgpvpngroup_driver=db: use the local DB
  • bgpvpngroup_driver=neutron: use another neutron server
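A hypothetical neutron server configuration fragment covering the options above. The section name, option formats, and all values are illustrative assumptions, not final syntax.

```ini
# Hypothetical neutron.conf fragment; section name, range syntax and
# values are illustrative only.
[bgpmplsvpn]
my_as = 64512
bgp_speaker = 192.0.2.10:179
route_target_range = 64512:1000-64512:1999
route_distinguisher_range = 64512:2000-64512:2999
bgpvpngroup_driver = db
```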

bgpspeaker configuration

  1. my_as
  2. bgp_identifier
  3. neighbors
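A matching hypothetical fragment for the BGP speaker process; again, the section name, option syntax, and values are illustrative assumptions.

```ini
# Hypothetical BGP speaker configuration fragment (values illustrative).
[bgpspeaker]
my_as = 64512
bgp_identifier = 192.0.2.10
neighbors = 198.51.100.1,198.51.100.2
```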


CRUD REST API for each new resource

Plugin Interface: Implement the data model above

Required Plugin support: yes

Dependencies: BGPSpeaker

CLI Requirements:

Create bgpmplsvpngroup

  neutron bgpmplsvpngroup-create --name test --route-target 6800:1

List bgpmplsvpngroup

 neutron bgpmplsvpngroup-list -c id -c name -c route_target

Set bgpmplsvpngroup id to env variable

  GROUP=`neutron bgpmplsvpngroup-list -c id -c name -c route_target  | awk '/test/{print $2}'`

Delete bgpmplsvpngroup

  neutron bgpmplsvpngroup-delete <id>

Create virtual router

  neutron router-create router1

Create network which will connect virtual router and PE router

  neutron net-create net-connect

Create subnet in the above network

  neutron subnet-create net-connect 192.0.2.0/24 --name=subnet-connect


Create bgpmplsvpn

  neutron bgpmplsvpn-create --name=vpn1 --route_distinguisher=6800:1 \
    --connected_subnet=subnet-connect \
    --routes '[{"destination": ["", ""], "router_id": "router1"}]' \
    --remote_prefixes list=true \
    --connect_to list=true $GROUP --connect_from list=true $GROUP


List bgpmplsvpn

  neutron bgpmplsvpn-list


Delete bgpmplsvpn

  neutron bgpmplsvpn-delete vpn1

Insert VPN service to the router

  neutron router-service-insert router1 vpn1

Horizon Requirements:

  1. Extended attribute configuration page
  2. Show dynamic routes

Usage Example:


Test Cases:

Connect two different OpenStack clusters (devstack) with a BGP/MPLS VPN.