Neutron/VPNaaS/HowToInstall


Installation

In order to use Neutron-VPNaaS with a single-node DevStack (http://devstack.org) setup, you'll need the following settings in your local.conf (note: the neutron-vpnaas devstack plugin is now enabled via enable_plugin).

[[local|localrc]]

enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
# Optional, to enable tempest configuration as part of devstack
enable_service tempest

# IPSec driver to use. Optional, defaults to OpenSwan.
IPSEC_PACKAGE="openswan"

Quick Test Script

http://paste.openstack.org/raw/44702/

This quick test script creates two sites, each with a router, a network, and a subnet connected to the public network, and then connects the two sites via VPN.
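
For orientation, here is a rough, illustrative outline of the kind of commands such a script runs. This is not the contents of the paste above; resource names are hypothetical, and the same steps appear in full later on this page.

    # Build a second site (net, subnet, router) next to the default one
    neutron net-create siteB
    neutron subnet-create --name subB siteB 10.2.0.0/24
    neutron router-create routerB
    neutron router-interface-add routerB subB
    neutron router-gateway-set routerB public
    
    # Create IKE/IPSec policies, one VPN service per router, and the two
    # ipsec-site-connections that tie the sites together (shown in detail below)
    neutron vpn-ikepolicy-create ikepolicy
    neutron vpn-ipsecpolicy-create ipsecpolicy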

Using Two DevStack Nodes for Testing

You can use two DevStack nodes connected by a common "public" network to test VPNaaS. The second node can be set up with the same public network as the first node, except it will use a different gateway IP (and hence router IP). In this example, we'll assume we have two DevStack nodes (East and West), each running on hardware (you can do the same thing with multiple VM guests, if desired). (Note: you can also create a similar topology using two virtual routers on a single DevStack node; see the section below.)

Example Topology

A dedicated physical port can be used for the "public" network connection (e.g. eth2) interconnected by a physical switch. You'll need to add the port to the OVS bridge on each DevStack node (e.g. sudo ovs-vsctl add-port br-ex eth2).
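
For example, on each node (assuming eth2 is the dedicated port, as above):

    # Attach the dedicated public-network port to the external OVS bridge
    sudo ovs-vsctl add-port br-ex eth2
    # Confirm that eth2 now appears under br-ex
    sudo ovs-vsctl show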

      (10.1.0.0/24 - DevStack East)
              |
              |  10.1.0.1
     [Neutron Router]
              |  172.24.4.226
              |
              |  172.24.4.225
     [Internet GW]
              |  
              |
     [Internet GW]
              | 172.24.4.232
              |
              | 172.24.4.233
     [Neutron Router]
              |  10.2.0.1
              |
     (10.2.0.0/24 DevStack West)

DevStack Configuration

For East you can append these lines to the localrc, which will give you a private network of 10.1.0.0/24 and a public network of 172.24.4.0/24:

PUBLIC_SUBNET_NAME=yoursubnet
PRIVATE_SUBNET_NAME=mysubnet
FIXED_RANGE=10.1.0.0/24
NETWORK_GATEWAY=10.1.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.225
Q_FLOATING_ALLOCATION_POOL=start=172.24.4.226,end=172.24.4.231

For West you can add these lines to localrc to use a different local network, public GW (and implicitly router) IP:

PUBLIC_SUBNET_NAME=yoursubnet
PRIVATE_SUBNET_NAME=mysubnet
FIXED_RANGE=10.2.0.0/24
NETWORK_GATEWAY=10.2.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.232
Q_FLOATING_ALLOCATION_POOL=start=172.24.4.233,end=172.24.4.238

VPNaaS Configuration

With DevStack running on East and West and connectivity confirmed (make sure you can ping one router/GW from the other), you can perform these VPNaaS CLI commands.
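
For instance, from the East host you might confirm reachability of West's public addresses (the West GW and router IPs from the topology above):

    # Run from the East host; 172.24.4.232/233 are West's GW and router IPs
    ping -c 3 172.24.4.232
    ping -c 3 172.24.4.233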

On East

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet
neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.233 --peer-id 172.24.4.233 --peer-cidr 10.2.0.0/24 --psk secret

On West

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet
neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.226 --peer-id 172.24.4.226 --peer-cidr 10.1.0.0/24 --psk secret

Note: Please make sure the security groups are set up appropriately (e.g. allow ICMP for the VPN subnets); see the sketch below.
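
A minimal sketch, assuming the project's default security group is used (names are illustrative; adjust as needed):

    # Allow ICMP (ping) and SSH into instances in the default security group
    neutron security-group-rule-create --direction ingress --protocol icmp default
    neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 default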

Verification

You can spin up VMs on each node, and then from the VM ping the far end router's public IP. With tcpdump running on one of the nodes, you can see that pings appear as encrypted packets (ESP). Note that BOOTP, IGMP, and the keepalive packets between the two nodes are not encrypted (nor are pings between the two external IP addresses).
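
For example, on either node (assuming eth2 is the dedicated public interface as in the topology above):

    # Only ESP (encrypted) packets should show up for traffic between the private nets
    sudo tcpdump -n -i eth2 esp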

Kilo Update

For Kilo, the localrc contents were moved into local.conf. With (VirtualBox) VMs used as hosts, where eth0 was set up as NAT and eth1 as an Internal Network, the following settings were used in local.conf:

   OVS_PHYSICAL_BRIDGE=br-ex
   PUBLIC_INTERFACE=eth1

Once stacked, VMs were created for testing, VPN IPSec commands were used to establish connections between the nodes, and security group rules were added to allow ICMP and SSH.

VPNaaS with Single DevStack and Two Routers

Simple instructions on how to set up a test environment where a VPNaaS IPSec connection can be established using the reference implementation (OpenSwan). This example uses VirtualBox running on a laptop to provide a VM for running DevStack. It assumes a Kilo release (post Juno).

The idea here is to have a single OpenStack cloud created using DevStack, two routers (one created automatically), two private networks (one created automatically) of 10.1.0.0/24 and 10.2.0.0/24, a VM in each private network, and a VPN connection established between the two private nets over the public network (172.24.4.0/24).

Preparation

Create a VM (e.g. 7 GB RAM, 2 CPUs) running Ubuntu 14.04, with a NAT interface for access to the Internet. Clone the latest DevStack (Kilo-1 was used for this example).
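
For example (the repository location shown is an assumption; use whichever DevStack mirror you normally clone from):

    # Fetch DevStack into the user's home directory
    git clone https://git.openstack.org/openstack-dev/devstack ~/devstack
    cd ~/devstack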

DevStack Configuration

For this example, the following local.conf is used:

    [[local|localrc]]
   GIT_BASE=https://github.com
   DEST=/opt/stack
   
   disable_service n-net
   enable_service q-svc
   enable_service q-agt
   enable_service q-dhcp
   enable_service q-l3
   enable_service q-meta
   enable_service neutron
   enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas
   
   FIXED_RANGE=10.1.0.0/24
   FIXED_NETWORK_SIZE=256
   NETWORK_GATEWAY=10.1.0.1
   PRIVATE_SUBNET_NAME=privateA
   
   PUBLIC_SUBNET_NAME=public-subnet
   FLOATING_RANGE=172.24.4.0/24
   PUBLIC_NETWORK_GATEWAY=172.24.4.10
   Q_FLOATING_ALLOCATION_POOL="start=172.24.4.11,end=172.24.4.29"
   
   LIBVIRT_TYPE=qemu
   
   IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"
   
   SCREEN_LOGDIR=/opt/stack/screen-logs
   SYSLOG=True
   LOGFILE=~/devstack/stack.sh.log
   
   ADMIN_PASSWORD=password
   MYSQL_PASSWORD=password
   RABBIT_PASSWORD=password
   SERVICE_PASSWORD=password
   SERVICE_TOKEN=tokentoken
   
   Q_USE_DEBUG_COMMAND=True
   
   # RECLONE=No
   RECLONE=yes
   OFFLINE=False

Start up the cloud using ./stack.sh and ensure it completes successfully. Once stacked, you can change RECLONE to No.
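
That is, assuming DevStack was cloned to ~/devstack:

    cd ~/devstack
    ./stack.sh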

Cloud Configuration

Once stacking is completed, you'll have a private network (10.1.0.0/24) and a router (router1). To prepare for establishing a VPN connection, a second network, subnet, and router need to be created, and a VM spun up in each private network.

   # Create second net, subnet, router
   source ~/devstack/openrc admin demo
   neutron net-create privateB
   neutron subnet-create --name subB privateB 10.2.0.0/24 --gateway 10.2.0.1
   neutron router-create router2
   neutron router-interface-add router2 subB
   neutron router-gateway-set router2 public
   
   # Start up a VM in the privateA subnet.
   PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`
   nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NET peter
   
   # Start up a VM in the privateB subnet
   PRIVATE_NETB=`neutron net-list | grep privateB | cut -f 2 -d' '`
   nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NETB paul

At this point, you can verify that you have basic connectivity. Note that DevStack creates a static route that allows you to ping the private interface IP of router1 from the privateB network. You can remove the route, if desired.
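
One way to check, as a sketch: the gateway IPs used in the connection commands below (172.24.4.11 and 172.24.4.13) assume a particular allocation order, so confirm them first and then ping them from the host.

    # Look up each router's public (gateway) IP
    neutron router-show router1
    neutron router-show router2
    
    # Verify the routers' public IPs answer pings from the host
    ping -c 2 172.24.4.11
    ping -c 2 172.24.4.13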

IPSec Site-to-site Connection Creation

The following commands will create the IPSec connection:

   # Create VPN connections
   neutron vpn-ikepolicy-create ikepolicy
   neutron vpn-ipsecpolicy-create ipsecpolicy
   neutron vpn-service-create --name myvpn --description "My vpn service" router1 privateA
   
   neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.13 \
   --peer-id 172.24.4.13 --peer-cidr 10.2.0.0/24 --psk secret
   
   neutron vpn-service-create --name myvpnB --description "My vpn serviceB" router2 subB
   
   neutron ipsec-site-connection-create --name vpnconnection2 --vpnservice-id myvpnB \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.11 \
   --peer-id 172.24.4.11 --peer-cidr 10.1.0.0/24 --psk secret

At this point (once the connections become active, which can take up to 30 seconds or so), you should be able to ping from the VM in the privateA network to the VM in the privateB network. You'll see encrypted packets if you run tcpdump on the qg-# interface in one of the router namespaces. If you delete one of the connections, you'll see that the pings fail (if all works out correctly :).
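
For example (the namespace and interface names below are placeholders; substitute the actual qrouter UUID and qg- interface on your host):

    # Find the router namespaces, then watch the external (qg-) interface for ESP
    sudo ip netns list
    sudo ip netns exec qrouter-<uuid> ip addr show          # identify the qg-XXXX interface
    sudo ip netns exec qrouter-<uuid> tcpdump -n -i qg-XXXX esp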

Multiple Local Subnets

Early in Mitaka, IPSec site-to-site connections will support multiple local subnets, in addition to the current multiple peer CIDRs. The multiple local subnet feature is triggered by not specifying a local subnet, when creating a VPN service. Backwards compatibility is maintained with single local subnets, by providing the subnet in the VPN service creation.

To support multiple local subnets, a new capability has been provided (in Liberty), called "Endpoint Groups". Each endpoint group will define one or more endpoints of a specific type, and can be used to specify both local and peer endpoints for IPSec Connections. The Endpoint Groups separate the "what gets connected" from the "how to connect" for a VPN service, and can be used for different flavors of VPN, in the future. An example:

   # Create VPN connections
   neutron vpn-ikepolicy-create ikepolicy
   neutron vpn-ipsecpolicy-create ipsecpolicy
   neutron vpn-service-create --name myvpnC --description "My vpn service" router1

To prepare for an IPSec site-to-site, one would create an endpoint group for the local subnets, and an endpoint group for the peer CIDRs, like so:

   neutron vpn-endpoint-group-create --name my-locals --type subnet --value privateA --value privateA2
   neutron vpn-endpoint-group-create --name my-peers --type cidr --value 10.2.0.0/24 --value 20.2.0.0/24

where privateA and privateA2 are two local (private) subnets, and 10.2.0.0/24 and 20.2.0.0/24 are two CIDRs representing peer (private) subnets that will be used by a connection. Then, when creating the IPSec site-to-site connection, these endpoint group IDs would be specified, instead of the peer-cidrs attribute:

   neutron ipsec-site-connection-create --name vpnconnection3 --vpnservice-id myvpnC \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.11 \
   --peer-id 172.24.4.11 --local-ep-group my-locals --peer-ep-group my-peers --psk secret

Notes:

  • The validation logic makes sure that endpoint groups and peer CIDRs are not intermixed.
  • Endpoint group types are subnet, cidr, network, router, and vlan. However, only subnet and cidr are implemented (for IPSec use).
  • The endpoints in a group must be of the same type, although they can mix IP versions.
  • For IPSec connections, validation currently enforces that the local and peer endpoints all use the same IP version.
  • IPSec connection validation requires that local endpoints are subnets, and peer endpoints are CIDRs.
  • Migration will convert information for any existing VPN services and connections to endpoint groups.
  • The original APIs will work for backward compatibility.

Horizon Support

  • Checkout test branch

No longer needed; Horizon VPNaaS support has been merged.

  • Enable VPN section in Horizon

Note that if q-vpn is enabled, Horizon VPN support is enabled automatically.

Open /opt/stack/horizon/openstack_dashboard/local/local_settings.py

and replace

OPENSTACK_NEUTRON_NETWORK = {
    'enable_vpn': False,
}

with

OPENSTACK_NEUTRON_NETWORK = {
    'enable_vpn': True,
}
  • Restart Apache to pick up the change (see the example after this list)
  • Test user scenarios
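
For instance, on an Ubuntu-based DevStack host (an assumption; use your distribution's equivalent):

    # Restart the web server so Horizon reloads local_settings.py
    sudo service apache2 restart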

https://wiki.openstack.org/wiki/Neutron/VPNaaS/UI