Neutron/VPNaaS/HowToInstall
Latest revision as of 23:23, 15 February 2016

Installation

In order to use Neutron-VPNaaS with DevStack (http://devstack.org) in a single-node setup, you'll need the following settings in your local.conf (NEW: the neutron-vpnaas plugin is now added).

[[local|localrc]]

enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
# Optional, to enable tempest configuration as part of devstack
enable_service tempest

# IPSec driver to use. Optional, defaults to OpenSwan.
IPSEC_PACKAGE="openswan"

Quick Test Script

http://paste.openstack.org/raw/44702/

This quick test script creates two sites, each with a router, a network, and a subnet connected to the public network, and then connects both sites via VPN.
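
An IPSec site-to-site connection requires the two sites' private subnets to be disjoint. A quick stdlib sanity check on the 10.1.0.0/24 and 10.2.0.0/24 subnets used throughout this page (illustrative only, not part of the test script):

```python
# Confirm the two sites' private subnets do not overlap; overlapping
# local/peer CIDRs would make the VPN routing ambiguous.
import ipaddress

east_site = ipaddress.ip_network("10.1.0.0/24")
west_site = ipaddress.ip_network("10.2.0.0/24")

assert not east_site.overlaps(west_site), "site subnets must be disjoint"
print("disjoint:", east_site, west_site)  # → disjoint: 10.1.0.0/24 10.2.0.0/24
```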

Using Two DevStack Nodes for Testing

You can use two DevStack nodes connected by a common "public" network to test VPNaaS. The second node can be set up with the same public network as the first node, except that it will use a different gateway IP (and hence router IP). In this example, we'll assume we have two DevStack nodes (East and West), each running on hardware (you can do the same thing with multiple VM guests, if desired). (Note: you can also create a similar topology using two virtual routers with one DevStack.)

Example Topology

A dedicated physical port can be used for the "public" network connection (e.g. eth2) interconnected by a physical switch. You'll need to add the port to the OVS bridge on each DevStack node (e.g. sudo ovs-vsctl add-port br-ex eth2).

      (10.1.0.0/24 - DevStack East)
              |
              |  10.1.0.1
     [Neutron Router]
              |  172.24.4.226
              |
              |  172.24.4.225
     [Internet GW]
              |  
              |
     [Internet GW]
              | 172.24.4.232
              |
              | 172.24.4.233
     [Neutron Router]
              |  10.2.0.1
              |
     (10.2.0.0/24 DevStack West)

DevStack Configuration

For East you can append these lines to the localrc, which will give you a private net of 10.1.0.0/24 and a public network of 172.24.4.0/24:

PUBLIC_SUBNET_NAME=yoursubnet
PRIVATE_SUBNET_NAME=mysubnet
FIXED_RANGE=10.1.0.0/24
NETWORK_GATEWAY=10.1.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.225
Q_FLOATING_ALLOCATION_POOL=start=172.24.4.226,end=172.24.4.231

For West you can add these lines to localrc to use a different local network, public GW (and implicitly router) IP:

PUBLIC_SUBNET_NAME=yoursubnet
PRIVATE_SUBNET_NAME=mysubnet
FIXED_RANGE=10.2.0.0/24
NETWORK_GATEWAY=10.2.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.232
Q_FLOATING_ALLOCATION_POOL=start=172.24.4.233,end=172.24.4.238
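
Because East and West share the 172.24.4.0/24 public network, their gateway IPs and floating-IP allocation pools must not collide. A stdlib sketch of that check, with the addresses copied from the two localrc fragments above (illustrative only):

```python
# Verify the East/West public-network plan: both gateways and both floating
# pools live in 172.24.4.0/24, and the two pools do not overlap.
import ipaddress

public = ipaddress.ip_network("172.24.4.0/24")

def pool(start, end):
    """Expand an allocation pool into a set of addresses."""
    s, e = int(ipaddress.ip_address(start)), int(ipaddress.ip_address(end))
    return {ipaddress.ip_address(a) for a in range(s, e + 1)}

east_gw = ipaddress.ip_address("172.24.4.225")
west_gw = ipaddress.ip_address("172.24.4.232")
east_pool = pool("172.24.4.226", "172.24.4.231")
west_pool = pool("172.24.4.233", "172.24.4.238")

assert all(a in public for a in {east_gw, west_gw} | east_pool | west_pool)
assert not east_pool & west_pool, "floating pools must not overlap"
assert east_gw not in east_pool | west_pool
assert west_gw not in east_pool | west_pool
print("public-network plan is consistent")
```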

VPNaaS Configuration

With DevStack running on East and West and connectivity confirmed (make sure you can ping one router/GW from the other), you can perform these VPNaaS CLI commands.

On East

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet
neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.233 --peer-id 172.24.4.233 --peer-cidr 10.2.0.0/24 --psk secret

On West

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet
neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.226 --peer-id 172.24.4.226 --peer-cidr 10.1.0.0/24 --psk secret

Note: Make sure the security groups are set up appropriately (e.g., allow ICMP and SSH for the VPN subnets).

Verification

You can spin up VMs on each node, and then from the VM ping the far end router's public IP. With tcpdump running on one of the nodes, you can see that pings appear as encrypted packets (ESP). Note that BOOTP, IGMP, and the keepalive packets between the two nodes are not encrypted (nor are pings between the two external IP addresses).
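
In the capture, the encrypted pings appear as ESP (IP protocol 50) packets. Only the SPI and sequence number travel in the clear; everything after them is ciphertext. A small illustrative parser for that cleartext header (synthetic bytes, not taken from a real capture):

```python
# Parse the cleartext portion of an ESP packet: a 32-bit SPI followed by a
# 32-bit sequence number; everything after that is encrypted payload.
import struct

def parse_esp(payload: bytes):
    spi, seq = struct.unpack("!II", payload[:8])
    return {"spi": spi, "seq": seq, "encrypted": payload[8:]}

# Synthetic example: SPI 0xdeadbeef, sequence number 1, dummy ciphertext.
pkt = struct.pack("!II", 0xDEADBEEF, 1) + b"\x00" * 16
info = parse_esp(pkt)
print(hex(info["spi"]), info["seq"])  # → 0xdeadbeef 1
```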

Kilo Update

For Kilo, the localrc contents were moved into local.conf. With (VirtualBox) VMs used as hosts, where eth0 was set up as NAT and eth1 as an Internal Network, the following settings were used in local.conf:

   OVS_PHYSICAL_BRIDGE=br-ex
   PUBLIC_INTERFACE=eth1

Once stacked, VMs were created for testing, VPN IPSec commands were used to establish connections between the nodes, and security group rules were added to allow ICMP and SSH.

VPNaaS with Single DevStack and Two Routers

Simple instructions on how to set up a test environment where a VPNaaS IPSec connection can be established using the reference implementation (OpenSwan). This example uses VirtualBox running on a laptop to provide a VM for running DevStack. It assumes a Kilo release (post-Juno).

The idea here is to have a single OpenStack cloud created using DevStack, with two routers (one created automatically), two private networks (one created automatically), 10.1.0.0/24 and 10.2.0.0/24, and a VM in each private network, and then to establish a VPN connection between the two private nets over the public network (172.24.4.0/24).

Preparation

Create a VM (e.g. 7 GB RAM, 2 CPUs) running Ubuntu 14.04, with a NAT interface for access to the Internet. Clone the latest DevStack repo (Kilo-1 was used for this example).

DevStack Configuration

For this example, the following local.conf is used:

   [[local|localrc]]
   GIT_BASE=https://github.com
   DEST=/opt/stack
   
   disable_service n-net
   enable_service q-svc
   enable_service q-agt
   enable_service q-dhcp
   enable_service q-l3
   enable_service q-meta
   enable_service neutron
   enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas
   
   FIXED_RANGE=10.1.0.0/24
   FIXED_NETWORK_SIZE=256
   NETWORK_GATEWAY=10.1.0.1
   PRIVATE_SUBNET_NAME=privateA
   
   PUBLIC_SUBNET_NAME=public-subnet
   FLOATING_RANGE=172.24.4.0/24
   PUBLIC_NETWORK_GATEWAY=172.24.4.10
   Q_FLOATING_ALLOCATION_POOL="start=172.24.4.11,end=172.24.4.29"
   
   LIBVIRT_TYPE=qemu
   
   IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"
   
   SCREEN_LOGDIR=/opt/stack/screen-logs
   SYSLOG=True
   LOGFILE=~/devstack/stack.sh.log
   
   ADMIN_PASSWORD=password
   MYSQL_PASSWORD=password
   RABBIT_PASSWORD=password
   SERVICE_PASSWORD=password
   SERVICE_TOKEN=tokentoken
   
   Q_USE_DEBUG_COMMAND=True
   
   # RECLONE=No
   RECLONE=yes
   OFFLINE=False

Start up the cloud using ./stack.sh and ensure it completes successfully. Once stacked, you can change RECLONE to No.

Cloud Configuration

Once stacking is completed, you'll have a private network (10.1.0.0/24) and a router (router1). To prepare for establishing a VPN connection, a second network, subnet, and router need to be created, and a VM spun up in each private network.

   # Create second net, subnet, router
   source ~/devstack/openrc admin demo
   neutron net-create privateB
   neutron subnet-create --name subB privateB 10.2.0.0/24 --gateway 10.2.0.1
   neutron router-create router2
   neutron router-interface-add router2 subB
   neutron router-gateway-set router2 public
   
   # Start up a VM in the privateA subnet.
   PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`
   nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NET peter
   
   # Start up a VM in the privateB subnet
   PRIVATE_NETB=`neutron net-list | grep privateB | cut -f 2 -d' '`
   nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NETB paul

At this point, you can verify that you have basic connectivity. Note that DevStack creates a static route that allows you to ping the private interface IP of router1 from the privateB network. You can remove that route, if desired.

IPSec Site-to-site Connection Creation

The following commands will create the IPSec connection:

   # Create VPN connections
   neutron vpn-ikepolicy-create ikepolicy
   neutron vpn-ipsecpolicy-create ipsecpolicy
   neutron vpn-service-create --name myvpn --description "My vpn service" router1 privateA
   
   neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.13 \
   --peer-id 172.24.4.13 --peer-cidr 10.2.0.0/24 --psk secret
   
   neutron vpn-service-create --name myvpnB --description "My vpn serviceB" router2 subB
   
   neutron ipsec-site-connection-create --name vpnconnection2 --vpnservice-id myvpnB \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.11 \
   --peer-id 172.24.4.11 --peer-cidr 10.1.0.0/24 --psk secret

At this point (once the connections become active, which can take up to 30 seconds or so), you should be able to ping from the VM in the privateA network to the VM in the privateB network. You'll see encrypted packets if you run tcpdump on the qg-# interface in one of the router namespaces. If you delete one of the connections, you'll see that the pings fail (if all works out correctly :).

Multiple Local Subnets

Early in Mitaka, IPSec site-to-site connections will support multiple local subnets, in addition to the current multiple peer CIDRs. The multiple-local-subnet feature is triggered by not specifying a local subnet when creating a VPN service. Backwards compatibility with single local subnets is maintained by providing the subnet at VPN service creation.

To support multiple local subnets, a new capability called "Endpoint Groups" was added in Liberty. Each endpoint group defines one or more endpoints of a specific type, and can be used to specify both local and peer endpoints for IPSec connections. Endpoint groups separate "what gets connected" from "how to connect" for a VPN service, and can be used for different flavors of VPN in the future. An example:

   # Create VPN connections
   neutron vpn-ikepolicy-create ikepolicy
   neutron vpn-ipsecpolicy-create ipsecpolicy
   neutron vpn-service-create --name myvpnC --description "My vpn service" router1

To prepare for an IPSec site-to-site connection, one would create an endpoint group for the local subnets and an endpoint group for the peer CIDRs, like so:

   neutron vpn-endpoint-group-create --name my-locals --type subnet --value privateA --value privateA2
   neutron vpn-endpoint-group-create --name my-peers --type cidr --value 10.2.0.0/24 --value 20.2.0.0/24

where privateA and privateA2 are two local (private) subnets, and 10.2.0.0/24 and 20.2.0.0/24 are two CIDRs representing peer (private) subnets that the connection will use. Then, when creating the IPSec site-to-site connection, these endpoint group IDs are specified instead of the peer-cidrs attribute:

   neutron ipsec-site-connection-create --name vpnconnection3 --vpnservice-id myvpnC \
   --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 172.24.4.11 \
   --peer-id 172.24.4.11 --local-ep-group my-locals --peer-ep-group my-peers --psk secret

Notes:

  • The validation logic makes sure that endpoint groups and peer CIDRs are not intermixed.
  • Endpoint group types are subnet, cidr, network, router, and vlan. However, only subnet and cidr are implemented (for IPSec use).
  • The endpoints in a group must be of the same type, although they can mix IP versions.
  • For IPSec connections, validation currently enforces that the local and peer endpoints all use the same IP version.
  • IPSec connection validation requires that local endpoints are subnets, and peer endpoints are CIDRs.
  • Migration will convert information for any existing VPN services and connections to endpoint groups.
  • The original APIs will work for backward compatibility.
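
The validation rules above can be sketched in a few lines. This is an illustrative model only, not the actual Neutron code; for the sketch, local subnets are represented by their CIDRs so their IP version can be derived:

```python
# Illustrative model (not the actual Neutron implementation) of the IPSec
# endpoint-group validation rules: locals are subnets, peers are CIDRs,
# and all endpoints of a connection must use the same IP version.
import ipaddress

def validate_ipsec_endpoint_groups(local_group, peer_group):
    """Each group is {'type': ..., 'endpoints': [CIDR strings]}."""
    if local_group["type"] != "subnet":
        raise ValueError("IPSec local endpoints must be of type 'subnet'")
    if peer_group["type"] != "cidr":
        raise ValueError("IPSec peer endpoints must be of type 'cidr'")
    versions = {ipaddress.ip_network(e).version
                for g in (local_group, peer_group) for e in g["endpoints"]}
    if len(versions) != 1:
        raise ValueError("local and peer endpoints must share one IP version")
    return True

my_locals = {"type": "subnet", "endpoints": ["10.1.0.0/24", "20.1.0.0/24"]}
my_peers = {"type": "cidr", "endpoints": ["10.2.0.0/24", "20.2.0.0/24"]}
print(validate_ipsec_endpoint_groups(my_locals, my_peers))  # → True
```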

Horizon Support

Horizon support has been merged. Note that if q-vpn is enabled, Horizon VPN support is enabled automatically.

To enable the VPN section in Horizon manually:

  • Open /opt/stack/horizon/openstack_dashboard/local/local_settings.py and replace

OPENSTACK_NEUTRON_NETWORK = {
    'enable_vpn': False,
}

with

OPENSTACK_NEUTRON_NETWORK = {
    'enable_vpn': True,
}
  • Restart Apache for the change to take effect.
  • Test the user scenarios described at:

https://wiki.openstack.org/wiki/Neutron/VPNaaS/UI