NeutronDevstack

Basic Setup

In order to use Neutron with devstack (http://devstack.org) in a single-node setup, you'll need the following settings in your local.conf (see http://devstack.org/stack.sh.html for more details on local.conf).

[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
# Optional, to enable tempest configuration as part of devstack
enable_service tempest
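
For reference, a fuller local.conf might look like the following. This is a sketch only: the password values are placeholders you should change, and SERVICE_TOKEN is just an example value.

[[local|localrc]]
ADMIN_PASSWORD=secret              # placeholder, pick your own
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=tokentoken           # example value
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
# Optional, to enable tempest configuration as part of devstack
enable_service tempest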


Then run stack.sh as normal.
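
For example, assuming devstack was cloned into ~/devstack (adjust the path to your own checkout):

$ cd ~/devstack
$ ./stack.sh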

If tempest has been successfully configured, a basic set of smoke tests can be run as follows:

$ cd /opt/stack/tempest
$ nosetests tempest/scenario/test_network_basic_ops.py
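
If you prefer tempest's own tox environments, a broader smoke run may also work; whether a "smoke" environment is defined depends on your tempest checkout, so treat this as an assumption:

$ cd /opt/stack/tempest
$ tox -e smoke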


See the Neutron Admin Guide for details on interacting with Neutron: http://docs.openstack.org/trunk/openstack-network/admin/content/index.html
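
As a quick sanity check once stack.sh completes, you can exercise Neutron from the CLI. This is a minimal sketch using the neutron and nova clients that devstack installs; the network name, CIDR, gateway, and image are example values, and <uuid of mynet> stands for whatever id net-create returns:

$ source openrc demo demo
$ neutron net-create mynet
$ neutron subnet-create --name mysubnet --gateway 10.2.2.1 mynet 10.2.2.0/24
$ neutron net-list
$ nova boot --image cirros-0.3.0-x86_64-uec --flavor m1.tiny --nic net-id=<uuid of mynet> test1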

XS/XCP Setup

See the following page for instructions on configuring Neutron (still called Quantum in that older document) with OVS on XS/XCP: QuantumDevstackOvsXcp

Multi-Node Setup

A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes.

You should run at least one "controller node", which should have a localrc that includes at least:


disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
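
In practice the controller's localrc also carries the usual devstack passwords and, on a multi-homed machine, its own address so that compute nodes can reach its services. A hedged sketch (192.168.1.10 is a placeholder for the controller's IP, and the passwords are placeholders):

HOST_IP=192.168.1.10
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD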


You can then run many compute nodes, each of which should have a localrc that includes the following, substituting the IP address of the controller node above:


ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
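
Once stack.sh has finished on a compute node, you can check from the controller that its agents and hypervisor registered. A minimal check, assuming devstack's openrc and admin credentials:

$ source openrc admin admin
$ neutron agent-list
$ nova hypervisor-list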


Note: the need to include 'rabbit' here seems to be a bug, which may have been fixed by the time you read this. If 'rabbit' is not specified, nova-compute will also try to connect to rabbit on localhost rather than on the controller node. See the following link for more info: https://answers.launchpad.net/devstack/+question/197749