NeutronDevstack

Latest revision as of 19:23, 21 August 2014

Basic Setup

To use Neutron with devstack (http://devstack.org) in a single-node setup, you'll need the following settings in your local.conf (see http://devstack.org/stack.sh.html for more details on local.conf).

[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
# Optional, to enable tempest configuration as part of devstack
enable_service tempest
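The service toggles above are usually combined with the standard devstack credential settings; a minimal complete local.conf might look like the following sketch (the password values are placeholders, not requirements):

```ini
[[local|localrc]]
# Placeholder credentials -- substitute your own values
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Swap nova-network for Neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
```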


Then run stack.sh as normal.

If tempest has been successfully configured, a basic set of smoke tests can be run as follows:

$ cd /opt/stack/tempest
$ nosetests tempest/scenario/test_network_basic_ops.py


See the Neutron Admin Guide for details on interacting with Neutron: http://docs.openstack.org/trunk/openstack-network/admin/content/index.html
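As a quick illustration of interacting with Neutron once the stack is up (the network name and CIDR here are arbitrary example values, and the commands assume the era's `neutron` CLI and the `openrc` file that devstack installs):

```shell
# Illustrative only -- requires a running devstack with Neutron enabled.
source openrc admin admin                            # load devstack credentials
neutron net-create demo-net                          # 'demo-net' is an example name
neutron subnet-create demo-net 10.10.0.0/24 --name demo-subnet
neutron net-list                                     # the new network should appear here
```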

XS/XCP Setup

See the following page for instructions on configuring Neutron (then still called Quantum, which indicates how old the linked doc is) with OVS on XS/XCP: QuantumDevstackOvsXcp

Multi-Node Setup

A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes.

You should run at least one "controller node", which should have a localrc that includes at least:


disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron


You can then run many compute nodes, each of which should have a localrc that includes the following, with the IP address of the controller node above:


ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
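Devstack decides which daemons to start on a node by checking each service name against the comma-separated ENABLED_SERVICES list. A simplified sketch of that check (the real implementation is `is_service_enabled` in devstack's `functions-common` and handles more cases, such as service groups):

```shell
#!/bin/sh
# Simplified sketch of devstack's service-enabled check (assumption: the
# real devstack version is more elaborate).
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt

is_service_enabled() {
    case ",$ENABLED_SERVICES," in
        *,"$1",*) return 0 ;;
        *)        return 1 ;;
    esac
}

is_service_enabled q-agt && echo "q-agt enabled"     # prints "q-agt enabled"
is_service_enabled q-svc || echo "q-svc disabled"    # prints "q-svc disabled"
```

This is why the compute-node list above only needs the agent services: `q-svc` is deliberately absent, so the Neutron server runs only on the controller.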


Note: the need to include 'rabbit' here seems to be a bug, which may have been fixed by the time you're reading this. If 'rabbit' is not specified, nova-compute will try to connect to rabbit on localhost rather than on the controller host. See the following link for more information: https://answers.launchpad.net/devstack/+question/197749