NeutronDevstack

Basic Setup

Note: This description covers only Quantum's v2 API (i.e. Folsom code).

In order to use Quantum with devstack (http://devstack.org) in a single-node setup, you'll need the following settings in your localrc (see the devstack documentation for more details on localrc).


disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service quantum
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
Q_PLUGIN=openvswitch
NOVA_USE_QUANTUM_API=v2


Then run stack.sh as normal.

Note: For more information about creating networks/subnets and VMs attached to them, see RunningQuantumV2Api.
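
Once stack.sh finishes, a quick sanity check is to confirm that Quantum answers and that a VM can boot onto the private network devstack creates by default. This is only a sketch: it assumes the openrc credentials file devstack generates and the default cirros image and m1.tiny flavor, so adjust the names to match your environment.


# Load the demo credentials generated by devstack.
source openrc demo demo

# Quantum should answer and list the default private network.
quantum net-list

# Boot a VM attached to that network, using the network id shown by
# net-list above, then check that it becomes ACTIVE.
nova boot --image cirros-0.3.0-x86_64-uec --flavor m1.tiny \
    --nic net-id=<uuid-of-private-net> testvm
nova list
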

Multi-Node Setup

A more interesting setup involves running multiple compute nodes, with Quantum networks connecting VMs on different compute nodes.

You should run at least one "controller node", which should have a localrc that includes at least:


disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service quantum
Q_PLUGIN=openvswitch
NOVA_USE_QUANTUM_API=v2


You will likely also want to change your localrc to use a scheduler that balances VMs across hosts:


SCHEDULER=nova.scheduler.simple.SimpleScheduler
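

Once the compute nodes described below are running, a simple way to confirm that the scheduler is spreading instances is to boot a couple of VMs and check which host each one landed on. This is just a sketch using the usual devstack defaults (cirros image, m1.tiny flavor); the OS-EXT-SRV-ATTR:host field is only visible with admin credentials.


# Boot two VMs; if more than one Quantum network exists, add
# --nic net-id=<net-uuid> to each command.
nova boot --image cirros-0.3.0-x86_64-uec --flavor m1.tiny vm1
nova boot --image cirros-0.3.0-x86_64-uec --flavor m1.tiny vm2

# With the SimpleScheduler balancing across hosts, the two VMs should
# usually end up on different compute nodes.
nova show vm1 | grep OS-EXT-SRV-ATTR:host
nova show vm2 | grep OS-EXT-SRV-ATTR:host
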


You can then run many compute nodes, each of which should have a localrc that includes the following, using the IP address of the controller node above:


ENABLED_SERVICES=n-cpu,rabbit,g-api,quantum,q-agt,q-dhcp
Q_PLUGIN=openvswitch
NOVA_USE_QUANTUM_API=v2
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST


Note: the need to include 'g-api' and 'rabbit' here appears to be a bug, which may have been fixed by the time you read this. Without 'g-api', nova-compute dies because it cannot import the glance.common library; the glance API service does not actually need to run on this host, enabling it simply ensures that the glance.common library is installed. If 'rabbit' is not specified, nova-compute will also try to connect to RabbitMQ on localhost rather than on the controller host. See the following link for details on both issues: https://answers.launchpad.net/devstack/+question/197749
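
After stack.sh completes on a compute node, it is worth checking from the controller that the new node registered itself correctly. A minimal check, assuming a standard devstack install:


# On the controller: every compute node's nova-compute service should
# be listed with a recent heartbeat (a ":-)" in the state column).
nova-manage service list

# On the compute node itself: the integration bridge (br-int) used by
# the Open vSwitch plugin should exist.
sudo ovs-vsctl show
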

Using Quantum

Now that you have Quantum up and running, look at RunningQuantumV2Api to find out how to make use of the v2 Quantum API. The commands used to create a network in v2 have changed since the Essex documentation.
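
For example, where the Essex-era workflow typically went through nova-manage, with the v2 API you create networks and subnets directly with the quantum client. The following is only a rough sketch with made-up names; see RunningQuantumV2Api for the authoritative syntax and options.


# Create a network and an IPv4 subnet on it through the v2 API.
quantum net-create demo-net
quantum subnet-create demo-net 10.1.0.0/24 --name demo-subnet

# Inspect the result; ports will appear here as VMs and the DHCP
# agent attach to the network.
quantum net-list
quantum subnet-list
quantum port-list
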