Revision as of 10:59, 7 February 2013
Quantum Agent for the Load Balancing service
Scope of the document
This document describes the Agent component of Quantum's Load Balancing service and defines its requirements, architecture, and some implementation details.
Requirements
A service agent is needed to add several important features to a service whose operations are potentially time-consuming:
- Asynchronous execution.
The plugin is notified by the agent when the requested operation is complete.
- Workload balancing.
Several agents can listen for Quantum server messages, so in large deployments the workload can be split across several hosts.
Architecture
- Plugin part
1.1. Calling the agent. The plugin packs all required information into a JSON message and sends it over AMQP to a durable message queue. The message is consumed by one of the running agents, and the corresponding call is made synchronously via one of the drivers. After the driver call completes, the agent sends a message to the plugin with status information for the modified object.
1.2. Receiving the response. The plugin waits on another queue where responses are posted by the agent. The information in the response should allow the plugin to uniquely identify the object for which the operation status is returned. The plugin consumes response messages synchronously, one by one, since processing them is not time-consuming.
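The request/response flow described above can be sketched in-process. This is a minimal illustration only: plain `queue.Queue` objects stand in for the durable AMQP queues, and all function names are hypothetical, not the actual plugin or agent API.

```python
import json
import queue

# Stand-ins for the durable AMQP queues described above; a real
# deployment would use broker-backed (e.g. RabbitMQ) queues.
request_queue = queue.Queue()
response_queue = queue.Queue()

def plugin_send_request(obj_id, operation, data):
    """Pack the call into a JSON message and publish it (plugin side)."""
    request_queue.put(json.dumps(
        {"obj_id": obj_id, "operation": operation, "data": data}))

def agent_consume_one():
    """Consume one message, invoke the driver, report status (agent side)."""
    msg = json.loads(request_queue.get())
    # A real agent would dispatch to a loadbalancer driver here;
    # we simply report success for the named object.
    response_queue.put(json.dumps(
        {"obj_id": msg["obj_id"], "status": "ACTIVE"}))

def plugin_consume_response():
    """Plugin consumes responses one by one and updates object status."""
    resp = json.loads(response_queue.get())
    return resp["obj_id"], resp["status"]

plugin_send_request("pool-1", "create_pool", {"lb_method": "ROUND_ROBIN"})
agent_consume_one()
print(plugin_consume_response())  # ('pool-1', 'ACTIVE')
```

The response carries the object id, which is what lets the plugin uniquely identify the object whose status is being reported.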
- Agent part
The agent should be able to consume messages in a multithreaded manner, i.e. it should allow several operations to execute at once. One technical difficulty here is device sharing: consider the case when several operations target the same device. Such operations should be executed sequentially rather than concurrently.
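One way to get "concurrent across devices, sequential per device" is a dedicated worker thread per device, each draining its own FIFO queue. The sketch below is illustrative (the class and method names are invented, not the agent's actual code):

```python
import queue
import threading

class DeviceDispatcher:
    """Executes operations concurrently across devices, but strictly
    sequentially for any single device (illustrative sketch)."""

    def __init__(self):
        self._queues = {}
        self._lock = threading.Lock()

    def submit(self, device_id, operation):
        with self._lock:
            if device_id not in self._queues:
                # One worker thread per device drains that device's
                # queue, so its operations never overlap.
                q = queue.Queue()
                self._queues[device_id] = q
                threading.Thread(target=self._worker, args=(q,),
                                 daemon=True).start()
        self._queues[device_id].put(operation)

    def _worker(self, q):
        while True:
            op = q.get()
            op()          # driver call runs synchronously
            q.task_done()

    def wait(self):
        for q in self._queues.values():
            q.join()

# Operations for the same device complete in submission order.
log = []
d = DeviceDispatcher()
for i in range(5):
    d.submit("lb-device-1", lambda i=i: log.append(i))
d.wait()
print(log)  # [0, 1, 2, 3, 4]
```

Because each device has exactly one consumer thread, no locking around the driver call itself is needed; the queue provides the serialization.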
- Authentication.
The agent does not perform authentication of received requests, nor do the requests contain any authentication information. It is presumed that authentication was done on the plugin side, so the caller has access to the device it is configuring.
Brief Component/Workflow diagram:
File:Quantum$$LBaaS$$Agent$Plugin-Agent Architecture.png
Implementation
- Agent part
One of the most important issues in the plugin-agent-driver architecture is device sharing/locking. Since the agent can process requests from the plugin concurrently, it is important to preserve atomicity of access to a device so that different calls for the same device do not break its configuration. Consider the following use case:
A tenant creates a vip object and then creates members. Depending on how those actions are packed as requests to the agent, the agent may start executing member creation requests before the vip creation is finished. The solution to this problem is for the agent to queue requests for each device internally, i.e. requests for the same device are processed sequentially.
The agent is also responsible for detecting timed-out operations and sending a message with the appropriate reason to the plugin's response consumer.
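Timeout detection can be sketched by running the driver call in a thread and waiting on it with a deadline; if the deadline passes, an ERROR status with a reason is returned instead. This is a simplified illustration (function names are hypothetical; the real agent would track in-flight operations rather than joining threads):

```python
import threading
import time

def run_with_timeout(obj_id, driver_call, timeout):
    """Run a driver call; report a timeout with a reason if it does
    not finish in time (illustrative sketch)."""
    done = threading.Event()

    def target():
        driver_call()
        done.set()

    threading.Thread(target=target, daemon=True).start()
    if done.wait(timeout):
        return {"obj_id": obj_id, "status": "ACTIVE"}
    # This dict models the message sent to the plugin's response consumer.
    return {"obj_id": obj_id, "status": "ERROR",
            "reason": "operation timed out after %s seconds" % timeout}

fast = run_with_timeout("pool-1", lambda: None, timeout=1.0)
slow = run_with_timeout("pool-2", lambda: time.sleep(5), timeout=0.1)
print(fast["status"], slow["status"])  # ACTIVE ERROR
```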
- Plugin part
The plugin is also responsible for ensuring atomic device access. There are two possible approaches:
- The plugin locks the object in the DB and replies with some HTTP error to a concurrent call for the same resource, putting responsibility on the client to handle it.
- The plugin queues the request as usual and lets the agent queue requests for a specific device so that they execute sequentially.
The second approach may be preferable from the client's perspective.
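The first approach (lock in the DB, reject concurrent callers) can be sketched as a status check on the stored object. The names below are illustrative, not the plugin's actual code:

```python
class StateConflict(Exception):
    """Would map to an HTTP error response in the API layer."""

# Transient states while an operation is in flight on the device.
PENDING = {"PENDING_CREATE", "PENDING_UPDATE", "PENDING_DELETE"}

def start_update(obj):
    """Approach 1: refuse a concurrent call while an earlier operation
    on the same resource is still in flight (illustrative sketch)."""
    if obj["status"] in PENDING:
        raise StateConflict("resource %s is busy" % obj["id"])
    obj["status"] = "PENDING_UPDATE"
    return obj

pool = {"id": "p1", "status": "ACTIVE"}
start_update(pool)
print(pool["status"])  # PENDING_UPDATE
```

A second `start_update(pool)` at this point would raise `StateConflict`, which is exactly the burden the first approach places on the client.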
How to run the code on DevStack
The code is available at https://review.openstack.org/#/c/20579. The patch introduces a new Quantum service agent, RPC APIs, an extension for the loadbalancer plugin, a driver API and a dummy loadbalancer driver for testing purposes. To run the code on DevStack:
- Check out the Quantum code from the gerrit topic, e.g.
git fetch https://review.openstack.org/openstack/quantum refs/changes/79/20579/6 && git checkout FETCH_HEAD
- Update /etc/quantum/quantum.conf with the following parameters:
# Advanced service modules
service_plugins = quantum.plugins.services.loadbalancer.loadbalancerPlugin.LoadBalancerPlugin

# Service schedulers
balancer_devices_definitions = /etc/quantum/devices.json

# Loadbalancer drivers
service_drivers = quantum.tests.unit.drivers.dummy.dummy_driver.DummyDriver
- Create /etc/quantum/devices.json with the list of available devices. For workflow testing one may use the dummy implementation:
{
  "devices": [
    {
      "name": "dummy-d",
      "type": "dummy",
      "version": "1.0",
      "status": "ACTIVE",
      "management": {
        "ip": "172.24.1.1",
        "port": "22"
      }
    }
  ]
}
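A quick way to confirm the definitions file is well-formed before restarting the server is to load it with Python's `json` module. The `load_devices` helper below is an assumption for illustration, not part of the patch:

```python
import json
import tempfile

def load_devices(path):
    """Read a devices definition file and do a minimal sanity check
    (illustrative helper; the real loader lives in the agent code)."""
    with open(path) as f:
        devices = json.load(f)["devices"]
    for dev in devices:
        for key in ("name", "type", "version", "status", "management"):
            if key not in dev:
                raise ValueError("device entry missing %r" % key)
    return devices

# Round-trip the example definition above through a temporary file.
example = {"devices": [{"name": "dummy-d", "type": "dummy",
                        "version": "1.0", "status": "ACTIVE",
                        "management": {"ip": "172.24.1.1", "port": "22"}}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(example, f)
devices = load_devices(f.name)
print(devices[0]["name"])  # dummy-d
```

Note that strict JSON forbids trailing commas, so a stray comma after the last key (a common copy-paste slip) would make `json.load` fail.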
- Restart the Quantum server (in the corresponding screen session)
- Start the agent with
/opt/stack/quantum$ ./bin/quantum-service-agent --config-file /etc/quantum/quantum.conf --debug
- Check that the workflow works by creating a pool object:
quantum lb-pool-create --lb-method ROUNDROBIN --name Uno --protocol TCP --subnet-id <subnet-id>
The command outputs the pool in the PENDING_CREATE state. To verify that it has actually been created (state is ACTIVE), run
quantum lb-pool-show <pool-id>
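Since creation is asynchronous, the status flips from PENDING_CREATE to ACTIVE only after the agent reports back, so a check script has to poll. A sketch, where `show_fn` stands in for whatever fetches the `lb-pool-show` output (the helper name is hypothetical):

```python
import time

def wait_for_active(show_fn, pool_id, timeout=30, interval=1):
    """Poll until the pool leaves PENDING_CREATE or the timeout expires
    (illustrative sketch of checking the asynchronous workflow)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = show_fn(pool_id)["status"]
        if status != "PENDING_CREATE":
            return status
        time.sleep(interval)
    raise TimeoutError("pool %s still pending after %ss" % (pool_id, timeout))

# Fake show function: the pool becomes ACTIVE on the third poll.
calls = {"n": 0}
def fake_show(pool_id):
    calls["n"] += 1
    return {"status": "ACTIVE" if calls["n"] >= 3 else "PENDING_CREATE"}

print(wait_for_active(fake_show, "pool-1", interval=0))  # ACTIVE
```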
The full list of LBaaS CLI commands is available at Quantum/LBaaS/CLI