Neutron/ServiceAgent
== Scope ==

There are two more service agents planned to be added in the Havana release. The goal of this blueprint is to specify a common architecture for all service agents and to show how the existing agent-scheduler framework may be used. The new service agent will be used as a base for the lbaas, vpnaas and fwaas agents.

== Use Cases ==

''Note''. Here and below the term 'resource object' means the root object of the service model (e.g. a VIP in the case of LBaaS, vpn_connections in the case of VPNaaS). The term 'appliance' represents the machine/process where the service runs.

From a deployment perspective an appliance can be:

# an on-host process - e.g. haproxy running as a process on the network controller, or a dedicated node running haproxy
# a VM running on a compute node
# a physical appliance


==== Figure 1. Service agent deployment ====

[[File:service_agent_deployment.png]]

In this deployment the user has one network controller, two dedicated hosts where haproxy-namespace agents run, one host with an F5 agent, and two hardware load balancers managed by the F5 agent.


==== Use case 1. Load balancing using 3 nodes with HAProxy processes ====

* Enable 'lbaas-haproxy-namespace-agent' on every host
* Configure the scheduler (similar to what is done for the L3 and DHCP agents)
* Register the agents
* Create the LB logical objects. Once a VIP is created, the agent scheduler picks an agent and binds them, as shown in the sketch after this list. The agent compiles the logical model into an haproxy configuration and launches the process (the way it is done in the Grizzly reference implementation)
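
A minimal sketch of the pick-and-bind step, loosely modeled on the ChanceScheduler used for the DHCP agent scheduling in Grizzly; the class name LbaasAgentScheduler and the plugin method bind_vip_to_agent are hypothetical, not existing Quantum code:

<pre>
import random


class LbaasAgentScheduler(object):
    """Hypothetical scheduler sketch: pick a random alive lbaas agent."""

    def schedule(self, plugin, context, vip):
        # Candidate agents: registered lbaas agents that are alive
        # according to the agent framework's heartbeats.
        candidates = [agent for agent in plugin.get_agents(context)
                      if agent['agent_type'] == 'Loadbalancer agent'
                      and agent['alive']]
        if not candidates:
            return None
        chosen = random.choice(candidates)
        # bind_vip_to_agent is a hypothetical plugin method that would
        # persist the VIP<->agent binding (see Data Model Changes below).
        plugin.bind_vip_to_agent(context, vip['id'], chosen['id'])
        return chosen
</pre>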

==== Use case 2. Load balancing using F5 hardware balancers ====

* Enable 'lbaas-f5-agent' on the network controller (here, instead of a specific lbaas-f5-agent, we may have an lbaas-agent with an f5 driver). The agent is configured with the list of devices. The configuration may be static (via conf file, as sketched after this list) or exposed via API (like that for the l3-agent and dhcp-agent)
* Configure the scheduler. Here a specific scheduler may be chosen - one that is aware of the devices' state and their capacity.
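
A minimal sketch of what a static device list in the agent's conf file could look like; the section and option names are hypothetical, since no such file exists yet:

<pre>
[DEFAULT]
# Hypothetical option: static list of devices this agent manages
devices = f5-lb-1, f5-lb-2

[device:f5-lb-1]
address = 10.0.0.10

[device:f5-lb-2]
address = 10.0.0.11
</pre>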

==== Use case 3. VPN tunnel is created ====

* Enable 'vpnaas-agent' on the network node
* Configure the scheduler with a specific VPNaaS-aware scheduler

== Implementation Overview ==

The agent scheduler framework already provides the parts needed for the implementation: a pluggable scheduling algorithm and API extensions; the l3-agent and dhcp-agent may be used as reference points.

From the code perspective the changes are:

# Introduce a new type of agent (for lbaas we can base it on the reference implementation of lbaas-namespace-agent)
# Introduce a new scheduling algorithm that takes into account the capabilities of appliances, their load, etc. (see the sketch after this list)
# Introduce a reporting protocol between the agent and the agent scheduler
# Introduce an API for the service agent just like it is done for the l3-agent and dhcp-agent. This API may support operations for managing the list of devices.
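
A minimal sketch of what the capacity-aware algorithm of item 2 could look like, assuming agents report a load metric via the protocol of item 3; the class name and the report format are assumptions, not existing code:

<pre>
class LeastLoadedScheduler(object):
    """Hypothetical scheduler for item 2: pick the least loaded appliance.

    Assumes each agent periodically reports, via the protocol of item 3,
    a state dict such as:
        {'agent_id': 'a1', 'capacity': 100, 'allocated': 42,
         'capabilities': ['TCP', 'HTTP']}
    """

    def schedule(self, agent_states, required_capability):
        # Keep only agents that support the requested capability and
        # still have spare capacity.
        candidates = [s for s in agent_states
                      if required_capability in s['capabilities']
                      and s['allocated'] < s['capacity']]
        if not candidates:
            return None
        # Choose the agent with the lowest relative load.
        return min(candidates,
                   key=lambda s: float(s['allocated']) / s['capacity'])
</pre>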

The architecture is the following:

==== Figure 2. Plugin and agent architecture ====

[[File:service_agent_architecture.png]]

'''Note'''. There are options for the agent implementation: a mixin with specific service implementations, or an extension per service.
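
A minimal sketch of the two options; all class names here are hypothetical:

<pre>
# Option 1: mixin - one agent class composed with per-service mixins.
class LbaasMixin(object):
    def create_vip(self, context, vip):
        pass  # compile the logical model and drive haproxy / a device


class ServiceAgent(LbaasMixin):
    """Agent whose RPC surface is the union of its service mixins."""


# Option 2: extension-per-service - a generic agent dispatching to
# pluggable service extensions loaded at runtime.
class GenericServiceAgent(object):
    def __init__(self, extensions):
        # e.g. {'LOADBALANCER': <lbaas extension>, 'VPN': <vpn extension>}
        self.extensions = extensions

    def handle(self, service_type, method, *args, **kwargs):
        return getattr(self.extensions[service_type], method)(*args, **kwargs)
</pre>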

== Data Model Changes ==

Introduce new tables for storing the binding between a resource object and an agent, as sketched below. As an option we may think of refactoring the existing NetworkDhcpAgentBinding and RouterL3AgentBinding into a common model.
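
A minimal sketch of such a binding table, modeled on NetworkDhcpAgentBinding; the class and column names are hypothetical:

<pre>
import sqlalchemy as sa

from quantum.db import model_base


class ResourceAgentBinding(model_base.BASEV2):
    """Hypothetical common binding between a resource object and an agent.

    Modeled on the existing NetworkDhcpAgentBinding and
    RouterL3AgentBinding tables.
    """
    # Root object of the service model: a VIP id, a vpn_connection id, ...
    resource_id = sa.Column(sa.String(36), primary_key=True)
    # Which service the resource belongs to, e.g. 'LOADBALANCER', 'VPN'.
    resource_type = sa.Column(sa.String(32), nullable=False)
    agent_id = sa.Column(sa.String(36),
                         sa.ForeignKey('agents.id', ondelete='CASCADE'),
                         primary_key=True)
</pre>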

== Configuration variables ==

* quantum.conf: a variable to configure the scheduler for the service agent (see the sketch below).
* service-agent.conf
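
A minimal sketch of the quantum.conf variable; the new option name is hypothetical, chosen by analogy with the existing network_scheduler_driver and router_scheduler_driver options:

<pre>
[DEFAULT]
# Existing options, for analogy:
#   network_scheduler_driver = quantum.scheduler.dhcp_agent_scheduler.ChanceScheduler
#   router_scheduler_driver = quantum.scheduler.l3_agent_scheduler.ChanceScheduler
# Hypothetical new option for service agents:
service_agent_scheduler_driver = quantum.scheduler.service_agent_scheduler.LeastLoadedScheduler
</pre>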

== API's ==

An API to configure the list of devices for a device-oriented service agent. This is not a top priority for the first version, as devices may be configured statically.
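
A minimal sketch of what such an extension could look like, by analogy with the dhcp-networks and l3-routers sub-resources of the existing agent scheduler extension; the 'devices' sub-resource is hypothetical:

<pre>
GET    /v2.0/agents/{agent_id}/devices           list devices managed by the agent
POST   /v2.0/agents/{agent_id}/devices           add a device to the agent
DELETE /v2.0/agents/{agent_id}/devices/{device}  remove a device from the agent
</pre>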

== CLI Requirements ==

CLI for device configuration (optional); a possible shape is sketched below.
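
A possible shape for such commands, by analogy with the existing dhcp-agent-network-add and l3-agent-router-add commands; the command names below are hypothetical:

<pre>
quantum service-agent-device-add <agent-id> <device>
quantum service-agent-device-remove <agent-id> <device>
quantum service-agent-device-list <agent-id>
</pre>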

== Horizon Requirements ==

'tbd'