Revision as of 00:34, 20 October 2012 by Salvatore

API proposal for service insertion:

API Changes

ServiceType Definition

ServiceType: {

 id: uuid,
 name: string,
 service_definitions: list<ServiceDefinition>,
 default: {True|False} # Only one service_type can be the default one. This is the service type which should be picked if none is specified in the API (think backward compatibility)
}

ServiceDefinition: {

 id: uuid,
 name: string,
 type: enum(LB, VPN, FW) # or whatever you think is right
 provider: string # This ultimately must map to a Python class (other options welcome)
 capabilities: list of API extensions which are enabled for this specific service provider
}


Routers --> extend with attribute: services:service_type_id # which refers to a service_type object
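The ServiceType and ServiceDefinition resources can be sketched as plain Python classes. This is a minimal sketch only: the dataclass layout, the ServiceKind enum values and the pick_service_type helper are illustrative assumptions, not part of the proposal.

```python
import uuid
from dataclasses import dataclass, field
from enum import Enum


class ServiceKind(Enum):
    # Illustrative values; the proposal leaves the enum open ("or whatever")
    LB = "LB"
    VPN = "VPN"
    FW = "FW"


@dataclass
class ServiceDefinition:
    name: str
    type: ServiceKind
    provider: str  # ultimately maps to a Python class
    capabilities: list = field(default_factory=list)  # enabled API extensions
    id: str = field(default_factory=lambda: str(uuid.uuid4()))


@dataclass
class ServiceType:
    name: str
    service_definitions: list = field(default_factory=list)
    default: bool = False  # only one service_type may be the default
    id: str = field(default_factory=lambda: str(uuid.uuid4()))


def pick_service_type(service_types, requested_id=None):
    """Return the requested service_type or, for backward compatibility,
    the default one when none is specified in the API call."""
    if requested_id is not None:
        return next(st for st in service_types if st.id == requested_id)
    return next(st for st in service_types if st.default)
```

The helper captures the backward-compatibility rule stated above: when a request carries no service_type, the single default one is selected.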

NOTE: We need to understand whether we also want to model a network-level service insertion capability. The insertion point is purely logical and determines the scope of the advanced service being plugged in - no changes are expected in the back-end implementation. This could be achieved by assigning a service_type_id to networks too; however, we might need to describe in the service_type definition which insertion_modes are allowed for each service_definition. Good food for thought, but not necessarily something we want to address immediately. Please note that the "Floating" insertion is not necessarily a network-level insertion, as routing might still occur (see examples in slides). Network-level insertion might be an interesting concept for services such as L2 VPNs, or if we want to move the DHCP service we already run into the service insertion scheme.

Enabling or Disabling services on a router

Option 1: Resource actions

 POST /routers/<router_id>/enable_service
 POST /routers/<router_id>/disable_service

Option 2: List attribute

Option 3: Sub-resource

 POST /routers/<router_id>/enabled_services
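The three options differ only in where the operation lives in the URI space. A hypothetical sketch of the resulting request shapes follows; the request bodies and the list-attribute name used for option 2 are assumptions, only the option 1 and 3 paths come from the proposal:

```python
def enable_service_request(router_id, service, option):
    """Return the (method, path, body) a client would issue under each
    of the three proposed API shapes. Body layouts are assumptions."""
    base = "/routers/%s" % router_id
    if option == 1:   # resource action
        return ("POST", base + "/enable_service", {"service": service})
    if option == 2:   # list attribute, updated via PUT on the router itself
        return ("PUT", base, {"router": {"enabled_services": [service]}})
    if option == 3:   # sub-resource
        return ("POST", base + "/enabled_services", {"service": service})
    raise ValueError("unknown option: %r" % option)
```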

Load Balancer, or other entities that allow tenants to define higher-layer services

---> extend with attribute: services:service_type_id # standalone, one-arm, or whatever we call it
---> specify router_id # insertion in routed mode

[ALTERNATIVE] Consider attaching services to routers or networks with interfaces, as we do for the DHCP service. The DHCP service is sort of half-hidden in the logical topology - one can see its port, but not retrieve it from the API. Something similar happens with advanced services (think about the source IP address on a LB - exactly the same thing as a router interface).

Service insertion framework API

Propose it as an extension. This would be a required extension for eventual service API extensions - such as LB. This might imply that the rework of the extension API framework should be completed first.

Policy Engine

The default behaviour should be:

service_type operations: admin_only

Tenants should be allowed to specify a service_type for their routers and/or "advanced services".
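Expressed in the policy.json syntax used by Quantum's policy engine, the defaults above might look as follows. The action names are hypothetical, following the usual <action>_<resource> convention; this is a sketch, not an actual policy file:

```json
{
    "create_service_type": "rule:admin_only",
    "update_service_type": "rule:admin_only",
    "delete_service_type": "rule:admin_only",
    "get_service_type": "rule:regular_user",
    "update_router:service_type_id": "rule:admin_or_owner"
}
```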

DB Changes

ServiceType definition will require two model classes. The ServiceType class will have a 1:n relationship with the ServiceDefinition class. This class will have an attribute for defining the type of the advanced services provider. This could either be another class in the model or extracted from the Quantum configuration file.

ServiceProvider definition could as well be either another model class or information extracted from the configuration file (see next section on plugin integration)

Every time a router is created, an instance of service_type should be created. Even if this could be part of the router object, it might come in handy to have it as a standalone model class.


          Router - 1 ---- 1 --\
                               service_type_instance - n ---- 1 - ServiceType
          LB     - 1 ---- 1 --/

If the service_type_instance model class is not used, model classes for LB and other types of services will directly refer to a service_type, which is nevertheless acceptable.
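A minimal sketch of the standalone service_type_instance idea, assuming class and attribute names that are not in the proposal: every consumer (a router here, but equally a LB) gets its own instance pointing at a shared ServiceType.

```python
import itertools

# Simple unique-id source for the sketch; a real model would use uuids.
_next_id = itertools.count(1)


class ServiceTypeInstance(object):
    """Standalone model class: an n ---- 1 link from a consumer
    (router, LB, ...) to a ServiceType."""
    def __init__(self, service_type_id):
        self.id = "sti-%d" % next(_next_id)
        self.service_type_id = service_type_id


class Router(object):
    """Every time a router is created, a service_type instance is
    created along with it, as described above."""
    def __init__(self, name, service_type_id):
        self.name = name
        self.service_type_instance = ServiceTypeInstance(service_type_id)
```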

Plugin Integration

The concept of service type implicitly allows for different paths for an API call. Ideally one would reuse the "mixin" mechanism, which proved successful when integrating DHCP and L3 services. Augmenting the core plugin with the capability of handling advanced services should definitely be allowed; in this case there will still be a 'core_plugin' only. However, it is a principle of Quantum that plugins are required only to implement the plugin interface; hence they are not required to adopt the mixin mechanism. Another aspect to consider is that there could be multiple plugins at the same time, and the intelligence for dispatching a call to one plugin or another should reside in the API layer.

It seems that the following situations are therefore to be considered:

1) Single plugin with the same provider for all advanced services (think about the current implementation). In this case no service provider definition should be needed.

2) Single plugin with distinct "drivers" for each service provider. This implies that all services implement the same db model. The mixin classes for advanced services will have a driver interface which will be invoked for performing the actual configuration. In this case service drivers should be specified in the configuration file. For each service multiple drivers should be allowed (consider using multivalued options). The service driver should not be seen as a driver for a specific appliance implementing the service; it is rather a driver managing the service for a given provider. As an example, it might manage a pool of physical load balancers, or handle the provisioning of integrated-services virtual appliances.

3) Multiple independent plugins. In this case several plugins might be configured at the same time. The API layer will need to figure out, for each router and/or advanced service, to which plugin the API call should be dispatched.

4) An arbitrary mix of scenarios #2 and #3.
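Scenario #2 can be sketched as a single mixin whose db logic stays common while per-provider work is delegated to drivers loaded from a multivalued configuration option. All class names and the 'provider:module.Class' option format below are illustrative assumptions, not existing Quantum interfaces:

```python
import importlib


class LoadBalancerDriver(object):
    """Hypothetical interface invoked by the LB mixin for the actual
    back-end configuration work."""
    def create_vip(self, context, vip):
        raise NotImplementedError

    def delete_vip(self, context, vip):
        raise NotImplementedError


class DummyLBDriver(LoadBalancerDriver):
    """No-op driver, e.g. for an agent that polls the server instead
    of being configured in a push style."""
    def __init__(self):
        self.created = []

    def create_vip(self, context, vip):
        self.created.append(vip["id"])

    def delete_vip(self, context, vip):
        self.created.remove(vip["id"])


def load_drivers(multivalued_option):
    """Instantiate one driver per 'provider:module.Class' entry, as a
    multivalued option in the configuration file might list them."""
    drivers = {}
    for entry in multivalued_option:
        provider, path = entry.split(":", 1)
        module_name, cls_name = path.rsplit(".", 1)
        cls = getattr(importlib.import_module(module_name), cls_name)
        drivers[provider] = cls()
    return drivers


class LoadBalancerMixin(object):
    """The db logic lives here once; back-end work is delegated to the
    driver selected by the provider of the service definition."""
    def __init__(self, drivers):
        self.drivers = drivers

    def create_vip(self, context, vip, provider):
        # ... persist the vip in the common db model first ...
        self.drivers[provider].create_vip(context, vip)
```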

Scenarios #3 and #4 appear more complex than #1 and #2 (are they?). This is for the following reasons:

- multiple plugins will require the API layer to manipulate data from different sources. Even if each plugin returns data in the same format, we will still need logic for collecting data from multiple sources on GET requests, and for dispatching commands to the appropriate destination on POST/PUT/DELETE
- currently all plugins already implement the "mixin" approach
- compatibility between plugins (not something we need to worry about from day 1, but still an interesting problem)
- interactions among plugins. In scenarios #1 and #2, having a single plugin, the data model has all the information it needs. With scenarios #3 and #4 the "additional" plugins will have to interact with the "base" plugin (or in some cases even among themselves); this could be achieved through the REST interfaces or by creating appropriate RPC interfaces

Grizzly plan: provide support for #1 and #2 (actually #1 is just a particular case of #2)

Changes to the current implementation

--> Define a driver interface for L3 and Floating IP functionalities
--> Implement drivers for the l3_agent (they will probably be empty, as the l3 agent polls the quantum server)
--> Allow multiple external gateways per router (thus enabling PBR)? [Probably not for Grizzly]
--> Allow multiple routers per external network? [Probably yes for Grizzly]
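A possible shape for that driver interface, with the l3_agent driver left as a no-op since the agent polls the server. The method names are illustrative, loosely mirroring the existing L3/Floating IP operations; this is a sketch, not the actual interface:

```python
class L3Driver(object):
    """Hypothetical driver interface for L3 and Floating IP functionality."""
    def create_router(self, context, router):
        raise NotImplementedError

    def delete_router(self, context, router):
        raise NotImplementedError

    def associate_floatingip(self, context, floatingip):
        raise NotImplementedError


class L3AgentDriver(L3Driver):
    """Probably empty: the l3 agent polls the quantum server, so there
    is no push-style configuration to perform here."""
    def create_router(self, context, router):
        pass

    def delete_router(self, context, router):
        pass

    def associate_floatingip(self, context, floatingip):
        pass
```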

Items worth being considered:

Think about the current Floating IP API and whether it fits the service insertion model (as it plugs into an external network and derives the router from the context) --> allow for router_id or service_type_id specification (or restrict floating IPs to be applicable to routers only)?

Think about how the DHCP service we implement today through the agent fits in this model. We do not expose any DHCP-specific API; we implicitly create an instance of this service for each subnet for which dhcp_enable=True. We can either leave DHCP as it is, or include it in the service insertion model. If we go the second route, we need to think about network-level insertion, and devise a way of doing this in a backward-compatible way.

POC implementation (with APIs already available in Quantum - namely L3 and Floating IPs)

Integrated services VM - provides the same services as the L3 agent, but all within a "router VM".

Choice of service type definition:

1) ServiceType1: routing, floating_ips: l3_agent
2) ServiceType2: routing, floating_ips: integrated_services_vm

Provisioning of back-end resources: this should probably always be a driver-specific task.
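Expressed as data, the two POC service types might look like this; the dict layout and the lookup helper are illustrative only:

```python
# Each POC service type maps the services it covers to the provider
# (and hence the driver) that handles them.
SERVICE_TYPES = {
    "ServiceType1": {"routing": "l3_agent",
                     "floating_ips": "l3_agent"},
    "ServiceType2": {"routing": "integrated_services_vm",
                     "floating_ips": "integrated_services_vm"},
}


def provider_for(service_type, service):
    """Since back-end provisioning is driver-specific, look up which
    provider handles a given service under the chosen service type."""
    return SERVICE_TYPES[service_type][service]
```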