Revision as of 17:49, 19 May 2015
Title: Tacker / ServiceVM
Aim: To develop a framework to simplify hosting of services in virtual and physical appliances
Meetings: Wednesdays 1700 UTC @
Tags: [ServiceVM] [Tacker]
Mission & Scope
Tacker is a project developing a framework to simplify hosting of services in virtual and physical appliances. The framework relieves service developers of implementing their own functionality to manage the devices that host service instances.
The capabilities and functionality envisioned for Tacker are:
- Device inventory to keep track of available devices.
- Template constructs to describe and categorize devices of different types.
- Lifecycle management of virtual machines and containers, including maintenance of VM pools that can hide boot latency.
- Control of capacity allocation within devices.
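The template and inventory constructs above can be sketched roughly as follows. This is an illustrative model only, assuming hypothetical names (DeviceTemplate, DeviceInventory); it is not Tacker's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceTemplate:
    """Hypothetical template describing a category of device."""
    name: str          # e.g. "firewall-vm"
    device_type: str   # "vm", "container", or "physical"
    image: str         # instance image to boot (for virtual devices)
    flavor: str        # device specification (CPU/RAM) used to run the image

@dataclass
class DeviceInventory:
    """Hypothetical inventory tracking available devices by template."""
    devices: dict = field(default_factory=dict)  # device_id -> template name

    def register(self, device_id: str, template: DeviceTemplate) -> None:
        self.devices[device_id] = template.name

    def by_template(self, name: str) -> list:
        return [d for d, t in self.devices.items() if t == name]

inv = DeviceInventory()
inv.register("dev-1", DeviceTemplate("firewall-vm", "vm", "fw-image", "m1.small"))
print(inv.by_template("firewall-vm"))  # -> ['dev-1']
```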
Implementation or orchestration of the services themselves (e.g., FWaaS or LBaaS) is NOT in the scope of Tacker. The initial use cases are oriented toward advanced network services, but use cases from other areas are not excluded. Indeed, contributors from non-networking communities are welcome to participate and contribute.
Relation to Nova, Glance, and Heat: Tacker uses Nova and Glance for VM and image management. Tacker does not currently use Heat as a subsystem to manage virtual devices, but it could.
High Level Functionality
- VNF Catalog
- Basic life-cycle of VNF (define/start/stop/undefine)
- Basic health monitoring of VNF
- Re-spin of VNF on failure
- Maintaining configuration state of VNF
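The basic life-cycle above (define/start/stop/undefine, plus re-spin on failure) can be sketched as a small state machine. The class and method names here are hypothetical illustrations, not Tacker's implementation.

```python
class VNF:
    """Illustrative VNF life-cycle: define/start/stop/undefine + re-spin."""
    TRANSITIONS = {
        "defined": {"start"},
        "running": {"stop", "fail"},
        "stopped": {"start", "undefine"},
        "failed":  {"start"},  # re-spin: a failed instance may be restarted
    }

    def __init__(self, name):
        self.name = name
        self.state = "defined"

    def _move(self, action, new_state):
        if action not in self.TRANSITIONS[self.state]:
            raise ValueError(f"cannot {action} from state {self.state}")
        self.state = new_state

    def start(self):    self._move("start", "running")
    def stop(self):     self._move("stop", "stopped")
    def fail(self):     self._move("fail", "failed")
    def undefine(self): self._move("undefine", "undefined")

vnf = VNF("fw-1")
vnf.start()      # defined -> running
vnf.fail()       # health monitoring detects a failure
vnf.start()      # re-spin: failed -> running
vnf.stop()
vnf.undefine()
print(vnf.state)  # -> undefined
```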
{| class="wikitable"
! # !! Feature !! Description !! Actor !! Priority !! Target release
|-
| 1 || Define ServiceVM || Ability to define a ServiceVM template using (a) an instance image and (b) a device specification (to run the image) || Operator || High || Kilo
|-
| 2 || ServiceVM CRUD API || API for basic life-cycle management of ServiceVMs: (a) acquire a ServiceVM instance (b) release a ServiceVM instance || Plugin || High || Kilo
|-
| 3 || ServiceVM Neutron plugging driver || Neutron port attributes to plug a ServiceVM into the appropriate networks (a) tenant L2 network || Plugin || High || Kilo
|-
| 4 || Resource pools of ServiceVMs || The operator should be able to configure a pool of ServiceVMs kept on standby for immediate allocation (to avoid boot-up latency) || Operator || Medium || L
|-
| 5 || Service allocation using a partial ServiceVM || Allocate a portion of a ServiceVM for a specific service. Reason: the overhead of spinning up another VM instance is high. If a ServiceVM supports multi-tenancy (e.g., VRF for network services), a single ServiceVM can be segregated to support multiple tenant services. Note: full tenant isolation must be maintained in this scheme || Operator, Plugin || Medium || L
|-
| 6 || Scheduling scheme for ServiceVM allocation across a pool of VMs || Ability to schedule ServiceVMs based on different filters (Gnaat integration). For example, Tenant-A might have higher application-throughput needs and require a "high-end" firewall ServiceVM with SR-IOV NICs. The Tacker scheduler needs to get this "hint" from the plugin and allocate an appropriate firewall ServiceVM from the pool || Operator, Plugin || Low || M
|-
| 7 || Capacity management of ServiceVMs || Ability to report and use "remaining-capacity" data within a ServiceVM when hosting the next service request || Plugin || Low || L
|-
| 8 || Docker-container-hosted services || Ability to host a ServiceVM in a Docker container || Operator || Low || M
|-
| 9 || Physical-appliance-hosted services || Ability to host a service instance on a physical appliance || Operator || Low || M
|}
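The standby-pool idea in row 4 can be sketched as follows: pre-booted ServiceVMs are handed out immediately, and the pool is refilled afterwards. The WarmPool name, the boot_vm callable, and the pool size are illustrative assumptions, not part of Tacker.

```python
import collections

class WarmPool:
    """Illustrative pool of pre-booted VMs that hides boot latency."""

    def __init__(self, boot_vm, size=2):
        self.boot_vm = boot_vm
        self.size = size
        # Pre-boot `size` VMs up front so later requests are served instantly.
        self.ready = collections.deque(boot_vm() for _ in range(size))

    def acquire(self):
        # Hand out a pre-booted VM if one is available (no boot latency),
        # falling back to a cold boot only when the pool is drained.
        vm = self.ready.popleft() if self.ready else self.boot_vm()
        self.refill()
        return vm

    def refill(self):
        # Top the pool back up; in practice this would run asynchronously.
        while len(self.ready) < self.size:
            self.ready.append(self.boot_vm())

counter = iter(range(100))
pool = WarmPool(lambda: f"vm-{next(counter)}", size=2)
print(pool.acquire())  # -> vm-0 (already booted, returned immediately)
```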
Points of contact
- Launchpad project page: https://launchpad.net/tacker
- IRC meeting information: https://wiki.openstack.org/wiki/Meetings/ServiceVM
- IRC channel on Freenode:
{| class="wikitable"
| Design & Documentation || ServiceVM/Design
|-
| Dependencies & Wish List || ServiceVM/Dependencies
|-
| Spec/Patch Tracking || Spec & Patch Tracking
|-
| Juno Design Summit || ServiceVM/JunoSummit
|}