= Mission statement and scope =

The working group aims to define the use cases and identify and prioritise the requirements which are needed to deploy, manage, and run telecommunication services on top of OpenStack. This work includes identifying functional gaps, creating blueprints, submitting and reviewing patches to the relevant OpenStack projects, and tracking their completion in support of telecommunication services.

<blockquote>The requirements expressed by this group should be written so that each of them has a test case which can be verified using an open source implementation. This ensures that tests can be run without any special hardware or proprietary software, which is key for continuous integration testing in the OpenStack gate. If special setups are required which cannot be reproduced on the standard OpenStack gate, the use case's proponent will have to provide a third-party CI setup, accessible by OpenStack infra, which will be used to validate developments against.</blockquote>
The work group has also established a team to focus on ecosystem development (both vendors and industry co-travelers), collateral development, and marketing messaging to address the needs of Telco operators who are interested in deploying OpenStack today.
  
 
= Membership =
  
 
Members of the Telco Working Group come from a broad array of backgrounds and include service providers, equipment providers, and OpenStack vendors. We aim to include both operators and developers in an open discussion about the needs of this sector and how to meet them in OpenStack. You can find the current membership list at [[TelcoWorkingGroup/Members]]. Feel free to add your name if you're interested in working with us to improve OpenStack for telecommunications workloads.
= Communication =

== IRC ==
Members of the working group hang out in the #openstack-nfv IRC channel on irc.freenode.net. Refer to [[IRC]] for more information on OpenStack IRC channels and how to use them.
== Mailing Lists ==
The working group does not have a dedicated mailing list, instead using the existing openstack-dev and openstack-operators mailing lists:
* http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
* http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
These are high-traffic lists. When sending mail pertaining to the working group, include the [NFV] and [Telco] tags in the subject line; users filtering the lists for working group traffic will do so based on these tags.
Refer to [[Mailing_Lists]] for more information on OpenStack mailing lists and how to use them.
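A minimal client-side sketch of filtering by those subject tags (the function name and tag set here are illustrative, not an official tool):

```python
import re

# Hypothetical sketch: decide whether a mail subject is tagged for the
# working group. Lists like openstack-dev conventionally prefix subjects
# with bracketed topic tags such as [NFV] or [Telco].
WG_TAGS = {"nfv", "telco"}

def is_working_group_mail(subject: str) -> bool:
    """Return True if the subject carries an [NFV] or [Telco] tag."""
    tags = re.findall(r"\[([^\]]+)\]", subject)
    return any(tag.strip().lower() in WG_TAGS for tag in tags)

print(is_working_group_mail("[openstack-dev] [NFV] Meeting agenda"))  # True
print(is_working_group_mail("[openstack-dev] Nova API question"))     # False
```

Most mail clients can express the same check as a plain substring filter on "[NFV]" or "[Telco]".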
  
 
= Meetings =

== Technical Team Meetings ==
  
The working group meets on alternating Wednesdays, alternating between [http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0&p1=0 1400 UTC] in #openstack-meeting-alt and [http://www.timeanddate.com/worldclock/fixedtime.html?hour=22&min=0&sec=0&p1=0 2200 UTC] in #openstack-meeting.

The working group meeting schedule is available at http://eavesdrop.openstack.org/#Telco_Working_Group_meeting
  
 
[[IRC|OpenStack IRC details]]
  
=== Upcoming Meetings ===
  
 
Agenda: [https://etherpad.openstack.org/p/nfv-meeting-agenda]
  
=== Previous Meetings ===

* [http://eavesdrop.openstack.org/meetings/telcowg/ Meeting logs]
* [http://eavesdrop.openstack.org/meetings/nfv/ Meeting logs (archived)]
* [https://etherpad.openstack.org/p/juno-nfv-bof Atlanta (Juno) Summit NFV BoF]
* [https://etherpad.openstack.org/p/kilo-summit-ops-telco Paris (Kilo) Summit Telco Working Group]
* [https://etherpad.openstack.org/p/YVR-ops-telco Vancouver (Liberty) Summit Telco Working Group]
* [https://etherpad.openstack.org/p/TYO-telcowg Tokyo (Mitaka) Summit Telco Working Group]
<!--== Ecosystem and Collateral Team ==

This team is focused on accelerating the deployment of OpenStack by Telco Operators by engaging with the ecosystem (vendors and industry groups) and developing needed information/collateral (case studies, reference architectures, etc.).

=== Upcoming Meetings ===

{| class="wikitable"
|-
! Date !! Time !! Bridge Information !! Link to Etherpad Notes
|-
| Tuesday 9th December 2014 || 8:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/12_9_TWG_Ecosystem_and_Collateral
|-
| Thursday 8th January 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/1_8_TWG_Ecosystem_and_Collateral
|-
| Thursday 15th January 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/1_15_TWG_Ecosystem_and_Collateral_Team
|-
| Thursday 22nd January 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/1_22_TWG_Ecosystem_and_Collateral_Team
|-
| Thursday 29th January 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/1_29_TWG_Ecosystem_and_Collateral_Team
|-
| Thursday 5th February 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/2_5_TWG_Ecosystem_and_Collateral_Team
|-
| Thursday 12th February 2015 || 9:00 Pacific || Access: (888) 875-9370, Bridge: 3; PC: 7053780 || https://etherpad.openstack.org/p/2_12_TWG_Ecosystem_and_Collateral_Team
|-
|}
-->
 
  
 
= What is NFV? =

NFV stands for Network Functions Virtualization. It refers to replacing the stand-alone appliances traditionally used for high- and low-level network functions, such as firewalls, network address translation, intrusion detection, caching, gateways, and accelerators, with a virtual instance or set of virtual instances, called Virtual Network Functions (VNFs). In other words, it can be seen as replacing some hardware network appliances with high-performance software that takes advantage of high-performance para-virtual devices, other acceleration mechanisms, and smart placement of instances. NFV originates from a working group of the European Telecommunications Standards Institute (ETSI) whose work is the basis of most current implementations. The main consumers of NFV are service providers (telecommunication providers and the like) who are looking to accelerate the deployment of new network services; to do that, they need to eliminate the constraint of the slow renewal cycle of hardware appliances, which do not autoscale and limit their innovation.

NFV support for OpenStack aims to provide the best possible infrastructure for such workloads to be deployed in, while respecting the design principles of an IaaS cloud. In order for VNFs to perform correctly in a cloud world, the underlying infrastructure needs to provide a certain number of functionalities, ranging from scheduling to networking and from orchestration to monitoring capabilities. This means that to correctly support NFV use cases in OpenStack, implementations may be required across most, if not all, main OpenStack projects, starting with Neutron and Nova.

For more details on NFV, the following references may be useful:

* [http://en.wikipedia.org/wiki/Network_Functions_Virtualization Definition of NFV on Wikipedia]
  
== Glossary ==

[[TelcoWorkingGroup/Glossary]]

= Use Cases =

{| class="wikitable"
|-
! Workload Type !! Description !! Characteristics !! Examples !! Requirements
|-
 
| Data plane || Tasks related to packet handling in an end-to-end communication between edge applications. ||
 
* Intensive I/O requirements - potentially millions of small VoIP packets per second per core
 
* Intensive memory R/W requirements
 
||
 
* CDN cache node
 
* Router
 
* IPSec tunneller
 
* Session Border Controller - media relay function
 
|| -
 
|-
 
| Control plane || Any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. ||
 
* Less intensive I/O and R/W requirements than data plane, due to lower packets per second
 
* More complicated transactions resulting in (potentially) higher CPU load per packet.
 
||
 
* PPP session management
 
* Border Gateway Protocol (BGP) routing
 
* Remote Authentication Dial In User Service (RADIUS) authentication in a Broadband Remote Access Server (BRAS) network function
 
* Session Border Controller - SIP signaling function
 
* IMS core functions (S-CSCF / I-CSCF / BGCF)
 
|| -
 
|-
 
| Signal processing || All network function tasks related to digital signal processing.
 
||
 
* Very sensitive to CPU processing capacity.
 
* Delay sensitive.
 
||
 
* Fast Fourier Transform (FFT) decoding
 
* Encoding in a Cloud-Radio Access Network (C-RAN) Base Band Unit (BBU)
 
* Audio transcoding in a Session Border Controller
 
|| -
 
|-
 
| Storage || All tasks related to disk storage.
 
||
 
* Varying disk, SAN, or NAS, I/O requirements based on applications, ranging from low to extremely high intensity.
 
||
 
* Logger
 
* Network probe
 
|| -
 
|-
 
|}
 
 
 
== ETSI-NFV Use Cases - High Level Description ==
 
 
 
ETSI NFV gap analysis document: https://wiki.openstack.org/wiki/File:NFV%2814%29000154r2_NFV_LS_to_OpenStack.pdf
 
 
 
===Use Case #1: Network Functions Virtualisation Infrastructure as a Service===
 
 
 
This is a reasonably generic IaaS requirement.
 
 
 
===Use Case #2: Virtual Network Function as a Service (VNFaaS)===
 
This primarily targets Customer Premise Equipment (CPE) devices such as access routers, enterprise firewalls, WAN optimizers, etc., with some Provider Edge devices possible at a later date. ETSI-NFV performance and portability considerations will apply to deployments that strive to meet high-performance and low-latency requirements.
 
 
 
===Use Case #3: Virtual Network Platform as a Service (VNPaaS)===
 
This is similar to #2, but at the service level: larger in scale and not limited to the "app" level only.
 
 
 
===Use Case #4: VNF Forwarding Graphs===
 
Dynamic connectivity between apps in a "service chain".
 
 
 
===Use Case #5: Virtualisation of Mobile Core Network and IMS===
 
Primarily focusing on Evolved Packet Core appliances such as the Mobility Management Entity (MME), Serving Gateway (S-GW), etc. and the IP Multimedia Subsystem (IMS).
 
 
 
===Use Case #6: Virtualisation of Mobile base station===
 
Focusing on parts of the Radio Access Network such as eNodeBs, Radio Link Control, and Packet Data Convergence Protocol.
 
 
 
===Use Case #7: Virtualisation of the Home Environment===
 
Similar to Use Case 2, but with a focus on virtualising residential devices instead of enterprise devices. Covers DHCP, NAT, PPPoE, Firewall devices, etc.
 
 
 
===Use Case #8: Virtualisation of CDNs===
 
Content Delivery Networks focusing on video traffic delivery.
 
 
 
===Use Case #9: Fixed Access Network Functions Virtualisation===
 
Wireline related access technologies.
 
 
 
==Contributed Use Cases==
 
 
 
===Session Border Controller===
 
 
 
Contributed by: Calum Loudon
 
 
 
====Description====
 
 
 
Perimeta Session Border Controller, Metaswitch Networks.  Sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network or the trunk network between the core and another SP.
 
 
 
====Characteristics====
 
 
 
* Fast and guaranteed performance:
 
** Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
 
** Guarantees provided via SLAs.
 
* Full high availability
 
** No single point of failure, service continuity over both software and hardware failures.
 
* Elastically scalable
 
** NFV orchestrator adds and removes instances in response to network demands.
 
* Traffic segregation (ideally)
 
** Separate traffic from different customers via VLANs.
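To put those packet rates in perspective, here is a back-of-envelope bandwidth calculation (the rates and overhead figures below are illustrative assumptions, not Perimeta benchmarks):

```python
# Rough wire bandwidth implied by "millions of small VoIP packets per
# second per core". All numbers here are illustrative assumptions.
ETHERNET_OVERHEAD = 38  # bytes: preamble + frame header/CRC + inter-frame gap

def line_rate_gbps(packets_per_second: float, payload_bytes: int) -> float:
    """Wire bandwidth in Gbit/s for a given packet rate and frame size."""
    bits_per_packet = (payload_bytes + ETHERNET_OVERHEAD) * 8
    return packets_per_second * bits_per_packet / 1e9

# 2 million 64-byte VoIP packets/s on one core:
print(round(line_rate_gbps(2e6, 64), 2))   # ~1.63 Gbit/s
# The same rate with 220-byte packets:
print(round(line_rate_gbps(2e6, 220), 2))  # ~4.13 Gbit/s
```

Even the small-packet case approaches multi-gigabit wire rates per core, which is why kernel-bypass data planes (SR-IOV, DPDK-style vswitches) appear in the requirements below.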
 
 
 
====Requirements====
 
 
 
* Fast & guaranteed performance (network):
 
** Packets per second target -> either SR-IOV or an accelerated DPDK-like data plane:
 
*** "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov) - completed 2014.2

*** "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use) - patched 2014.2

*** "userspace vhost in ovs vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost) - BP says abandoned?

*** "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver) - reasons for the hold-up unclear (-2 status)?

*** "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser) - in code review, awaiting approval, kilo-1?
 
 
 
* Fast & guaranteed performance (compute):
 
** To optimize data rate we need to keep all working data in L3 cache:
 
***"Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning) - Needs code review, kilo-1?
 
** To optimize data rate we need to bind to a NIC on the host CPU's bus:
 
*** "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling) - targeted kilo-1
 
** To offer guaranteed performance, as opposed to "best effort", we need:

** To control placement of cores, minimise TLB misses, and get accurate information about core topology (threads vs. hyperthreads etc.); this maps to the remaining blueprints on NUMA and vCPU topology:
 
*** "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology) - implemented 2014.2
 
*** "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement) - implemented 2014.2
 
*** "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages) - proposed kilo, target for kilo-1
 
** May need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.
 
 
 
* High availability:
 
** Requires anti-affinity rules to prevent active/passive being instantiated on same host - already supported, so no gap.
 
 
 
* Elastic scaling:
 
** Readily achievable using existing features - no gap.
 
 
 
* VLAN trunking:
 
** "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al). - Needs resubmission by Ian Wells for approval
 
 
 
* "GTP tunnel support for mobile network for NFV VNFs like SGW, PGW, MME" (https://blueprints.launchpad.net/neutron/+spec/provider-network.type-gtp) - needs update for resubmission
 
 
 
* Other:
 
** Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over a single network is also useful in some cases.
 
*** "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net) - implemented 2014.2
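The CPU pinning and NUMA placement items above amount to a packing problem: fit all guest vCPUs onto free pCPUs of a single host NUMA cell so guest memory and cache accesses stay local. A toy sketch of that placement logic (invented names; not Nova's implementation):

```python
# Toy model of NUMA-aware vCPU pinning (illustrative only): pin all guest
# vCPUs to free pCPUs of one host NUMA cell, or report that this host
# cannot satisfy the request.

def pin_guest(host_cells: dict, vcpus_needed: int):
    """Return (cell_id, pinning) or None if no single cell can fit the guest.

    host_cells maps a NUMA cell id to the set of its free pCPU ids.
    pinning maps guest vCPU index -> host pCPU id.
    """
    for cell_id, free_pcpus in sorted(host_cells.items()):
        if len(free_pcpus) >= vcpus_needed:
            chosen = sorted(free_pcpus)[:vcpus_needed]
            return cell_id, {vcpu: pcpu for vcpu, pcpu in enumerate(chosen)}
    return None  # fall back to another host, or fail the scheduling request

host = {0: {2, 3, 6, 7}, 1: {10, 11}}
print(pin_guest(host, 4))  # (0, {0: 2, 1: 3, 2: 6, 3: 7})
print(pin_guest(host, 6))  # None: no single cell has 6 free pCPUs
```

The real blueprints additionally account for threads vs. hyperthreads, large pages, and the PCIe locality of NICs, which this sketch ignores.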
 
 
 
===Virtual IMS Core===
 
 
 
Contributed by: Calum Loudon
 
 
 
====Description====
 
 
 
Project Clearwater, http://www.projectclearwater.org/.  An open source implementation of an IMS core designed to run in the cloud and be massively scalable.  It provides SIP-based call control for voice and video as well as SIP-based messaging apps.  As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.
 
 
 
====Characteristics relevant to NFV/OpenStack====
 
 
 
* Mainly a compute application: modest demands on storage and networking.
 
* Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
 
* Elastically scalable by adding/removing instances under the control of the NFV orchestrator.
 
 
 
====Requirements====
 
 
 
* Compute application:
 
** OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, core pinning, or NUMA placement.
 
 
 
* HA:
 
** implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure
 
** potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity" i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets, and requesting OpenStack to ensure no single host failure can affect more than one bucket
 
** (there are other approaches which achieve the same end e.g. defining a group where the scheduler ensures every pair of VMs within that group are not instantiated on the same host)
 
** for study whether this can be implemented using current scheduler hints
 
 
 
* Elastic scaling:
 
** as for compute requirements there is no gap - OpenStack already provides everything needed.
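The "group anti-affinity" concept described above can be sketched as a placement check (hypothetical code, not the Nova scheduler): every VM in an N+k pool is assigned to a bucket, and no host may carry more than one bucket of the same pool, so a single host failure takes out at most one bucket.

```python
from collections import defaultdict

# Toy "group anti-affinity" validity check (illustrative only).

def violates_group_anti_affinity(placements):
    """placements: list of (pool, bucket, host) tuples.

    Returns True if any host carries more than one bucket of a pool,
    i.e. a single host failure could hit two buckets at once.
    """
    buckets_on_host = defaultdict(set)
    for pool, bucket, host in placements:
        buckets_on_host[(pool, host)].add(bucket)
    return any(len(buckets) > 1 for buckets in buckets_on_host.values())

ok = [("sip", 0, "hostA"), ("sip", 0, "hostA"), ("sip", 1, "hostB")]
bad = [("sip", 0, "hostA"), ("sip", 1, "hostA")]
print(violates_group_anti_affinity(ok))   # False: hostA only holds bucket 0
print(violates_group_anti_affinity(bad))  # True: hostA holds buckets 0 and 1
```

Note that two VMs of the *same* bucket may share a host here; the pairwise anti-affinity OpenStack already supports forbids any co-location, which is stricter than an N+k pool actually needs.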
 
 
 
=== VLAN Trunking ===
 
 
 
 
 
The big picture is that this is about how service providers can use virtualisation to provide differentiated network services to their customers (and specifically enterprise customers rather than end users); it's not about VMs wanting to set up networking between themselves.
 
 
 
A typical service provider may be providing network services to thousands of enterprise customers or more.  The details of and configuration required for individual services will differ from customer to customer.  For example, consider a Session Border Control service (basically, policing VoIP interconnect): different customers will have different sets of SIP trunks that they can connect to, different traffic shaping requirements, different transcoding rules, etc.
 
 
 
Those customers will normally connect in to the service provider in one of two ways: a dedicated physical link, or through a VPN over the public Internet.  Once that traffic reaches the edge of the SP's network, it makes sense for the SP to put all that traffic onto the same core network while keeping some form of separation to allow the network services to identify the source of the traffic and treat it independently.  There are various overlay techniques that can be used (e.g. VXLAN, GRE tunnelling), but one common and simple one is VLANs.  Carrying VLAN trunking into the VM allows this scheme to continue to be used in a virtual world.
 
 
 
In this set-up, any VMs implementing those services have to be able to differentiate between customers.  About the only way of doing that today in OpenStack is to configure one provider network per customer and then have one vNIC per provider network, but that approach clearly doesn't scale (in both performance and configuration effort) if a VM has to see traffic from hundreds or thousands of customers.  Carrying VLAN trunking into the VM instead allows them to do this scalably.
 
 
 
The net is that a VM providing a service that needs access to a customer's non-NATed source addresses needs an overlay technology to allow this; VLAN trunking into the VM is sufficiently scalable for this use case and leverages a common approach.
 
 
 
From: http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
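As an illustration of why trunking scales, a VM receiving a trunked vNIC can demultiplex customers by 802.1Q VLAN tag on a single interface instead of needing one vNIC per customer. A toy parser sketch (not a real driver):

```python
import struct

# Toy 802.1Q demultiplexer (illustrative): a single trunked vNIC carries
# frames from many customers; the 12-bit VLAN ID identifies the customer.
TPID_8021Q = 0x8100

def customer_vlan(frame: bytes):
    """Return the VLAN ID of an Ethernet frame, or None if untagged."""
    tpid, = struct.unpack_from("!H", frame, 12)  # EtherType slot after dst+src MACs
    if tpid != TPID_8021Q:
        return None
    tci, = struct.unpack_from("!H", frame, 14)   # Tag Control Information
    return tci & 0x0FFF  # lower 12 bits of the TCI are the VLAN ID

# 12 MAC bytes, then an 802.1Q tag with priority 1 and VLAN ID 42:
frame = b"\xff" * 12 + struct.pack("!HH", TPID_8021Q, 0x2000 | 42) + b"\x08\x00"
print(customer_vlan(frame))          # 42
print(customer_vlan(b"\x00" * 16))   # None (no 802.1Q tag)
```

One lookup table keyed on VLAN ID then replaces thousands of per-customer provider networks and vNICs.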
 
 
 
== References ==
 
* [http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT]
 
* ETSI-NFV Use Cases V1.1.1 [http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf]
 
  
 
== Related Teams and Projects ==

* OpenStack Congress - Policy as a Service
 
= Development Efforts =
  
== Use Case Definition ==

Use cases are currently collected at [[TelcoWorkingGroup/UseCases]]; more are welcome! Definition of target use cases, and identification of gaps based on them, is a primary focus of this working group. The goal is to ensure that blueprints created to close these gaps are furnished with appropriately descriptive information on how they will actually be used in practice. Ultimately, this should help core review teams understand the need for a given feature when reviewing the blueprint and its associated specification.

<!--== Active Bugs ==
  
 
Add the "nfv" tag to bugs to have them appear in these queries:
|-
| Evacuate instance to scheduled host || Nova || Approved / Implemented (juno-2) || https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance || https://review.openstack.org/84429 ||
|-
| Heat Multi-region Support || Heat || Approved / Code Review || https://blueprints.launchpad.net/heat/+spec/multi-region-support || https://blueprints.launchpad.net/openstack/?searchtext=multi-region-support ||
|-
|}
  
 
== Needed Development Not Yet Started ==
-->
[[Category: teams]]
[[Category: Working_Groups]]
