  
Telecommunications workloads fall into a number of broad types, summarised in the following table:

{| class="wikitable"
|-
! Workload Type !! Description !! Characteristics !! Examples !! Requirements
|-
| Data plane || Tasks related to packet handling in an end-to-end communication between edge applications. ||
* Intensive I/O requirements - potentially millions of small VoIP packets per second per core
* Intensive memory R/W requirements
||
* CDN cache node
* Router
* IPsec tunneller
* Session Border Controller - media relay function
|| -
|-
| Control plane || Any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. ||
* Less intensive I/O and R/W requirements than the data plane, due to lower packet rates
* More complicated transactions, resulting in (potentially) higher CPU load per packet
||
* PPP session management
* Border Gateway Protocol (BGP) routing
* Remote Authentication Dial In User Service (RADIUS) authentication in a Broadband Remote Access Server (BRAS) network function
* Session Border Controller - SIP signaling function
* IMS core functions (S-CSCF / I-CSCF / BGCF)
|| -
|-
| Signal processing || All network function tasks related to digital signal processing. ||
* Very sensitive to CPU processing capacity
* Delay sensitive
||
* Fast Fourier Transform (FFT) decoding and encoding in a Cloud-Radio Access Network (C-RAN) Base Band Unit (BBU)
* Audio transcoding in a Session Border Controller
|| -
|-
| Storage || All tasks related to disk storage. ||
* Varying disk, SAN or NAS I/O requirements, ranging from low to extremely high intensity depending on the application
||
* Logger
* Network probe
|| -
|}
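To put the data-plane figures above in perspective, here is a back-of-envelope packet-rate calculation as a short Python sketch. The 64-220 byte packet sizes are the VoIP range from the table; the per-frame Ethernet overhead and the line rates are illustrative assumptions.

<pre>
# Back-of-envelope packet rates for small VoIP packets at a given line rate.
# Packet sizes (64-220 bytes) are the VoIP range from the table above; the
# per-frame Ethernet overhead and line rates are illustrative assumptions.

ETH_OVERHEAD = 20  # bytes on the wire per frame: 8B preamble + 12B inter-frame gap

def packets_per_second(line_rate_bps, frame_bytes):
    """Theoretical maximum frames per second at a given line rate."""
    wire_bytes = frame_bytes + ETH_OVERHEAD
    return line_rate_bps / (wire_bytes * 8)

for rate_gbps in (1, 10):
    for size in (64, 220):
        pps = packets_per_second(rate_gbps * 1e9, size)
        print("%2d Gbps, %3dB frames: %6.2f Mpps" % (rate_gbps, size, pps / 1e6))

# 10 Gbps of 64-byte frames is ~14.9 Mpps, which is why data-plane VNFs
# are quoted in millions of packets per second per core.
</pre>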
 
  
== ETSI-NFV Use Cases - High Level Description ==

ETSI NFV gap analysis document: https://wiki.openstack.org/wiki/File:NFV%2814%29000154r2_NFV_LS_to_OpenStack.pdf

===Use Case #1: Network Functions Virtualisation Infrastructure as a Service===

This is a reasonably generic IaaS requirement.

===Use Case #2: Virtual Network Function as a Service (VNFaaS)===

This primarily targets Customer Premise Equipment (CPE) devices such as access routers, enterprise firewalls, WAN optimizers, etc., with some Provider Edge devices possible at a later date. The ETSI-NFV performance and portability considerations will apply to deployments that strive to meet high-performance and low-latency targets.

===Use Case #3: Virtual Network Platform as a Service (VNPaaS)===

This is similar to #2 but at the service level: at larger scale, and not at the "app" level only.

===Use Case #4: VNF Forwarding Graphs===

Dynamic connectivity between applications in a "service chain".

===Use Case #5: Virtualisation of Mobile Core Network and IMS===

Primarily focusing on Evolved Packet Core appliances such as the Mobility Management Entity (MME), Serving Gateway (S-GW), etc., and the IP Multimedia Subsystem (IMS).

===Use Case #6: Virtualisation of the Mobile Base Station===

Focusing on parts of the Radio Access Network such as eNodeBs, Radio Link Control, Packet Data Convergence Protocol, etc.

===Use Case #7: Virtualisation of the Home Environment===

Similar to Use Case #2, but with a focus on virtualising residential devices instead of enterprise devices. Covers DHCP, NAT, PPPoE, firewall devices, etc.

===Use Case #8: Virtualisation of CDNs===

Content Delivery Networks focusing on video traffic delivery.

===Use Case #9: Fixed Access Network Functions Virtualisation===

Wireline-related access technologies.

=Contributing Use Cases=

The Telecommunications Working Group welcomes use cases from Communication Service Providers (CSPs), Network Equipment Providers (NEPs), and other organizations in the telecommunications industry. To add a use case, simply copy the "Template" section of this page to the bottom of the list and rename it to a name that describes your use case.

When writing use cases, focus on "what" you want to do and "why", rather than on specific OpenStack requirements or solutions. Our aim as a working group is to assist in distilling those requirements and solutions from the use cases presented, to ensure that we build functionality that benefits all relevant telecommunications use cases. Use cases that pertain to different implementations of the same network function (e.g. vEPC) are welcome, as are use cases that speak to the more general demands telecommunications workloads place upon the infrastructure that supports them. In this initial phase of use case analysis the intent is to focus on the workloads that run on top of the provided infrastructure, before moving focus to other areas.

Use cases are now written in [http://docutils.sourceforge.net/rst.html reStructuredText] format and stored in the [http://git.openstack.org/cgit/stackforge/telcowg-usecases/ telcowg-usecases] git repository on Stackforge.

=Reviewing Use Cases=

The working group uses [http://review.openstack.org OpenStack's Gerrit installation] to collaborate on use case documentation, with the resulting work ultimately being stored [http://git.openstack.org/cgit/stackforge/telcowg-usecases/ in a git repository]. To review items stored in Gerrit you will first need to [[Gerrit_Workflow#Account_Setup|create an account]].

Note that you do not need to sign the CLA simply to review items; you will, however, need to sign it in order to upload use cases. If you have any concerns about this process, consider joining one of the weekly [[TelcoWorkingGroup]] meetings to ask for assistance.

Once you have created an account you can find open items for review by opening this query in your web browser:

* https://review.openstack.org/#/q/status:open+project:stackforge/telcowg-usecases,n,z

The result will look something like this:

<gallery>
Telcowg-user-cases-screen-1.png|Example telcowg-usecases query result.
</gallery>
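For scripting around reviews, the same query can also be issued against Gerrit's REST API. A minimal sketch in Python using the requests library; the <code>)]}'</code> prefix handling follows Gerrit's documented REST behaviour, and error handling is omitted for brevity.

<pre>
# Fetch open telcowg-usecases changes via Gerrit's REST API; this is the
# programmatic equivalent of the browser query above.
import json
import requests

resp = requests.get("https://review.openstack.org/changes/",
                    params={"q": "status:open project:stackforge/telcowg-usecases"})
resp.raise_for_status()

# Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; strip the
# first line before parsing.
changes = json.loads(resp.text.split("\n", 1)[1])
for change in changes:
    print(change["_number"], change["subject"])
</pre>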
  
=Updating Use Cases=

=Contributed Use Cases=

== Template ==

=== Description ===

''Describe the use case in terms of what's being done and why.''

=== Characteristics ===

''Describe important characteristics of the use case.''
  
==VPN Instantiation==

Contributed by: Margaret Chiosi

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-VPN_Instantiation

===Description===

VPN services are critical for the enterprise market to which telcos provide services. As we look to virtualize our Provider Edge (PE) routers, VPN instantiation on a virtual PE (vPE) needs to be addressed, since connectivity is essential. The proposal is to focus on the OpenDaylight (ODL)/Neutron linkage to OpenStack orchestration.

Instantiate a VPN service on a vPE connecting to either a vPE or a PE. This includes identifying where the vPE needs to be located (some set of criteria needs to be defined: latency, diversity, etc.) and then creating it in a virtualized environment. Connectivity to the other vPEs/PEs needs to be set up. Finally, the VPN service needs to be instantiated over the different vPEs/PEs that match the customer sites.

===Characteristics===

* Affinity rules
* ODL SDN controller for connectivity setup
* Physical connectivity between the different vPE/PE environments is assumed to exist
* Logical connectivity between the different vPEs/PEs needs to be set up as the vPE is instantiated
* VPN service connectivity needs to be set up
* Flow logic needs to be added between the OpenStack components and ODL

===Requirements===

* Affinity rules (see the sketch after this section)
* ODL SDN controller for connectivity setup
* Physical connectivity between the different vPE/PE environments is assumed to exist
* Logical connectivity between the different vPEs/PEs needs to be set up as the vPE is instantiated
* VPN service connectivity needs to be set up
* No need to set up connectivity to the customer router (CE) for this use case
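As an illustration, affinity rules of the kind listed above can already be expressed through Nova server groups. A minimal sketch using python-novaclient; the credentials, image and flavor names are placeholder assumptions rather than part of the use case.

<pre>
# Sketch: place two vPE instances under a Nova server-group affinity policy.
# Credentials, image and flavor names below are placeholders.
from novaclient import client

nova = client.Client("2", "user", "password", "project",
                     "http://keystone.example.com:5000/v2.0")

# A server group carries a scheduling policy: "affinity" co-locates its
# members, "anti-affinity" keeps them on separate hosts.
group = nova.server_groups.create(name="vpe-affinity", policies=["affinity"])

for name in ("vpe-1", "vpe-2"):
    nova.servers.create(name=name,
                        image=nova.images.find(name="vpe-image"),
                        flavor=nova.flavors.find(name="vpe.large"),
                        scheduler_hints={"group": group.id})
</pre>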
  
==Session Border Controller==

Contributed by: Calum Loudon

Review: https://review.openstack.org/#/c/176301/

===Description===

Perimeta Session Border Controller, Metaswitch Networks. Sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over either the access network between end users and the core network, or the trunk network between the core and another service provider.

===Characteristics===

* Fast and guaranteed performance:
** Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
** Guarantees provided via SLAs.
* Fully high availability:
** No single point of failure; service continuity across both software and hardware failures.
* Elastically scalable:
** The NFV orchestrator adds and removes instances in response to network demand.
* Traffic segregation (ideally):
** Separate traffic from different customers via VLANs.

===Requirements===

* Fast & guaranteed performance (network) - the packets-per-second target implies either SR-IOV or an accelerated DPDK-like data plane:
** "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov) - completed in 2014.2
** "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use) - patched in 2014.2
** "Userspace vhost in OVS VIF bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost) - blueprint marked Abandoned?
** "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver) - reasons for the hold-up unclear (-2 status)?
** "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser) - in code review, awaiting approval, kilo-1?
* Fast & guaranteed performance (compute); a flavor sketch illustrating these settings follows this section:
** To optimize data rate we need to keep all working data in the L3 cache:
*** "Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning) - needs code review, kilo-1?
** To optimize data rate we need to bind to a NIC on the host CPU's bus:
*** "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling) - targeted at kilo-1
** To offer guaranteed rather than best-effort performance we need to control the placement of cores, minimise TLB misses and get accurate information about core topology (threads vs. hyperthreads etc.); this maps to the remaining blueprints on NUMA and vCPU topology:
*** "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology) - implemented in 2014.2
*** "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement) - implemented in 2014.2
*** "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages) - proposed for kilo, targeted at kilo-1
** May need support to prevent "noisy neighbours" stealing L3 cache - unproven, and no blueprint we're aware of.
* High availability:
** Requires anti-affinity rules to prevent active and passive instances being instantiated on the same host - already supported, so no gap.
* Elastic scaling:
** Readily achievable using existing features - no gap.
* VLAN trunking:
** "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al.) - needs resubmission by Ian Wells for approval.
* GTP tunnelling:
** "GTP tunnel support for mobile networks, for NFV VNFs like SGW, PGW and MME" (https://blueprints.launchpad.net/neutron/+spec/provider-network.type-gtp) - needs update for resubmission.
* Other:
** Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over a single network is also useful in some cases:
*** "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net) - implemented in 2014.2
 
  
==Virtual IMS Core==

Contributed by: Calum Loudon

Review: https://review.openstack.org/#/c/158997/

===Description===

Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as for SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF functions together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC and SIP clients.

===Characteristics relevant to NFV/OpenStack===

* Mainly a compute application: modest demands on storage and networking.
* Fully HA, with no SPOFs and service continuity across software and hardware failures; must be able to offer SLAs.
* Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

===Requirements===

* Compute application:
** OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, core pinning or NUMA awareness.
* HA:
** Implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure.
** There is potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets and requesting that OpenStack ensure no single host failure can affect more than one bucket.
** (There are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures every pair of VMs within that group is not instantiated on the same host; see the sketch after this section.)
** For study: whether this can be implemented using current scheduler hints.
* Elastic scaling:
** As for the compute requirements there is no gap - OpenStack already provides everything needed.
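The pairwise alternative mentioned in the HA requirement above (every pair of VMs in a group on different hosts) can be expressed today with a single anti-affinity server group. A minimal sketch with python-novaclient; the pool size, image and flavor names are placeholders, and note that this needs at least as many candidate hosts as pool members.

<pre>
# Sketch: spread an N+k pool so that every member lands on a distinct host,
# meaning a single host failure affects at most one pool member. Pool size,
# image and flavor names are placeholders.
from novaclient import client

nova = client.Client("2", "user", "password", "project",
                     "http://keystone.example.com:5000/v2.0")

pool = nova.server_groups.create(name="sprout-pool",
                                 policies=["anti-affinity"])

for i in range(6):  # N + k pool members
    nova.servers.create(name="sprout-%d" % i,
                        image=nova.images.find(name="clearwater-sprout"),
                        flavor=nova.flavors.find(name="m1.medium"),
                        scheduler_hints={"group": pool.id})
</pre>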
 
  
== Access to physical network resources ==

Contributed by: Jannis Rake-Revelant

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-Access_to_physical_network

=== Description ===

This use case aims to solve the problem of accessing physical (network) devices outside of the OpenStack infrastructure that are '''not''' addressable by a public IP address. The use case can currently be implemented in various ways, as detailed below.

The background to this use case is the need for communication between a physical device, in our case e.g. an eNodeB, and a VNF, e.g. a vEPC. Communication and addressability should be possible from either side. In the current environment the different physical devices are separated by VLANs and private IP subnets. The goal is to establish L3 (or L2, if that is "easier") connectivity.

The main goal of this use case is not necessarily to implement something new but to discuss the practicability of the current implementations. If an alternative implementation is missing, please add it to the list.

=== Characteristics ===

Possible current implementations include:

* L3 gateways
** SNAT
** L3 forwarding
** Floating IPs
* External provider networks, e.g. VLAN-backed (see the sketch after this list)
* L2 gateways, currently only possible with 3rd-party software (?)
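Of the options above, the external provider network route can be sketched with python-neutronclient, as shown below; the VLAN ID, physnet label, addressing and credentials are placeholder assumptions.

<pre>
# Sketch: expose an existing VLAN (e.g. the one the eNodeB sits on) to
# Neutron as an external provider network. Credentials, physnet label,
# VLAN ID and addressing are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username="user", password="password",
                        tenant_name="project",
                        auth_url="http://keystone.example.com:5000/v2.0")

net = neutron.create_network({"network": {
    "name": "enodeb-mgmt",
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",  # as mapped in the L2 agent config
    "provider:segmentation_id": 100,          # the existing VLAN ID
    "router:external": True,
}})["network"]

# Match the existing private addressing plan; DHCP is disabled so Neutron
# does not conflict with address management on the physical network.
neutron.create_subnet({"subnet": {
    "network_id": net["id"],
    "ip_version": 4,
    "cidr": "10.0.100.0/24",
    "enable_dhcp": False,
}})
</pre>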
  
=== References ===

* [http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT]
* [http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf ETSI-NFV Use Cases V1.1.1]
  
== VLAN Trunking ==

The big picture is that this is about how service providers can use virtualisation to provide differentiated network services to their customers (specifically enterprise customers rather than end users); it is not about VMs wanting to set up networking between themselves.

A typical service provider may provide network services to thousands or more enterprise customers. The details of, and configuration required for, individual services will differ from customer to customer. For example, consider a Session Border Control service (basically, policing VoIP interconnect): different customers will have different sets of SIP trunks that they can connect to, different traffic shaping requirements, different transcoding rules, and so on.

Those customers will normally connect to the service provider in one of two ways: over a dedicated physical link, or through a VPN over the public Internet. Once that traffic reaches the edge of the SP's network, it makes sense for the SP to put all of that traffic onto the same core network, while keeping some form of separation that allows the network services to identify the source of the traffic and treat it independently. There are various overlay techniques that can be used (e.g. VXLAN, GRE tunnelling), but one common and simple one is VLANs. Carrying VLAN trunking into the VM allows this scheme to continue to be used in a virtual world.

In this set-up, any VMs implementing those services have to be able to differentiate between customers. About the only way of doing that today in OpenStack is to configure one provider network per customer and then attach one vNIC per provider network, but that approach clearly doesn't scale (in either performance or configuration effort) if a VM has to see traffic from hundreds or thousands of customers. Carrying VLAN trunking into the VM instead allows this to be done scalably.

The net is that a VM providing a service that needs access to a customer's non-NATed source addresses needs an overlay technology to allow this, and VLAN trunking into the VM is sufficiently scalable for this use case and leverages a common approach.

From: http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
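Inside the guest, the trunked traffic is then broken out per customer with ordinary 802.1Q sub-interfaces. A minimal sketch driving iproute2 from Python; the interface names and VLAN IDs are illustrative.

<pre>
# Sketch: split a trunked vNIC into per-customer 802.1Q sub-interfaces
# inside the guest. Interface names and VLAN IDs are illustrative.
import subprocess

TRUNK_IF = "eth1"  # the vNIC carrying the VLAN trunk
CUSTOMER_VLANS = {101: "customer-a", 102: "customer-b"}

def sh(*args):
    subprocess.check_call(args)

for vlan_id in sorted(CUSTOMER_VLANS):
    vif = "%s.%d" % (TRUNK_IF, vlan_id)
    # One sub-interface per customer VLAN on the same trunked port.
    sh("ip", "link", "add", "link", TRUNK_IF, "name", vif,
       "type", "vlan", "id", str(vlan_id))
    sh("ip", "link", "set", vif, "up")
</pre>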
+
==Security Segregation (Placement Zones)==

Contributed by: Daniel Schabarum (DaSchab)

Review: https://review.openstack.org/#/c/163399

= Work In Progress =

== Service Chaining ==

Etherpad: https://etherpad.openstack.org/p/kKIqu2ipN6

== Orchestration ==

Etherpad: https://etherpad.openstack.org/p/telco_orchestration

== MNO/MVNO Use Case ==

Etherpad: https://etherpad.openstack.org/p/mno-mvno

== SIP Load-Balancing-as-a-Service ==

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-SIP_LBaaS
