TelcoWorkingGroup/UseCases
Revision as of 18:07, 25 November 2014
Overview

| Workload Type | Description | Characteristics | Examples | Requirements |
|---|---|---|---|---|
| Data plane | Tasks related to packet handling in an end-to-end communication between edge applications. | | | |
| Control plane | Any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. | | | |
| Signal processing | All network function tasks related to digital processing. | | | |
| Storage | All tasks related to disk storage. | | | |
ETSI-NFV Use Cases - High Level Description
ETSI NFV gap analysis document: https://wiki.openstack.org/wiki/File:NFV%2814%29000154r2_NFV_LS_to_OpenStack.pdf
Use Case #1: Network Functions Virtualisation Infrastructure as a Service
This is a reasonably generic IaaS requirement.
Use Case #2: Virtual Network Function as a Service (VNFaaS)
This primarily targets Customer Premise Equipment (CPE) devices such as access routers, enterprise firewalls and WAN optimizers, with some Provider Edge devices possible at a later date. The ETSI-NFV Performance & Portability considerations will apply to deployments that strive to meet high-performance and low-latency targets.
Use Case #3: Virtual Network Platform as a Service (VNPaaS)
This is similar to Use Case #2 but at the service level: a larger scale of deployment, and not only at the individual "app" level.
Use Case #4: VNF Forwarding Graphs
Dynamic connectivity between apps in a "service chain".
Use Case #5: Virtualisation of Mobile Core Network and IMS
Primarily focusing on Evolved Packet Core appliances such as the Mobility Management Entity (MME), Serving Gateway (S-GW), etc. and the IP Multimedia Subsystem (IMS).
Use Case #6: Virtualisation of Mobile base station
Focusing on parts of the Radio Access Network such as eNodeBs, Radio Link Control and the Packet Data Convergence Protocol.
Use Case #7: Virtualisation of the Home Environment
Similar to Use Case 2, but with a focus on virtualising residential devices instead of enterprise devices. Covers DHCP, NAT, PPPoE, Firewall devices, etc.
Use Case #8: Virtualisation of CDNs
Content Delivery Networks focusing on video traffic delivery.
Use Case #9: Fixed Access Network Functions Virtualisation
Wireline related access technologies.
Contributed Use Cases
Session Border Controller
Contributed by: Calum Loudon
Description
The Perimeta Session Border Controller from Metaswitch Networks. It sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network, or over the trunk network between the core and another SP.
Characteristics
- Fast and guaranteed performance:
- Performance in the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
- Guarantees provided via SLAs.
- Full high availability
- No single point of failure, service continuity over both software and hardware failures.
- Elastically scalable
- NFV orchestrator adds and removes instances in response to network demands.
- Traffic segregation (ideally)
- Separate traffic from different customers via VLANs.
Requirements
- Fast & guaranteed performance (network)
- Packets per second target -> either SR-IOV or an accelerated DPDK-like data plane:
- "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov) -completed 2014.2
- "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use) - patched 2014.2
- "userspace vhost in ovs vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost) - BP says Abandoned?
- "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver) - reasons for the hold-up unclear (-2 status)?
- "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser) -In code review, awaiting approval, kilo-1?
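To make the SR-IOV route concrete, a deployment would typically whitelist the NIC's virtual functions on the compute node and attach a direct-mode port to the VNF instance. This is an illustrative sketch only: the network name, flavor, image and PCI address below are placeholders, not values from this page.

```
# nova.conf on the compute node (illustrative PCI address / physnet):
#   pci_passthrough_whitelist = { "address": "0000:05:00.*", "physical_network": "physnet1" }

# Create a port backed by an SR-IOV virtual function
neutron port-create sriov-net --binding:vnic_type direct

# Boot the VNF against that port (substitute the UUID returned above)
nova boot --flavor vnf.small --image sbc-image --nic port-id=<PORT_UUID> sbc-1
```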
- Fast & guaranteed performance (compute):
- To optimize data rate we need to keep all working data in L3 cache:
- "Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning) - Needs code review, kilo-1?
- To optimize data rate need to bind to NIC on host CPU's bus:
- "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling) - targeted kilo-1
- To offer guaranteed performance as opposed to 'best efforts' we need:
- To control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); maps to the remaining blueprints on NUMA & vCPU topology:
- "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology) - implemented 2014.2
- "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement) - implemented 2014.2
- "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages) - proposed kilo, target for kilo-1
- May need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.
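The pinning, NUMA and large-page blueprints above surface to operators as flavor extra specs. A hypothetical flavor tying them together might look like the following (the flavor name and sizing are illustrative; the `hw:` keys are those introduced by the blueprints listed above):

```
# Illustrative flavor for a performance-sensitive VNF
nova flavor-create vnf.pinned auto 8192 20 4
nova flavor-key vnf.pinned set hw:cpu_policy=dedicated   # pin guest vCPUs to host pCPUs
nova flavor-key vnf.pinned set hw:numa_nodes=1           # confine the guest to one NUMA node
nova flavor-key vnf.pinned set hw:mem_page_size=large    # back guest RAM with huge pages
```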
- High availability:
- Requires anti-affinity rules to prevent active/passive being instantiated on same host - already supported, so no gap.
- Elastic scaling:
- Readily achievable using existing features - no gap.
- VLAN trunking:
- "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al.) - needs resubmission by Ian Wells for approval
- "GTP tunnel support for mobile network for NFV VNFs like SGW, PGW, MME" (https://blueprints.launchpad.net/neutron/+spec/provider-network.type-gtp) - needs update for resubmission
- Other:
- Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over single network is also useful in some cases.
- "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net) - implemented 2014.2
Virtual IMS Core
Contributed by: Calum Loudon
Description
Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.
Characteristics relevant to NFV/OpenStack
- Mainly a compute application: modest demands on storage and networking.
- Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
- Elastically scalable by adding/removing instances under the control of the NFV orchestrator.
Requirements
- Compute application:
- OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning or NUMA awareness.
- HA:
- implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure
- potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets, and requesting that OpenStack ensure no single host failure can affect more than one bucket
- (there are other approaches which achieve the same end e.g. defining a group where the scheduler ensures every pair of VMs within that group are not instantiated on the same host)
- for study whether this can be implemented using current scheduler hints
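The bucket idea above can be sketched as follows (a hypothetical orchestrator helper, not an existing OpenStack API): the orchestrator spreads the N+k pool round-robin over X buckets; the placement guarantee it then needs from the scheduler is that no single host failure can affect more than one bucket.

```python
def assign_buckets(vm_names, num_buckets):
    """Round-robin an N+k pool's members into buckets.

    Desired (currently missing) scheduler guarantee: a single host
    failure affects at most one bucket of the pool, so the pool
    degrades by at most one bucket's worth of capacity.
    """
    buckets = [[] for _ in range(num_buckets)]
    for i, name in enumerate(vm_names):
        buckets[i % num_buckets].append(name)
    return buckets

# A 6+2 pool over 4 buckets: losing any single host (and hence at
# most one bucket, i.e. two VMs) still leaves the N=6 needed.
print(assign_buckets(["sprout-%d" % i for i in range(8)], 4))
```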
- Elastic scaling:
- as for compute requirements there is no gap - OpenStack already provides everything needed.
VLAN Trunking
The big picture is that this is about how service providers can use virtualisation to provide differentiated network services to their customers (and specifically enterprise customers rather than end users); it's not about VMs wanting to set up networking between themselves.
A typical service provider may be providing network services to thousands or more of enterprise customers. The details of and configuration required for individual services will differ from customer to customer. For example, consider a Session Border Control service (basically, policing VoIP interconnect): different customers will have different sets of SIP trunks that they can connect to, different traffic shaping requirements, different transcoding rules etc.
Those customers will normally connect in to the service provider in one of two ways: a dedicated physical link, or through a VPN over the public Internet. Once that traffic reaches the edge of the SP's network, then it makes sense for the SP to put all that traffic onto the same core network while keeping some form of separation to allow the network services to identify the source of the traffic and treat it independently. There are various overlay techniques that can be used (e.g. VXLAN, GRE tunnelling) but one common and simple one is VLANs. Carrying VLAN trunking into the VM allows this scheme to continue to be used in a virtual world.
In this set-up, any VMs implementing those services have to be able to differentiate between customers. About the only way of doing that today in OpenStack is to configure one provider network per customer and then have one vNIC per provider network, but that approach clearly doesn't scale (in both performance and configuration effort) if a VM has to see traffic from hundreds or thousands of customers. Carrying VLAN trunking into the VM instead allows them to do this scalably.
The net is that a VM providing a service that needs to have access to a customer's non-NATed source addresses needs an overlay technology to allow this, and VLAN trunking into the VM is sufficiently scalable for this use case and leverages a common approach.
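To make the per-customer separation concrete, the sketch below shows how a VM receiving trunked traffic on a single vNIC recovers the customer from the 802.1Q VLAN tag. The frame layout is the standard one; the VLAN-to-customer table is an illustrative stand-in for the SP's provisioning data.

```python
import struct

# Illustrative mapping; in practice this would come from the
# service provider's provisioning system.
VLAN_TO_CUSTOMER = {100: "enterprise-a", 200: "enterprise-b"}

def customer_for_frame(frame):
    """Return the customer owning a trunked Ethernet frame, or None.

    802.1Q layout: 6B dst MAC, 6B src MAC, 2B TPID (0x8100),
    2B TCI whose low 12 bits are the VLAN ID.
    """
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != 0x8100:          # untagged frame
        return None
    return VLAN_TO_CUSTOMER.get(tci & 0x0FFF)

# Build a minimal tagged frame for VLAN 100 and classify it.
frame = b"\xff" * 12 + struct.pack("!HH", 0x8100, 100)
print(customer_for_frame(frame))  # -> enterprise-a
```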
From: http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
References:
- Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT
- ETSI-NFV Use Cases V1.1.1 [1]