TelcoWorkingGroup/UseCases
Revision as of 22:10, 26 November 2014
Overview
Contributed Use Cases
Template
Description
Characteristics
Requirements
Session Border Controller
Contributed by: Calum Loudon
Description
The Perimeta Session Border Controller from Metaswitch Networks sits on the edge of a service provider's network and polices the SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network, or over the trunk network between the core and another SP.
Characteristics
- Fast and guaranteed performance:
- Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
- Guarantees provided via SLAs.
- Full high availability:
- No single point of failure, service continuity over both software and hardware failures.
- Elastically scalable
- NFV orchestrator adds and removes instances in response to network demands.
- Traffic segregation (ideally)
- Separate traffic from different customers via VLANs.
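To put the packet-rate characteristic above in perspective, a back-of-envelope cycle budget can be sketched; the clock speed and packet rate here are illustrative assumptions, not figures taken from this page:

```python
# Back-of-envelope CPU budget per packet for a software SBC data plane.
# CLOCK_HZ and TARGET_PPS are illustrative assumptions.
CLOCK_HZ = 3.0e9      # assumed 3 GHz COTS core
TARGET_PPS = 2.0e6    # assumed 2 million VoIP packets/second/core

cycles_per_packet = CLOCK_HZ / TARGET_PPS
print(f"{cycles_per_packet:.0f} cycles per packet")  # prints: 1500 cycles per packet
```

A budget this tight is why the requirements below call for kernel-bypass networking and careful CPU/cache placement rather than the default virtio path.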
Requirements
- Fast & guaranteed performance (network)
- Meeting the packets-per-second target requires either SR-IOV or an accelerated, DPDK-like data plane:
- "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov) - completed 2014.2
- "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use) - patched 2014.2
- "Userspace vhost in OVS vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost) - BP says abandoned?
- "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver) - reasons for the hold-up unclear (-2 status)?
- "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser) - in code review, awaiting approval; kilo-1?
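For the SR-IOV path, the request surfaces to the tenant or orchestrator as a Neutron port with a non-default vnic_type. A minimal sketch of the request body per the "SR-IOV Networking Support" blueprint (NET_ID is a placeholder, not a real UUID):

```python
# Sketch of the Neutron port body used to request an SR-IOV virtual function.
# "direct" selects VF passthrough, versus the default "normal" (vswitch) path.
sriov_port = {
    "port": {
        "network_id": "NET_ID",         # placeholder, not a real UUID
        "binding:vnic_type": "direct",  # "direct" = SR-IOV VF passthrough
    }
}
```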
- Fast & guaranteed performance (compute):
- To optimize data rate we need to keep all working data in L3 cache:
- "Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning) - Needs code review, kilo-1?
- To optimize data rate we need to bind to a NIC on the host CPU's bus:
- "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling) - targeted kilo-1
- To offer guaranteed performance as opposed to 'best efforts' we need:
- To control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); maps to the remaining blueprints on NUMA & vCPU topology:
- "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology) - implemented 2014.2
- "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement) - implemented 2014.2
- "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages) - proposed kilo, target for kilo-1
- May need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.
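The pinning, NUMA and hugepage blueprints above are exposed through Nova flavor extra specs. A sketch of the expected keys, following the names in the Juno/Kilo-era specs (treat the values as illustrative assumptions, since some of these blueprints were still in flight at the time of writing):

```python
# Sketch: Nova flavor extra specs mapping to the blueprints listed above.
flavor_extra_specs = {
    "hw:cpu_policy": "dedicated",  # virt-driver-cpu-pinning: dedicate pCPUs to guest vCPUs
    "hw:numa_nodes": "1",          # virt-driver-numa-placement: confine guest to one NUMA node
    "hw:mem_page_size": "large",   # virt-driver-large-pages: back guest RAM with hugepages
}
```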
- High availability:
- Requires anti-affinity rules to prevent active/passive being instantiated on same host - already supported, so no gap.
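The already-supported mechanism here is a Nova server group with the anti-affinity policy; a sketch of the request body (the group name is hypothetical; the shape follows the os-server-groups API):

```python
# Sketch: server group keeping active and passive SBC instances on different hosts.
server_group = {
    "server_group": {
        "name": "sbc-active-passive",   # hypothetical name
        "policies": ["anti-affinity"],  # scheduler places members on different hosts
    }
}
```

Booting each instance with the scheduler hint `{"group": <group-uuid>}` then guarantees the active and passive instances never share a host.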
- Elastic scaling:
- Readily achievable using existing features - no gap.
- VLAN trunking:
- "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al). - Needs resubmission by Ian Wells for approval
- "GTP tunnel support for mobile network for NFV VNFs like SGW, PGW, MME" (https://blueprints.launchpad.net/neutron/+spec/provider-network.type-gtp) - needs update for resubmission
- Other:
- Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over single network is also useful in some cases.
- "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net) - implemented 2014.2
Virtual IMS Core
Contributed by: Calum Loudon
Description
Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.
Characteristics relevant to NFV/OpenStack
- Mainly a compute application: modest demands on storage and networking.
- Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
- Elastically scalable by adding/removing instances under the control of the NFV orchestrator.
Requirements
- Compute application:
- OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, core pinning, or NUMA awareness.
- HA:
- Implemented as a series of N+k compute pools; meeting a given SLA requires limiting the impact of a single host failure.
- There is potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture. An N+k pool, however, needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets, with OpenStack ensuring that no single host failure can affect more than one bucket.
- (There are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures no two VMs within that group are instantiated on the same host.)
- It is for study whether this can be implemented using current scheduler hints.
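To make the "group anti-affinity" gap concrete, here is a small Python sketch (with hypothetical helper names) of the bucket assignment the orchestrator would perform and the placement invariant the scheduler would need to enforce:

```python
from collections import defaultdict

def assign_buckets(vms, num_buckets):
    """Hypothetical orchestrator step: round-robin an N+k pool's VMs into buckets."""
    buckets = defaultdict(list)
    for i, vm in enumerate(vms):
        buckets[i % num_buckets].append(vm)
    return dict(buckets)

def placement_ok(placement, bucket_of):
    """The invariant "group anti-affinity" would enforce: no host carries VMs
    from more than one bucket, so a single host failure can affect at most
    one bucket. placement maps vm -> host; bucket_of maps vm -> bucket."""
    host_buckets = defaultdict(set)
    for vm, host in placement.items():
        host_buckets[host].add(bucket_of[vm])
    return all(len(b) == 1 for b in host_buckets.values())
```

For example, with five VMs spread over three buckets, a placement where each host serves only one bucket passes the check, while a placement putting VMs from two different buckets on the same host fails it.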
- Elastic scaling:
- as for compute requirements there is no gap - OpenStack already provides everything needed.
References:
- Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT (http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf)
- ETSI-NFV Use Cases V1.1.1 (http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf)