Revision as of 18:07, 5 December 2014
Contributing Use Cases
The Telecommunications Working Group welcomes use cases from Communication Service Providers (CSPs), Network Equipment Providers (NEPs) and other organizations in the telecommunications industry. To begin adding a use case, simply copy the "Template" section of this page to the bottom of the list and rename it to a name that describes your use case.
When writing use cases, focus on "what" you want to do and "why" rather than on specific OpenStack requirements or solutions. Our aim as a working group is to assist in distilling those requirements or solutions from the use cases presented, to ensure that we are building functionality that benefits all relevant telecommunications use cases. Submissions of use cases that pertain to different implementations of the same network function (e.g. vEPC) are welcome, as are use cases that speak to the more general demands telecommunications workloads place upon the infrastructure that supports them.
In this initial phase of use case analysis the intent is to focus on those workloads that run on top of the provided infrastructure before moving focus to other areas.
Contributed Use Cases
Template
Description
Describe the use case in terms of what's being done and why.
Characteristics
Describe important characteristics of the use case.
VPN Instantiation
Contributed by Margaret Chiosi
Description
VPN services are critical for the enterprise market to which the Telcos provide services. As we look to virtualize our PEs, VPN instantiation on a vPE needs to be addressed, since connectivity is important. The proposal is to focus on the ODL/Neutron linkage to OpenStack orchestration: instantiate a VPN service on a vPE connecting to either another vPE or a PE. This includes identifying where the vPE needs to be located (some set of criteria needs to be defined: latency, diversity, ...) and then creating it in a virtualized environment. Connectivity to the other vPEs/PEs needs to be set up. Finally, the VPN service over the different vPEs/PEs that match the customer sites needs to be instantiated.
Characteristics
- Affinity rules
- ODL SDN Controller for connectivity setup
- Physical connectivity between the different vPE/PE environments is assumed to exist
- Logical connectivity between different vPE/PEs needs to be set up as the vPE is instantiated
- VPN service connectivity needs to be set up
- Need to define the flow logic between the OpenStack components and ODL
Requirements
- Affinity rules
- ODL SDN Controller for connectivity setup
- Physical connectivity between the different vPE/PE environments is assumed to exist
- Logical connectivity between different vPE/PEs needs to be set up as the vPE is instantiated
- VPN service connectivity needs to be set up
- No need to set up connectivity to the customer router (CE) for this use case
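The affinity-rule requirement above maps onto Nova's server-group API (POST /os-server-groups), which already expresses pairwise affinity/anti-affinity policies. A minimal sketch of the request body an orchestrator might build; the group name "vpe-group" is illustrative, not from the use case:

```python
# Sketch of the Nova server-group request body used to express
# anti-affinity between vPE instances. The name "vpe-group" is an
# illustrative placeholder.

def anti_affinity_group(name):
    """Build the POST /os-server-groups body for an anti-affinity group."""
    return {"server_group": {"name": name, "policies": ["anti-affinity"]}}

body = anti_affinity_group("vpe-group")
print(body["server_group"]["policies"])  # ['anti-affinity']
```

The returned group's ID would then be passed as the `group` scheduler hint when booting each vPE instance.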
Session Border Controller
Contributed by: Calum Loudon
Description
The Perimeta Session Border Controller from Metaswitch Networks sits at the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over either the access network between end-users and the core network or the trunk network between the core and another SP.
Characteristics
- Fast and guaranteed performance:
- Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core, achievable on COTS hardware.
- Guarantees provided via SLAs.
- Full high availability
- No single point of failure, service continuity over both software and hardware failures.
- Elastically scalable
- NFV orchestrator adds and removes instances in response to network demands.
- Traffic segregation (ideally)
- Separate traffic from different customers via VLANs.
Requirements
- Fast & guaranteed performance (network)
- Packets per second target -> either SR-IOV or an accelerated DPDK-like data plane:
- "SR-IOV Networking Support" (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov) -completed 2014.2
- "Open vSwitch to use patch ports" (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use) - patched 2014.2
- "userspace vhost in ovd vif bindings" (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost) - BP says Abandoned - superseded by "Support vhost user in libvirt vif driver"
- "Support vhost user in libvirt vif driver" (https://blueprints.launchpad.net/nova/+spec/libvirt-vif-vhost-user) - Not started - spec posted for review
- "Snabb NFV driver" (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver) - Not clear reasons of hold up (-2status )?
- "VIF_VHOSTUSER" (https://blueprints.launchpad.net/nova/+spec/vif-vhostuser) -In code review, awaiting approval, kilo-1?
- Fast & guaranteed performance (compute):
- To optimize data rate we need to keep all working data in L3 cache:
- "Virt driver pinning guest vCPUs to host pCPUs" (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning) - Needs code review, kilo-1?
- To optimize data rate need to bind to NIC on host CPU's bus:
- "I/O (PCIe) Based NUMA Scheduling" (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling) - targeted kilo-1
- To offer guaranteed performance as opposed to 'best efforts' we need:
- To control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); maps to the remaining blueprints on NUMA & vCPU topology:
- "Virt driver guest vCPU topology configuration" (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology) - implemented 2014.2
- "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement) - implemented 2014.2
- "Virt driver large page allocation for guest RAM" (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages) - proposed kilo, target for kilo-1
- May need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.
- High availability:
- Requires anti-affinity rules to prevent active/passive being instantiated on same host - already supported, so no gap.
- Elastic scaling:
- Readily achievable using existing features - no gap.
- VLAN trunking:
- "VLAN trunking networks for NFV" (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al). - Needs resubmission by Ian Wells for approval
- "GTP tunnel support for mobile network for NFV VNFs like SGW, PGW, MME (https://blueprints.launchpad.net/neutron/+spec/provider-network.type-gtp -needs update for resubmission)
- Other:
- Being able to offer apparent traffic separation (e.g. service traffic vs. application management) over a single network is also useful in some cases.
- "Support two interfaces from one VM attached to the same network" (https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net) - implemented 2014.2
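The compute-side requirements above (vCPU pinning, NUMA placement, large pages) would be expressed to Nova as flavor extra specs once the listed blueprints land. A hedged sketch of what such a flavor might carry; the spec keys follow the blueprint proposals and could differ in the released code:

```python
# Sketch of the flavor extra specs implied by the pinning/NUMA/large-page
# blueprints listed above. Key names follow the blueprint specs and are
# assumptions, not confirmed released behaviour.

perimeta_extra_specs = {
    "hw:cpu_policy": "dedicated",  # pin guest vCPUs to host pCPUs
    "hw:numa_nodes": "1",          # keep vCPUs and RAM on one NUMA node
    "hw:mem_page_size": "large",   # back guest RAM with huge pages
}

def validate(specs):
    """Check every entry is a hw:-namespaced string key/value pair."""
    return all(key.startswith("hw:") and isinstance(value, str)
               for key, value in specs.items())

assert validate(perimeta_extra_specs)
```

The NIC/NUMA affinity requirement ("I/O (PCIe) Based NUMA Scheduling") would be handled by the scheduler rather than by an extra spec the tenant sets.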
Virtual IMS Core
Contributed by: Calum Loudon
Description
Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.
Characteristics relevant to NFV/OpenStack
- Mainly a compute application: modest demands on storage and networking.
- Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
- Elastically scalable by adding/removing instances under the control of the NFV orchestrator.
Requirements
- Compute application:
- OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, core pinning or NUMA placement
- HA:
- Implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure
- Potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets and to request that OpenStack ensure no single host failure can affect more than one bucket
- (There are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures that no two VMs within that group are instantiated on the same host)
- For study: whether this can be implemented using current scheduler hints
- Elastic scaling:
- As with the compute requirements, there is no gap - OpenStack already provides everything needed.
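The "group anti-affinity" gap described above can be made concrete with a small checker: given an assignment of pool VMs to buckets and of VMs to hosts, verify that no single host failure can affect more than one bucket. This is an illustrative sketch of the desired scheduler property, not an existing OpenStack API:

```python
# Illustrative check for the "group anti-affinity" property described
# above: no host may carry VMs from more than one bucket, so a single
# host failure affects at most one bucket of the N+k pool.

def group_anti_affinity_ok(vm_bucket, vm_host):
    """vm_bucket / vm_host map each VM name to its bucket / host."""
    host_buckets = {}
    for vm, host in vm_host.items():
        host_buckets.setdefault(host, set()).add(vm_bucket[vm])
    # The property holds iff every host serves exactly one bucket.
    return all(len(buckets) == 1 for buckets in host_buckets.values())

buckets = {"vm1": 0, "vm2": 1, "vm3": 2, "vm4": 0}
good = {"vm1": "hostA", "vm2": "hostB", "vm3": "hostC", "vm4": "hostA"}
bad = {"vm1": "hostA", "vm2": "hostA", "vm3": "hostC", "vm4": "hostA"}
assert group_anti_affinity_ok(buckets, good)      # hostA only holds bucket 0
assert not group_anti_affinity_ok(buckets, bad)   # hostA mixes buckets 0 and 1
```

Pairwise anti-affinity between every two VMs in the pool is a stricter (one VM per host) special case of this property.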
Access to physical network resources
Contributed by: Jannis Rake-Revelant
Description
This use case aims to solve the problem of accessing physical (network) devices outside of the OpenStack infrastructure that are not addressable by a public IP address. It can currently be implemented in various ways, as detailed below. The background of this use case is the need to connect physical devices (in our case, e.g. an eNodeB) to a VNF (e.g. a vEPC). Communication/addressability should be possible from either side. In the current environment, different physical devices are separated by VLANs and private IP subnets. The goal is to establish L3 (or L2, if that is "easier") connectivity.
The main goal of this use case is not necessarily to implement something new but to discuss the practicability of the current implementations. If I missed an alternative implementation please add it to the list.
Characteristics
Possible current implementations include:
- L3 gateways
- SNAT
- L3 forwarding
- Floating IPs
- External provider networks, e.g. VLAN backed
- L2 gateways, currently only possible with 3rd party software (?)
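Of the options above, a VLAN-backed external provider network is the one Neutron supports natively without third-party software. A sketch of the request body an operator might POST to /v2.0/networks; the physical network label "physnet1" and VLAN ID 101 are illustrative values, not from the use case:

```python
# Sketch of the Neutron API body for a VLAN-backed provider network that
# reaches physical devices such as an eNodeB. "physnet1" and VLAN 101
# are illustrative placeholders.

def vlan_provider_network(name, physnet, vlan_id):
    """Build the POST /v2.0/networks body for a VLAN provider network."""
    return {"network": {
        "name": name,
        "provider:network_type": "vlan",
        "provider:physical_network": physnet,   # label mapped to a host NIC
        "provider:segmentation_id": vlan_id,    # VLAN tag on the wire
        "shared": False,
    }}

net = vlan_provider_network("enodeb-net", "physnet1", 101)
assert net["network"]["provider:segmentation_id"] == 101
```

A subnet matching the existing private IP range would then be created on this network so VNF ports get addresses the physical devices can reach.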