TelcoWorkingGroup/UseCases

Contributing Use Cases

The Telecommunications Working Group welcomes use cases from Communication Service Providers (CSPs), Network Equipment Providers (NEPs) and other organizations in the telecommunications industry. To begin adding a use case, simply copy the "Template" section of this page to the bottom of the list and give it a name that describes your use case.

When writing use cases, focus on "what" you want to do and "why" rather than specific OpenStack requirements or solutions. Our aim as a working group is to assist in distilling those requirements or solutions from the use cases presented, to ensure that we are building functionality that benefits all relevant telecommunications use cases. Submissions of use cases that pertain to different implementations of the same network function (e.g. vEPC) are welcome, as are use cases that speak to the more general demands telecommunications workloads place upon the infrastructure that supports them.

In this initial phase of use case analysis, the intent is to focus on those workloads that run on top of the provided infrastructure before moving on to other areas.

Contributed Use Cases

Template

Description

Describe the use case in terms of what's being done and why.

Characteristics

Describe important characteristics of the use case.

Session Border Controller

Contributed by: Calum Loudon

Description

Perimeta Session Border Controller, from Metaswitch Networks. It sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over either the access network between end users and the core network, or the trunk network between the core and another SP.

Characteristics

  • Fast and guaranteed performance:
    • Performance on the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
    • Guarantees provided via SLAs.
  • Full high availability
    • No single point of failure; service continuity over both software and hardware failures.
  • Elastically scalable
    • NFV orchestrator adds and removes instances in response to network demands.
  • Traffic segregation (ideally)
    • Separate traffic from different customers via VLANs.

Requirements

  • High availability:
    • Requires anti-affinity rules to prevent the active and passive instances being instantiated on the same host - already supported, so no gap (see the sketch after this list).
  • Elastic scaling:
    • Readily achievable using existing features - no gap.
  • Other:
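
A minimal sketch of the anti-affinity point above, assuming python-novaclient; the group name, credentials and image/flavor IDs are illustrative placeholders, not part of the submitted use case:

    # Create an anti-affinity server group and boot the active/passive
    # pair into it, so the scheduler places them on different hosts.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'password', 'demo',
                              'http://controller:5000/v2.0')

    group = nova.server_groups.create(name='perimeta-ha',
                                      policies=['anti-affinity'])

    for name in ('sbc-active', 'sbc-passive'):
        nova.servers.create(name=name,
                            image='<image-id>',    # placeholder IDs
                            flavor='<flavor-id>',
                            scheduler_hints={'group': group.id})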

Virtual IMS Core

Contributed by: Calum Loudon

Description

Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.

Characteristics relevant to NFV/OpenStack

  • Mainly a compute application: modest demands on storage and networking.
  • Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
  • Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

Requirements

  • Compute application:
    • OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning or NUMA awareness.
  • HA:
    • Implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure.
    • Potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets and requesting that OpenStack ensure no single host failure can affect more than one bucket.
    • (There are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures that no two VMs within that group are instantiated on the same host; see the sketch after this list.)
    • For study: whether this can be implemented using current scheduler hints.
  • Elastic scaling:
    • As for the compute requirements, there is no gap - OpenStack already provides everything needed.
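
A minimal sketch of the pair-wise workaround described above, assuming python-novaclient: placing every member of an N+k pool into a single anti-affinity server group asks the scheduler to keep each member on a distinct host, which meets (and is stricter than) the "no single host failure affects more than one bucket" goal, at the cost of requiring at least N+k hosts. The pool name, sizes and IDs are illustrative placeholders.

    # One anti-affinity group covering the whole N+k pool (here N=4, k=1).
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'password', 'demo',
                              'http://controller:5000/v2.0')

    pool = nova.server_groups.create(name='sprout-pool',
                                     policies=['anti-affinity'])

    for i in range(5):  # N + k = 5 instances, each on its own host
        nova.servers.create(name='sprout-%d' % i,
                            image='<image-id>',    # placeholder IDs
                            flavor='<flavor-id>',
                            scheduler_hints={'group': pool.id})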

Access to physical network resources

Contributed by: Jannis Rake-Revelant

Description

This use case aims to solve the problem of accessing physical (network) devices outside of the OpenStack infrastructure that are not addressable by a public IP address. The use case can currently be implemented in various ways, as detailed later on. The background is the need for a VNF (e.g. a vEPC) to communicate with physical devices (in our case, e.g. an eNodeB). Communication/addressability should be possible from either side. In the current environment, different physical devices are separated by VLANs and private IP subnets. The goal is to establish L3 (or L2, if that is "easier") connectivity.


The main goal of this use case is not necessarily to implement something new, but to discuss the practicability of the current implementations. If I have missed an alternative implementation, please add it to the list.

Characteristics

Possible current implementations include:

  • L3 gateways
    • SNAT
    • L3 forwarding
    • Floating IPs
  • External provider networks, e.g. VLAN-backed (see the sketch after this list)
  • L2 gateways, currently only possible with 3rd party software (?)
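
A minimal sketch of the VLAN-backed external provider network option above, assuming python-neutronclient; the physical network label, VLAN ID and CIDR are illustrative placeholders that would need to match the existing ML2 configuration and the subnet the physical devices (e.g. the eNodeB) live on:

    # Map a Neutron network onto an existing VLAN so that VNF ports and
    # the physical devices share the same L2 segment / IP subnet.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='password',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')

    net = neutron.create_network({'network': {
        'name': 'enodeb-net',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',  # as configured in ML2
        'provider:segmentation_id': 101,          # the existing VLAN ID
        'shared': True,
    }})

    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '10.20.30.0/24',   # the physical devices' private subnet
        'enable_dhcp': False,      # addressing is managed outside OpenStack
    }})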

References: