
Difference between revisions of "TelcoWorkingGroup/UseCases"

 
== VLAN Trunking ==

The big picture is that this is about how service providers can use virtualisation to provide differentiated network services to their customers (and specifically enterprise customers rather than end users); it's not about VMs wanting to set up networking between themselves.
 
 
A typical service provider may be providing network services to thousands or more enterprise customers. The details of, and configuration required for, individual services will differ from customer to customer. For example, consider a Session Border Control service (basically, policing VoIP interconnect): different customers will have different sets of SIP trunks that they can connect to, different traffic shaping requirements, different transcoding rules, etc.
 
 
Those customers will normally connect to the service provider in one of two ways: a dedicated physical link, or a VPN over the public Internet. Once that traffic reaches the edge of the SP's network, it makes sense for the SP to put all of that traffic onto the same core network while keeping some form of separation, so that the network services can identify the source of the traffic and treat it independently. There are various overlay techniques that can be used (e.g. VXLAN, GRE tunnelling), but one common and simple one is VLANs. Carrying VLAN trunking into the VM allows this scheme to continue to be used in a virtual world.
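
To make the trunking side concrete, here is a minimal sketch of what the guest does in that scheme, assuming the VM has been given a single trunked vNIC (called eth0 here) and that the per-customer VLAN IDs and addresses are known to it; the interface name, VLAN IDs and addresses are purely illustrative. It creates one 802.1Q subinterface per customer VLAN using iproute2:

 # Sketch: per-customer 802.1Q subinterfaces inside the guest, on top of a
 # single trunked vNIC. Interface name, VLAN IDs and addresses are
 # illustrative; a real VNF would drive this from its own configuration.
 import subprocess
 
 CUSTOMER_VLANS = {
     "customer-a": (101, "10.1.1.2/24"),
     "customer-b": (102, "10.1.2.2/24"),
 }
 
 for customer, (vlan_id, cidr) in CUSTOMER_VLANS.items():
     subif = "eth0.%d" % vlan_id
     # e.g. "ip link add link eth0 name eth0.101 type vlan id 101"
     subprocess.check_call(["ip", "link", "add", "link", "eth0",
                            "name", subif, "type", "vlan", "id", str(vlan_id)])
     subprocess.check_call(["ip", "addr", "add", cidr, "dev", subif])
     subprocess.check_call(["ip", "link", "set", subif, "up"])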
 
 
In this set-up, any VM implementing those services has to be able to differentiate between customers. About the only way of doing that today in OpenStack is to configure one provider network per customer and then give the VM one vNIC per provider network, but that approach clearly doesn't scale (in both performance and configuration effort) if a VM has to see traffic from hundreds or thousands of customers. Carrying VLAN trunking into the VM instead allows it to do this scalably.
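
For illustration, the following sketch shows what the per-customer provider network approach looks like through the APIs, assuming python-neutronclient and python-novaclient with admin credentials; the physical network name 'physnet1', the customers list, and the image/flavor IDs are placeholders. The number of networks, ports and guest vNICs all grow linearly with the number of customers, which is exactly what breaks down at hundreds or thousands of customers:

 # Sketch of the one-provider-network-per-customer approach (doesn't scale).
 # Assumes admin credentials; 'physnet1', VLAN IDs, CIDRs and the image and
 # flavor IDs are placeholders.
 from neutronclient.v2_0 import client as neutron_client
 from novaclient import client as nova_client
 
 neutron = neutron_client.Client(username='admin', password='secret',
                                 tenant_name='admin',
                                 auth_url='http://controller:5000/v2.0')
 nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')
 
 customers = [{'name': 'cust-a', 'vlan': 101, 'cidr': '10.1.1.0/24'},
              {'name': 'cust-b', 'vlan': 102, 'cidr': '10.1.2.0/24'}]
 # ...in practice, hundreds or thousands of entries...
 
 nics = []
 for cust in customers:
     net = neutron.create_network({'network': {
         'name': cust['name'],
         'provider:network_type': 'vlan',
         'provider:physical_network': 'physnet1',
         'provider:segmentation_id': cust['vlan']}})['network']
     neutron.create_subnet({'subnet': {
         'network_id': net['id'], 'ip_version': 4, 'cidr': cust['cidr']}})
     port = neutron.create_port({'port': {'network_id': net['id']}})['port']
     nics.append({'port-id': port['id']})
 
 # One vNIC per customer: the guest sees len(customers) interfaces, and
 # hypervisor limits on vNICs per instance are far below "thousands".
 nova.servers.create('sbc-vm', '<image-id>', '<flavor-id>', nics=nics)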
 
 
The net is that a VM providing a service that needs to have access to a customer's non-NATed source addresses needs an overlay technology to allow this, and VLAN trunking into the VM is sufficiently scalable for this use case and leverages a common approach.
 
 
From: http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html

= References: =

* [http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20&%20Portability%20Best%20Practises.pdf Network Functions Virtualization NFV Performance & Portability Best Practices - DRAFT]
* [http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf ETSI-NFV Use Cases V1.1.1]

Revision as of 22:10, 26 November 2014

Overview

Contributed Use Cases

Template

Description

Characteristics

Requirements

Session Border Controller

Contributed by: Calum Loudon

Description

Perimeta Session Border Controller, Metaswitch Networks. Sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network or the trunk network between the core and another SP.

Characteristics

  • Fast and guaranteed performance:
    • Performance in the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware).
    • Guarantees provided via SLAs.
  • Full high availability
    • No single point of failure, service continuity over both software and hardware failures.
  • Elastically scalable
    • NFV orchestrator adds and removes instances in response to network demands.
  • Traffic segregation (ideally)
    • Separate traffic from different customers via VLANs.

Requirements

  • High availability:
    • Requires anti-affinity rules to prevent the active and passive instances being instantiated on the same host - already supported, so no gap (see the sketch after this list).
  • Elastic scaling:
    • Readily achievable using existing features - no gap.
  • Other:
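
To illustrate the high-availability requirement above, here is a minimal sketch, assuming python-novaclient with placeholder credentials and image/flavor IDs, of how the active/passive pair can be kept apart today using a server group with the anti-affinity policy and the corresponding scheduler hint:

    # Sketch: keep the active and passive SBC instances on different hosts
    # using an anti-affinity server group (already supported, hence "no gap").
    # Credentials and image/flavor IDs are placeholders.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                              'http://controller:5000/v2.0')

    group = nova.server_groups.create(name='sbc-ha-pair',
                                      policies=['anti-affinity'])

    for role in ('active', 'passive'):
        nova.servers.create('sbc-%s' % role, '<image-id>', '<flavor-id>',
                            scheduler_hints={'group': group.id})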

Virtual IMS Core

Contributed by: Calum Loudon

Description

Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC & SIP clients.

Characteristics relevant to NFV/OpenStack

  • Mainly a compute application: modest demands on storage and networking.
  • Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
  • Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

Requirements

  • Compute application:
    • OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning or NUMA.
  • HA:
    • implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure
    • potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets and requesting that OpenStack ensure no single host failure can affect more than one bucket
    • (there are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures every pair of VMs within that group is not instantiated on the same host)
    • it is for further study whether this can be implemented using current scheduler hints (see the sketch after this list)
  • Elastic scaling:
    • as for compute requirements there is no gap - OpenStack already provides everything needed.
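
On the "for further study" point above: the pairwise approach in the bracketed bullet can be expressed with today's scheduler hints by putting every member of the N+k pool into a single anti-affinity server group, so that no two members share a host and a single host failure affects at most one VM. A minimal sketch follows, assuming python-novaclient with placeholder credentials, image/flavor IDs and pool size; note that it over-constrains placement (it needs at least N+k candidate hosts), which is why a true bucket-style "group anti-affinity" is still identified as a gap:

    # Sketch: approximate "group anti-affinity" for an N+k pool by placing
    # every member in one anti-affinity server group, so no two members can
    # share a host and a single host failure affects at most one VM.
    # Needs at least N+k schedulable hosts - the over-constraint a real
    # bucket concept would avoid. Credentials and IDs are placeholders.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                              'http://controller:5000/v2.0')

    POOL_SIZE = 6  # N + k, illustrative
    group = nova.server_groups.create(name='ims-core-pool',
                                      policies=['anti-affinity'])

    for i in range(POOL_SIZE):
        nova.servers.create('ims-node-%d' % i, '<image-id>', '<flavor-id>',
                            scheduler_hints={'group': group.id})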
