TelcoWorkingGroup/UseCases

Contributing Use Cases

The Telecommunications Working group welcomes use cases from Communication Service Providers (CSPs), Network Equipment Providers (NEPs) and other organizations in the telecommunications industry. To begin adding a use case, simply copy the "Template" section of this page to the bottom of the list and give it a name that describes your use case.

When writing use cases, focus on "what" you want to do and "why" rather than on specific OpenStack requirements or solutions. Our aim as a working group is to assist in distilling those requirements or solutions from the use cases presented, to ensure that we are building functionality that benefits all relevant telecommunications use cases. Submissions of use cases that pertain to different implementations of the same network function (e.g. vEPC) are welcome, as are use cases that speak to the more general demands telecommunications workloads place upon the infrastructure that supports them. In this initial phase of use case analysis, the intent is to focus on workloads that run on top of the provided infrastructure before moving to other areas.

Use cases are now written in reStructuredText format and stored in the telcowg-usecases git repository on Stackforge.

Reviewing Use Cases

The working group uses OpenStack's Gerrit installation to collaborate on use case documentation, with the resultant work ultimately being stored in a git repository. To review items stored in Gerrit you will first need to create an account.

Note that you do not need to sign the CLA simply to review items; you do need to sign it to upload use cases, though. If you have any concerns about this process, consider joining one of the weekly TelcoWorkingGroup meetings to ask for assistance.

Once you have created an account, you can find open items for review by opening the open reviews query for the telcowg-usecases repository in Gerrit.

Updating Use Cases

Contributed Use Cases

Template

Description

Describe the use case in terms of what's being done and why.

Characteristics

Describe important characteristics of the use case.

VPN Instantiation

Contributed by Margaret Chiosi

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-VPN_Instantiation

Description

VPN services are critical to the enterprise market that telcos serve. As we look to virtualize our PEs, VPN instantiation on a vPE needs to be addressed, since connectivity is central to the service. The proposal is to focus on the ODL/Neutron linkage to OpenStack orchestration: instantiate a VPN service on a vPE connecting to either another vPE or a physical PE. This includes identifying where the vPE needs to be located (against criteria still to be defined - latency, diversity, ...), creating it in a virtualized environment, setting up connectivity to the other vPE/PEs, and finally instantiating the VPN service across the vPE/PEs that match the customer sites.
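
The following hypothetical Python sketch names the steps of this workflow. It is an illustration only; the helper functions (select_site, boot_vpe, connect_pair, create_vpn_service) are placeholders, not real OpenStack or ODL APIs.

  from itertools import combinations

  def instantiate_vpn(customer_sites, criteria):
      # 1. Identify where each vPE needs to be located, against criteria
      #    still to be defined (latency, diversity, ...).
      placements = [select_site(site, criteria) for site in customer_sites]

      # 2. Create the vPEs in the virtualized environment, honouring
      #    affinity rules.
      vpes = [boot_vpe(p) for p in placements]

      # 3. Set up logical connectivity between the vPE/PEs, e.g. via the
      #    ODL SDN controller; physical connectivity is assumed to exist.
      for a, b in combinations(vpes, 2):
          connect_pair(a, b)

      # 4. Instantiate the VPN service across the vPE/PEs that match the
      #    customer sites.
      return create_vpn_service(vpes, customer_sites)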

Characteristics

  • Affinity rules
  • ODL SDN controller for connectivity setup
  • Physical connectivity between the different vPE/PE environments is assumed to exist
  • Logical connectivity between the different vPE/PEs needs to be set up as the vPE is instantiated
  • VPN service connectivity needs to be set up
  • The flow logic between the OpenStack components and ODL needs to be added

Requirements

  • Affinity rules
  • ODL SDN controller for connectivity setup
  • Physical connectivity between the different vPE/PE environments is assumed to exist
  • Logical connectivity between the different vPE/PEs needs to be set up as the vPE is instantiated
  • VPN service connectivity needs to be set up
  • Connectivity to the customer router (CE) does not need to be set up for this use case

Session Border Controller

Contributed by: Calum Loudon

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-Session_Border_Controller

Description

Perimeta Session Border Controller, from Metaswitch Networks. It sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over either the access network between end-users and the core network, or the trunk network between the core and another SP.

Characteristics

  • Fast and guaranteed performance:
    • Performance on the order of several million VoIP packets (~64-220 bytes, depending on codec) per second per core, achievable on COTS hardware.
    • Guarantees provided via SLAs.
  • Fully highly available:
    • No single point of failure; service continuity over both software and hardware failures.
  • Elastically scalable:
    • The NFV orchestrator adds and removes instances in response to network demands.
  • Traffic segregation (ideally):
    • Separate traffic from different customers via VLANs.

Requirements

  • High availability:
    • Requires anti-affinity rules to prevent the active and passive instances from being placed on the same host - already supported, so no gap (a minimal sketch follows below).
  • Elastic scaling:
    • Readily achievable using existing features - no gap.
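
As a minimal sketch of that existing mechanism, assuming 2015-era python-novaclient and the ServerGroupAntiAffinityFilter enabled in the Nova scheduler (the credentials, names and IDs below are illustrative placeholders):

  from novaclient import client

  nova = client.Client("2", USERNAME, PASSWORD, PROJECT, AUTH_URL)

  # Anti-affinity server group: its members must land on different hosts.
  group = nova.server_groups.create(name="perimeta-ha",
                                    policies=["anti-affinity"])

  # Boot the active/passive pair into the group via a scheduler hint, so a
  # single host failure cannot take out both instances.
  for role in ("active", "passive"):
      nova.servers.create(name="perimeta-%s" % role,
                          image=IMAGE_ID,
                          flavor=FLAVOR_ID,
                          scheduler_hints={"group": group.id})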

Virtual IMS Core

Contributed by: Calum Loudon

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-Virtual_IMS_Core

Review: https://review.openstack.org/#/c/158997/

Description

Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF functions together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC and SIP clients.

Characteristics relevant to NFV/OpenStack

  • Mainly a compute application: modest demands on storage and networking.
  • Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.
  • Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

Requirements

  • Compute application:
    • OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning or NUMA.
  • HA:
    • Implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure.
    • There is potentially a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, which is sufficient for a 1:1 active/passive architecture, but an N+k pool needs a concept equivalent to "group anti-affinity", i.e. allowing the NFV orchestrator to assign each VM in a pool to one of X buckets and requesting that OpenStack ensure no single host failure can affect more than one bucket (a sketch of this constraint follows below).
    • There are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures every pair of VMs within that group is not instantiated on the same host.
    • Whether this can be implemented using current scheduler hints is for study.
  • Elastic scaling:
    • As for the compute requirements, there is no gap - OpenStack already provides everything needed.
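
"Group anti-affinity" is not an existing OpenStack API; the following hypothetical Python snippet merely illustrates the constraint the NFV orchestrator would want the scheduler to guarantee (all names are illustrative):

  def assign_buckets(vms, x):
      """Round-robin the pool's VMs into x buckets."""
      return {vm: i % x for i, vm in enumerate(vms)}

  def placement_ok(placement, buckets):
      """placement maps VM -> host; valid iff no host carries VMs from more
      than one bucket, so a single host failure affects at most one bucket."""
      host_bucket = {}
      for vm, host in placement.items():
          b = buckets[vm]
          if host_bucket.setdefault(host, b) != b:
              return False  # this host mixes VMs from two buckets
      return True

  # Example: a 4+2 pool spread over 3 buckets, one host per bucket.
  vms = ["sprout-%d" % i for i in range(6)]
  buckets = assign_buckets(vms, 3)
  placement = {vm: "host-%d" % buckets[vm] for vm in vms}
  assert placement_ok(placement, buckets)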

Access to physical network resources

Contributed by: Jannis Rake-Revelant

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-Access_to_physical_network

Description

This use case aims to solve the problem of giving VNFs access to physical (network) devices outside of the OpenStack infrastructure that are not addressable by a public IP address. It can currently be implemented in various ways, as detailed below. The background is the necessity for a VNF, e.g. a vEPC, to communicate with physical devices, in our case e.g. an eNodeB. Communication/addressability should be possible from either side. In the current environment, different physical devices are separated by VLANs and private IP subnets. The goal is to establish L3 (or L2, if that is "easier") connectivity.


The main goal of this use case is not necessarily to implement something new but to discuss the practicability of the current implementations. If I missed an alternative implementation, please add it to the list.

Characteristics

Possible current implementations include:

  • L3 gateways
    • SNAT
    • L3 forwarding
    • Floating IPs
  • External provider networks, e.g. VLAN backed (see the sketch after this list)
  • L2 gateways, currently only possible with 3rd party software (?)
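
As a concrete example of the external provider network option above, here is a minimal sketch assuming 2015-era python-neutronclient and an ML2 physical network label "physnet1"; the VLAN ID, CIDR and credentials are illustrative:

  from neutronclient.v2_0 import client

  neutron = client.Client(username=USERNAME, password=PASSWORD,
                          tenant_name=PROJECT, auth_url=AUTH_URL)

  # Provider network mapped directly onto VLAN 101 of the physical fabric,
  # putting VNF instances and devices such as eNodeBs on one L2 segment.
  net = neutron.create_network({"network": {
      "name": "enodeb-net",
      "provider:network_type": "vlan",
      "provider:physical_network": "physnet1",
      "provider:segmentation_id": 101,
      "shared": True,
  }})["network"]

  # Subnet matching the existing private addressing plan; DHCP disabled
  # because the physical devices are statically addressed.
  neutron.create_subnet({"subnet": {
      "network_id": net["id"],
      "ip_version": 4,
      "cidr": "10.20.30.0/24",
      "enable_dhcp": False,
  }})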


Security Segregation (Placement Zones)

Contributed by: Daniel Schabarum (DaSchab)

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-Security_Segregation

Description

The goal of this use case is to present the need for a (partial) segregation of physical resources to support the well-known classic separation of DMZ and MZ, which is still needed by several applications (VNFs) and required by telco security rules. The main driver is that a vulnerability in a single system must not affect further critical systems or risk exposing sensitive data. On the one side, the benefits of virtualization and automation techniques are mandatory for telcos; on the other side, telecommunication data and the systems involved must be protected to the highest level of security and comply with local regulations (which are often stricter than those for enterprises).

Placement Zones should act as multiple lines of defense against a security breach. This use case affects all of the main OpenStack modules.

Current Situation:

Today the DMZ and MZ concept is an essential part of the security design of nearly every telco application deployment. This separation is achieved by consistent physical segregation encompassing hosts, network and management systems, which leads to high investment and operational costs.

Enable the following:

We would like to use Placement Zones to ensure that only VMs with the same security classification run on the same group of physical hosts. This should avoid VMs from different zones (e.g. DMZ and MZ), with different security requirements, running on the same group of hosts. Therefore a host (or a group of hosts) must be classified and assigned to exactly one placement zone. During the deployment process of a VM it must be possible to assign it to one placement zone (or use the default one), which automatically leads to a grouping of VMs. The security separation within the network can be done on a logical layer using the same physical elements but adding segregation through VLANs, virtual firewalls and other overlay techniques.

Example:

An application presentation layer (e.g. webserver) must be decoupled from the systems containing sensitive data (e.g. database) through at least one security enforcement device (e.g. virtual firewall) and a separation of underlying infrastructure (hypervisor).

[Figure: TelcoWG Placementzones.png - logical view; data center router and switching infrastructure intentionally omitted]

Characteristics

Four placement zones separated through different server pools using the same physical network:

  • Exposed Host Domain (EHD) provides direct access from the public network (internet). Designed for customer facing services which require a high traffic volume (e.g. CDN) and are not security critical.
  • Demilitarized Zone (DMZ) provides access to the public network, but adds an additional security layer (e.g. firewall). Designed for security critical customer facing services (e.g. customer control center).
  • Militarized Zone (MZ) provides a logical network without any access from public network. Designed for systems without direct customer connectivity (e.g. databases containing sensitive data) and high security demand.
  • Secure Network (SEC) for all devices providing a security function, including devices providing connectivity between placement zones (e.g. a virtual firewall for DMZ-MZ traffic).


Requirements

  • One OpenStack installation must be capable of managing different placement zones. All resources (compute, network and storage) are assigned to one placement zone. By default, all resources are assigned to the "default" placement zone of OpenStack
  • SEC is a special placement zone - it provides the glue to connect the placement zones on the network layer using VNFs. SEC VNFs may be attached to resources of other placement zones
  • Placement zone usage requires a permission (in SEC, tenants cannot start VMs; this zone supports only the deployment of XaaS services [FWaaS, LBaaS, ...])
  • If placement zones are required in a cloud, VMs must be assigned to one placement zone
  • All resources which are needed to run a VM must belong to the same placement zone
  • Physical hosts (compute nodes) must be assignable to only one placement zone (re-assignment possible due to changing utilization)
    • Assignment to multiple placement zones must be prevented by the API
    • If a host is reassigned, all existing VMs must be evacuated from it
  • ...and the whole thing must be optional  :-)


Current state in OpenStack

Nova issues:

  • Usage of availability zones (AZs)/host aggregates to assign a VM to a security (placement?) zone [Ref. 1] - a sketch follows below
    • By default a host can be assigned to multiple availability zones
    • It's up to the operator to ensure security
    • Maybe Congress [Ref. 2] (Policy as a Service) could be a solution?
  • In case of a reassignment, running VMs (potentially from a different availability zone) remain on the host
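
A minimal sketch of the host aggregate approach from Ref. 1, assuming python-novaclient and the AggregateInstanceExtraSpecsFilter enabled in the Nova scheduler (zone, host and flavor names and credentials are illustrative):

  from novaclient import client

  nova = client.Client("2", USERNAME, PASSWORD, PROJECT, AUTH_URL)

  # Aggregate grouping the hosts that make up the DMZ placement zone.
  agg = nova.aggregates.create("dmz-zone", "dmz")
  nova.aggregates.add_host(agg, "compute-07")
  nova.aggregates.set_metadata(agg, {"placement_zone": "dmz"})

  # Flavor whose extra spec matches only hosts carrying that metadata, so
  # VMs booted with this flavor can land only inside the DMZ zone.
  flavor = nova.flavors.create("dmz.medium", ram=4096, vcpus=2, disk=40)
  flavor.set_keys({"aggregate_instance_extra_specs:placement_zone": "dmz"})

As the list above notes, nothing in this scheme prevents the operator from adding the same host to a second aggregate, so enforcement remains an operational concern.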


Neutron issues:

  • AZs are not known to the Neutron services
    • It's up to the operator to ensure that the right networks are attached


Cinder/Manila/Storage issues:

  • Storage can be segregated with volume types (a sketch follows below).
  • AZs are not known to the storage services
    • The deployment tool must ensure that the right storage is accessible
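
A minimal sketch of the volume type approach, assuming python-cinderclient and a backend named "dmz-backend" configured in cinder.conf (names and credentials are illustrative):

  from cinderclient import client

  cinder = client.Client("2", USERNAME, PASSWORD, PROJECT, AUTH_URL)

  # Volume type bound to the DMZ storage backend via extra specs, so that
  # volumes of this type are scheduled only onto DMZ-zone storage.
  vtype = cinder.volume_types.create("dmz")
  vtype.set_keys({"volume_backend_name": "dmz-backend"})

  # Volumes for DMZ VMs are then requested with this type.
  cinder.volumes.create(size=40, name="dmz-db-volume", volume_type="dmz")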


OpenStack regions provide a segregation of all resources. They could be used to implement placement zones, BUT:

  • Complex and resource-consuming installation of the OpenStack management systems
  • Tenants must deal with additional regions
  • No L2 network sharing for VMs in the SEC placement zone, which is required to glue the zones together
  • No real enforcement
  • Complex operations

References

  1. http://docs.openstack.org/openstack-ops/content/scaling.html
  2. https://wiki.openstack.org/wiki/Congress

Work In Progress

Service Chaining

Etherpad: https://etherpad.openstack.org/p/kKIqu2ipN6

Orchestration

Etherpad: https://etherpad.openstack.org/p/telco_orchestration

MNO/MVNO Use Case

Etherpad: https://etherpad.openstack.org/p/mno-mvno

SIP Load-Balancing-as-a-Service

Etherpad: https://etherpad.openstack.org/p/telcowg-usecase-SIP_LBaaS