
Neutron Dynamic Routing Use Cases

This is a short attempt to list the use cases for dynamic routing in Neutron. We'll then look for commonalities between the use cases to see where abstractions can be made and code shared in a common framework.



Multi-homed OpenStack Cloud

A multi-homed OpenStack cloud would run a routing protocol (e.g., BGP) against at least one router in each uplink network provider. By announcing floating IP prefixes to those peers, the Neutron network would be reachable from the rest of the Internet via both paths. If the link to an uplink provider broke, the failure information would propagate to the upstream routers, keeping the cloud reachable through the remaining healthy link. Likewise, in such a case, Neutron's L3 router would remove the routes learned through the failed link from its forwarding table, redirecting all cloud-originated traffic through the healthy link.
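
To make this concrete, here is a minimal sketch of how a BGP speaker colocated with the Neutron L3 agent might announce a floating IP prefix to two uplink providers. It uses the Ryu BGPSpeaker library purely as an illustration; the AS numbers, router ID, and peer addresses are made up, and nothing here is part of an agreed design.

 # Illustrative only: announce floating IP prefixes to two uplink providers.
 from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

 def best_path_change(event):
     # Called when a learned route changes, e.g. when an uplink fails and
     # its routes are withdrawn; the L3 router would update its forwarding
     # table based on events like this.
     print('prefix %s via %s withdrawn=%s' %
           (event.prefix, event.nexthop, event.is_withdraw))

 speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1',
                      best_path_change_handler=best_path_change)

 # Peer with one router in each uplink provider.
 speaker.neighbor_add(address='198.51.100.1', remote_as=64496)  # provider A
 speaker.neighbor_add(address='203.0.113.1', remote_as=64497)   # provider B

 # Announce the floating IP prefix; if one uplink fails, the other path
 # keeps the prefix reachable from the rest of the Internet.
 speaker.prefix_add(prefix='192.0.2.128/25', next_hop='192.0.2.1')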



External Network Spread Over Multiple L2 Networks

This use case will allow a single external network with a large public IP space to be spread across more than one L2 network. This could improve scale and isolation between availability zones (AZs) while maintaining the appearance of one large external network where floating IPs and networks can still float freely.

This use case will require announcing floating IPs and possibly public networks behind Neutron routers to an upstream router. It does not necessarily require learning routes from the upstream router.
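
As a rough sketch of the announcement side (assuming, for illustration, the same Ryu BGPSpeaker approach as above), each L2 segment's router could announce /32 host routes for the floating IPs it currently hosts, so a floating IP can move to another segment while the upstream router always has a route to it. The addresses and AS numbers below are invented.

 # Illustrative only: announce /32 host routes for the floating IPs hosted
 # on this L2 segment so they can later "float" to another segment.
 from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

 speaker = BGPSpeaker(as_number=64512, router_id='203.0.113.2')
 speaker.neighbor_add(address='203.0.113.1', remote_as=64500)  # upstream router

 # Floating IPs currently hosted on this segment (hypothetical list).
 for fip in ('198.51.100.10', '198.51.100.27'):
     speaker.prefix_add(prefix='%s/32' % fip, next_hop='203.0.113.2')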



MPLS/BGP

Note: This section is based on my own understanding of the blueprint. I would like feedback on it to be sure that I understand correctly.

This use case is described in its own blueprint. From the blueprint: "This BP implements partial RFC4364 BGP/MPLS IP Virtual Private Networks (VPNs) support for interconnecting existing network and OpenStack cloud."

This was discussed at the Icehouse summit as part of VPNaaS. It could be argued that this use case is part of VPNaaS rather than dynamic routing.

VPNaaS currently has experimental implementations for IPsec and TLS. The VPN endpoint is inserted into a Neutron router. I assume that this implementation of VPNaaS will follow the same model.


Scenario 1: The Neutron Router is the CE

Following the pattern of the other VPNaaS implementations, I imagine that the Neutron router will play the role of a CE (customer edge router). It will be a BGP peer of some PE (provider edge router), and the route_target / import_target / export_target values are exchanged between CE and PE as described in the last paragraph of section 4.3.1 of the RFC.
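
If that is the model, the CE side might look roughly like the sketch below: the Neutron router simply peers with the PE over the attachment circuit and announces the tenant networks behind it, leaving the PE to map those routes into the customer's VRF using the agreed route targets. This is my own illustration (again using the Ryu BGPSpeaker library); the AS numbers, addresses, and prefixes are made up.

 # Illustrative only: the Neutron router acting as a CE peers with the PE
 # and announces the tenant networks behind it.
 from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

 speaker = BGPSpeaker(as_number=65001, router_id='10.0.0.1')

 # The PE on the attachment circuit (the external network).
 speaker.neighbor_add(address='10.0.0.254', remote_as=65000)

 # Tenant networks behind this Neutron router; the PE imports these into
 # the customer's VRF according to the configured route targets.
 for cidr in ('10.10.0.0/24', '10.20.0.0/24'):
     speaker.prefix_add(prefix=cidr, next_hop='10.0.0.1')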

This type of VPN does not use strong endpoint authentication or encryption to ensure the privacy of the network. Instead, it relies on an established relationship between the provider and the customer and a guarantee that the connection between the two is secured. This has some implications for the feasibility of bringing this into the cloud if the cloud provider happens to be a third party distinct from the provider and the customer.

For example, consider the case where many Neutron routers are connected to the same external network. Now we connect a PE to the network with the goal of establishing an MPLS/BGP link between one of the Neutron routers, acting as the CE, and the PE. In this case, how will the provider securely determine the appropriate VRF for the customer's traffic?

If I were the customer, I would be very nervous about using this shared external network as the "attachment circuit" for my virtual private network. I would insist on a private external network that is accessible only by my tenant. This is an extra constraint on this implementation of VPNaaS that the other implementations do not have. It already requires some configuration outside of OpenStack, which lessens the value of implementing any of this inside OpenStack.


Scenario 2: The VMs are the CEs

This scenario does not follow the pattern of the other VPNaaS implementations. In this scenario, the CEs are the VMs themselves and the PE is connected to the same provider network. The Neutron system establishes a single BGP connection with the provider and publishes routes to the known Neutron ports on the network. I'll have to think through this scenario some more before I can expand on it.
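
Purely as a speculative sketch of what that could look like, Neutron could hold a single BGP session with the PE and announce a host route for each known port on the provider network, with the port's own address as the next hop (since the VM behind the port is the CE). The client setup, network ID, and addressing below are invented for illustration.

 # Speculative sketch only: publish /32 routes for known Neutron ports on
 # the provider network over a single BGP session with the PE.
 from neutronclient.v2_0 import client as neutron_client
 from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

 neutron = neutron_client.Client(username='admin', password='secret',
                                 tenant_name='admin',
                                 auth_url='http://controller:5000/v2.0')

 speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1')
 speaker.neighbor_add(address='192.0.2.254', remote_as=64500)  # the PE

 # 'PROVIDER_NET_ID' is a placeholder for the provider network's UUID.
 ports = neutron.list_ports(network_id='PROVIDER_NET_ID')['ports']
 for port in ports:
     for fixed_ip in port['fixed_ips']:
         ip = fixed_ip['ip_address']
         speaker.prefix_add(prefix='%s/32' % ip, next_hop=ip)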