
Networking-VPP

Overview

The networking-vpp project is an ML2 mechanism driver that controls the FD.io VPP software switch.
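Enabling it follows the usual ML2 pattern: you add the driver to the mechanism_drivers list in Neutron's ML2 configuration. A minimal sketch of the ml2_conf.ini side is below; the exact option names, driver name and supported network types may vary between releases, so treat this as illustrative rather than definitive:

    [ml2]
    type_drivers = vlan,flat
    tenant_network_types = vlan
    mechanism_drivers = vpp

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200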

What is VPP and why would I want it?

VPP is a very fast software switch particularly suited to highly network-intensive applications.

It is a user-space switch running on top of DPDK, which means that instead of using the kernel drivers to get packets from the hardware it takes direct control of the hardware to speed up the packet path - fewer kernel calls, meaning fewer context switches, meaning faster processing. DPDK also helps optimise the use of memory within the system: it uses hugepages, keeps data NUMA-local and provides accelerated routines for the day-to-day jobs of a switch, such as copying packets.

VPP adds to this by processing packets in batches ('vectors'), which keeps the CPU's caches hot and avoids cache misses, so that the fewest possible cycles are spent on each packet. (Consider: at line rate on 10G with minimum-size packets, you have 14.8M packets to process every second, which leaves you with about 150 clock cycles per packet. Cache misses can take 17 cycles apiece to process.)
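To see where those figures come from, here's a back-of-the-envelope calculation; the 2.2 GHz clock speed is just an assumed example, not a requirement of VPP:

    # Cycle budget per packet at 10GbE line rate with minimum-size frames.
    LINE_RATE_BPS = 10e9             # 10 Gbit/s
    FRAME_BITS = (64 + 20) * 8       # 64-byte frame plus 20 bytes of preamble and inter-frame gap

    pps = LINE_RATE_BPS / FRAME_BITS          # ~14.88 million packets per second

    CPU_HZ = 2.2e9                   # assumed clock speed of one core (illustrative)
    cycles_per_packet = CPU_HZ / pps          # ~148 cycles to do all work on each packet

    print(f"{pps / 1e6:.2f} Mpps, about {cycles_per_packet:.0f} cycles per packet")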

One thing to note: VPP is dedicated to its task of top-speed networking. Like all DPDK-based systems, it polls its interfaces rather than waiting for interrupts, so it will use one core flat out at all times, even when no traffic is flowing. This is normal and expected, so don't be surprised when you see it as the first line in 'top'. For that cost, you get guaranteed low latency for all packets at all times.
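If you want to control which cores VPP claims for this, VPP's startup.conf has a cpu section; the core numbers below are only an example and need to match your own host's CPU layout:

    cpu {
        main-core 1
        corelist-workers 2-3
    }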

What is networking-vpp good for?

networking-vpp is a work in progress. We are constantly adding functionality and aligning with upstream releases of VPP. As of the time of writing (Dec 2016), networking-vpp networks VMs together and interoperates correctly with Neutron's standard L3, DHCP and metadata services; it's written to be redundant, so that single points of failure in the system are not critical and are recoverable; and we've given some thought to common maintenance operations like upgrades, so that you can upgrade the code for the controller and even the code for the forwarder without taking your system down. (Best of luck doing that with a kernel-based forwarder like OVS!) It's not yet completely on a par with the stock networking drivers in Neutron such as OVS or Linuxbridge - security groups are a notable missing feature - but we hope to address most of the outstanding pieces with our first production release against VPP 17.01 in January 2017.

How do I get involved?

We welcome contributions from anyone who's interested in helping - simply push a patch up for review using the standard OpenStack review system; our repository is at https://github.com/openstack/networking-vpp. Pending patches live at https://launchpad.net/networking-vpp - feel free to join in with reviews, and comment on anything you think doesn't work, doesn't look tidy or that you don't understand - our code should be simple enough and well-commented enough that it's easy to read and fix, so no question is a bad question. If you want help, you should be able to find someone on the openstack-dev@openstack.org mailing list, or you can ping us on irc.freenode.net in #openstack-neutron. Our bug list is kept in Launchpad - start at https://launchpad.net/networking-vpp if you find a problem. (We welcome bug reports of any sort; if VPP doesn't work, networking-vpp doesn't work, so file the report there and we can add and follow any required VPP fixes as we go.)

Where can I help the most?

We welcome any contributions. If you have simple fixes or feature additions, just post them. If you want to make a more drastic change and would like a second opinion on the idea before you start coding, file a bug with a title starting 'RFE:' and explain your thinking; we'll give you feedback on the idea, suggestions for how best to implement it, and a view on how it might interact with other changes in the pipeline.