Magnum/Networking

Magnum Networking Overview

Networking within Magnum is separated into two parts: node/master networking and container networking. Magnum nodes/masters are implemented as Nova virtual machines; bare-metal machines will be supported at a future date with this blueprint. Magnum leverages Heat templates to orchestrate the resources required to instantiate a bay (i.e. a clustered container environment). For example, Neutron ports are created, node/master Nova instances are spawned and attached to those ports, Kubernetes services and their associated configuration files are managed, and so on. Masters run control-plane services such as kube-api and kube-scheduler, while nodes run worker services such as the kubelet. The exact services running on a master/node depend on the coe (Container Orchestration Engine) baymodel attribute.
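
Because a bay is realized as a Heat stack, the resources Heat orchestrates for it can be inspected directly once a bay exists. A minimal sketch, assuming the heat CLI of that era and substituting the bay's actual stack name:

$ heat stack-list                  # each bay corresponds to one Heat stack
$ heat resource-list <stack-name>  # Neutron ports, Nova servers, wait conditions, etc.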

Magnum nodes communicate over a Neutron network. This allows, for example, the kubelet service running on a node to communicate with the kube-api service running on a master. In addition to node networking, container networking is used to interconnect containers running on Magnum nodes. The details of providing this connectivity depend on the container networking implementation used by the COE. Container networking follows the Magnum Container Networking Model spec. In general, each COE has a default container networking driver that allows Magnum users to instantiate bays without specifying any container networking information.
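
The node network and the ports that attach the masters/nodes to it are ordinary Neutron resources, so they can be listed with standard tooling. A sketch, assuming the legacy neutron client; the network and port names are whatever the bay's Heat stack assigned:

$ neutron net-list   # the bay's node network appears alongside the external network
$ neutron port-list  # one port per master/node instance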

The default container networking implementation of a COE follows the upstream project; for example, Flannel is the default container networking driver for Kubernetes. The network-driver attribute can be passed to a baymodel to select a container networking driver other than the default. Note: Not every network driver supports every COE; use the Network Driver Matrix to learn more about container networking driver support. In addition to specifying a container networking driver, labels can be passed to a baymodel to modify the driver's default settings, as sketched below. Use the Label Matrix to better understand the labels that each driver supports.
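
As a sketch of the label mechanism, the following passes Flannel-specific labels to a Kubernetes baymodel. The label names shown (flannel_network_cidr, flannel_backend) and their values are illustrative of the Flannel driver's settings, and the remaining required baymodel attributes (image, keypair, external network, etc.) are omitted for brevity:

$ magnum baymodel-create --name k8sbaymodel \
                         --coe kubernetes \
                         --network-driver flannel \
                         --labels flannel_network_cidr=10.100.0.0/16,flannel_backend=vxlan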

The Magnum wiki and Quick-Start Guide provide additional background on Magnum concepts and on setting up a Magnum development environment using DevStack.

Magnum Networking Details

As previously mentioned, masters/nodes host the services required to run containers, such as docker, kube-api, and kubelet. Nodes are connected to a Neutron network, which is connected to a Neutron router. The Neutron router provides connectivity between the node network and a pre-existing Neutron external network. The external-network-id attribute is used to specify the Neutron external network during a Magnum baymodel-create:

$ magnum baymodel-create --name k8sbaymodel \
                         --image-id fedora-21-atomic-5 \
                         --keypair-id testkey \
                         --external-network-id public \
                         --dns-nameserver 8.8.8.8 \
                         --flavor-id m1.small \
                         --docker-volume-size 5 \
                         --network-driver flannel \
                         --coe kubernetes
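
The external network passed via --external-network-id must already exist; it is typically created by the cloud operator. One way to confirm its name before creating the baymodel, assuming the legacy neutron client:

$ neutron net-external-list   # 'public' should appear here for the example above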

Floating IPs are automatically assigned from the external network to the masters and nodes during a bay-create and are used for accessing the nodes externally (e.g. SSH management). When a bay is created, baymodel attributes such as the external-network-id supply required and optional settings:

$ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1
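
Once the bay is active, the floating IPs assigned to the masters and nodes can be found through Nova, and the nodes can then be reached over SSH using the keypair named in the baymodel. A sketch; the login user depends on the image (fedora is assumed here for the Fedora Atomic image above):

$ nova list                        # each master/node is listed with its floating IP
$ ssh fedora@<node-floating-ip>    # assumes the 'testkey' private key is available locally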

Magnum Networking Workflow