== Magnum Networking Overview ==
 
Networking within Magnum is separated into two parts: node/master networking and container networking. Magnum nodes/masters are implemented as Nova virtual machines; bare-metal machines will be supported at a future date with [https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support this blueprint]. When Magnum instantiates a bay, nodes/masters are created to realize a clustered container deployment. Masters run control plane services such as kube-api, kube-scheduler, etc., while nodes run worker services such as the kubelet. The exact services running on a master/node are specific to the ''coe'' (Container Orchestration Engine) baymodel attribute.
  
Magnum nodes communicate over a Neutron network. This allows, for example, the kubelet service running on a node to communicate with the kube-api service running on a master. In addition to node networking, container networking is used to interconnect containers running on Magnum nodes. The details for providing this connectivity depend on the container networking implementation used by the COE. Container networking follows the Magnum Container Networking Model [https://review.openstack.org/#/c/204686/ spec]. In general, each COE has a default container networking driver that allows Magnum users to instantiate bays without the need to specify any container networking information. The default container networking implementation of a COE follows the upstream project, e.g. [https://github.com/coreos/flannel Flannel] is the default container networking driver for [http://kubernetes.io/ Kubernetes]. The ''network-driver'' attribute can be passed to a baymodel to select a container networking driver other than the default. '''Note:''' Not every network driver supports every COE; use the [https://wiki.openstack.org/wiki/Magnum/NetworkDriverMatrix Network Driver Matrix] to learn more about container networking driver support. In addition to specifying a container networking driver, labels can be passed to a baymodel to modify default settings of the driver. Use the [https://wiki.openstack.org/wiki/Magnum/LabelMatrix Label Matrix] to better understand the labels that each driver supports.
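For example, the ''network-driver'' and ''labels'' attributes can be supplied together during baymodel creation. The command below is only a sketch: the label names come from the Flannel entries in the [https://wiki.openstack.org/wiki/Magnum/LabelMatrix Label Matrix], the values shown are purely illustrative, and the image/keypair/network names mirror the baymodel-create example later on this page.

magnum baymodel-create --name k8sbaymodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id testkey \
                       --external-network-id public \
                       --coe kubernetes \
                       --network-driver flannel \
                       --labels flannel_network_cidr=10.100.0.0/16,flannel_backend=vxlan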
  
 
 
The [https://wiki.openstack.org/wiki/Magnum Magnum wiki] and [http://docs.openstack.org/developer/magnum/dev/dev-quickstart.html Quick-Start Guide] provide additional background on Magnum concepts and how to set up a Magnum development environment using [http://docs.openstack.org/developer/devstack/ DevStack].
 
  
 
== Magnum Networking Details ==
 

As previously mentioned, masters/nodes host the services required to run containers, such as docker, kube-api, kubelet, etc. Nodes are connected to a Neutron network, which is connected to a Neutron router. The Neutron router provides connectivity between the node network and a pre-existing Neutron external network. The ''external-network-id'' attribute is used to specify the Neutron external network during a Magnum baymodel-create:

magnum baymodel-create --name k8sbaymodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id testkey \
                       --external-network-id public \
                       --dns-nameserver 8.8.8.8 \
                       --flavor-id m1.small \
                       --docker-volume-size 5 \
                       --network-driver flannel \
                       --coe kubernetes

Floating IPs are automatically assigned from the external network to nodes during a bay-create and are used for externally accessing the nodes (e.g. SSH management). When a bay is created, baymodel attributes such as the ''external-network-id'' are used to specify the networking configuration of the bay.
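As a sketch of this step (the bay name and node count below are illustrative, and the commands assume the k8sbaymodel created above), a bay is created from the baymodel and the floating IPs assigned to its nodes can then be inspected with the standard Nova/Neutron clients:

magnum bay-create --name k8sbay \
                  --baymodel k8sbaymodel \
                  --node-count 2

# Once the bay reaches CREATE_COMPLETE, the node addresses and their
# floating IPs can be listed with:
nova list
neutron floatingip-list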

== References ==

* networking
* secgroups
* master port and float
* Minion port and float

== Magnum Networking Workflow ==