Latest revision as of 17:56, 10 December 2015
Magnum Networking Overview
Networking within Magnum is separated into two parts: node/master networking and container networking. Magnum nodes/masters are implemented as Nova virtual machines; bare-metal machines will be supported at a future date with this blueprint. Magnum leverages Heat templates to orchestrate the resources required to instantiate a bay. For example, Neutron ports are created, node/master Nova instances are spawned and attached to the Neutron ports, and Kubernetes services and associated configuration files are managed. Masters run control-plane services such as kube-api and kube-scheduler, while nodes run worker services such as the kubelet. The exact services running on a master/node are specific to the coe (Container Orchestration Engine) baymodel attribute.
Magnum nodes communicate over a Neutron network. This allows, for example, the kubelet service running on a node to communicate with the kube-api service running on a master. In addition to node networking, container networking is used to interconnect containers running on Magnum nodes. The details for providing this connectivity depend on the container networking implementation used by the COE. Container networking follows the Magnum Container Networking Model spec. In general, each COE has a default container networking driver that allows Magnum users to instantiate bays without the need to specify any container networking information.
The default container networking implementation of a COE follows the upstream project; e.g., Flannel is the default container networking driver for Kubernetes. The network-driver attribute can be passed to a baymodel to select a container networking driver other than the default. Note: Not every network driver supports every COE; use the Network Driver Matrix to learn more about container networking driver support. In addition to specifying a container networking driver, labels can be passed to a baymodel to modify the driver's default settings. Use the Label Matrix to better understand the labels that each driver supports.
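As an illustration, a non-default driver can be selected and tuned at baymodel creation time. The flag names below follow the python-magnumclient CLI, and the flannel_network_cidr label value is purely illustrative — treat the exact label names and values as assumptions to verify against the Label Matrix:

```
# Illustrative only: explicitly select the Flannel driver and override
# its default network CIDR via a label (the value shown is an assumption).
magnum baymodel-create --name k8sbaymodel \
    --image-id fedora-21-atomic-5 \
    --keypair-id testkey \
    --external-network-id public \
    --coe kubernetes \
    --network-driver flannel \
    --labels flannel_network_cidr=10.100.0.0/16
```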
The Magnum wiki and Quick-Start Guide provide additional background on Magnum concepts and explain how to set up a Magnum development environment using DevStack.
Magnum Networking Details
As previously mentioned, Magnum masters/nodes host the services required to run containers, such as docker, kube-api and kubelet. Nodes are connected to a Neutron network, which is connected to a Neutron router. The Neutron router provides connectivity between the node network and a pre-existing Neutron external network. The external-network-id attribute is used to specify the Neutron external network during a Magnum baymodel-create. Magnum supports multiple COEs; this guide uses the Kubernetes COE for all examples:
magnum baymodel-create --name k8sbaymodel \
    --image-id fedora-21-atomic-5 \
    --keypair-id testkey \
    --external-network-id public \
    --coe kubernetes
Floating IPs are automatically assigned from the external network to the masters and nodes during a bay-create and are used for externally accessing the nodes (e.g. SSH management). The baymodel is referenced during a bay-create to specify attributes such as the Neutron external-network-id:
$ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1
Magnum Heat Template Workflow
Let's break down the Kubernetes Heat templates responsible for orchestrating the bay, including master, node and container networking. Since our example uses a baymodel with the Atomic image, we will focus on the kubecluster, kubemaster and kubeminion templates. Kubecluster is the top-level template, where cluster-wide resources and parameters are defined. Masters and nodes are implemented within kubecluster as resource groups. For more information on Heat resource groups, read Steve Hardy's blog post.
kubecluster creates a Neutron private network and subnet:
fixed_network:
  type: OS::Neutron::Net
  properties:
    name: private

fixed_subnet:
  type: OS::Neutron::Subnet
  properties:
    cidr: {get_param: fixed_network_cidr}
    network: {get_resource: fixed_network}
    dns_nameservers:
      - {get_param: dns_nameserver}
The template then creates a Neutron router and attaches it to the private subnet and external network. Again, the external network is predefined in the baymodel and is not a resource managed by any of the templates:
extrouter:
  type: OS::Neutron::Router
  properties:
    external_gateway_info:
      network: {get_param: external_network}

extrouter_inside:
  type: OS::Neutron::RouterInterface
  properties:
    router_id: {get_resource: extrouter}
    subnet: {get_resource: fixed_subnet}
Security groups are then created:
secgroup_base:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: icmp
      - protocol: tcp
        port_range_min: 22
        port_range_max: 22

secgroup_kube_master:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: tcp
        port_range_min: 7080
        port_range_max: 7080
      - protocol: tcp
        port_range_min: 8080
        port_range_max: 8080
      - protocol: tcp
        port_range_min: 2379
        port_range_max: 2379
      - protocol: tcp
        port_range_min: 2380
        port_range_max: 2380
      - protocol: tcp
        port_range_min: 6443
        port_range_max: 6443
      - protocol: tcp
        port_range_min: 30000
        port_range_max: 32767

secgroup_kube_minion:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: icmp
      - protocol: tcp
      - protocol: udp
The Neutron LBaaS service is implemented for the Kubernetes API and Etcd services. This provides high availability and allows the control plane services to scale out:
api_monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 5
    max_retries: 5
    timeout: 5

api_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: {get_param: loadbalancing_protocol}
    monitors: [{get_resource: api_monitor}]
    subnet: {get_resource: fixed_subnet}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: {get_param: kubernetes_port}

etcd_monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 5
    max_retries: 5
    timeout: 5

etcd_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    monitors: [{get_resource: etcd_monitor}]
    subnet: {get_resource: fixed_subnet}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: 2379
A floating IP is created for the pool of Kubernetes API servers:
api_pool_floating:
  type: OS::Neutron::FloatingIP
  depends_on:
    - extrouter_inside
  properties:
    floating_network: {get_param: external_network}
    port_id: {get_attr: [api_pool, vip, port_id]}
The kubecluster template then moves to the kube_master resource group to orchestrate Kubernetes master-specific resources. The kubemaster template creates a Neutron port and floating IP, and associates the port to the Kubernetes API and Etcd load-balancing pools:
kube_master_eth0:
  type: OS::Neutron::Port
  properties:
    network: {get_param: fixed_network}
    security_groups:
      - {get_param: secgroup_base_id}
      - {get_param: secgroup_kube_master_id}
    fixed_ips:
      - subnet: {get_param: fixed_subnet}
    replacement_policy: AUTO

kube_master_floating:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network: {get_param: external_network}
    port_id: {get_resource: kube_master_eth0}

api_pool_member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_param: api_pool_id}
    address: {get_attr: [kube_master_eth0, fixed_ips, 0, ip_address]}
    protocol_port: {get_param: kubernetes_port}

etcd_pool_member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_param: etcd_pool_id}
    address: {get_attr: [kube_master_eth0, fixed_ips, 0, ip_address]}
    protocol_port: 2379
Heat then goes through a series of steps to configure the Kubernetes master:
kube_master_init:
  type: OS::Heat::MultipartMime
  properties:
    parts:
      - config: {get_resource: disable_selinux}
      - config: {get_resource: write_heat_params}
      - config: {get_resource: configure_etcd}
      - config: {get_resource: kube_user}
      - config: {get_resource: write_kube_os_config}
      - config: {get_resource: make_cert}
      - config: {get_resource: configure_kubernetes}
      - config: {get_resource: add_proxy}
      - config: {get_resource: enable_services}
      - config: {get_resource: write_network_config}
      - config: {get_resource: network_config_service}
      - config: {get_resource: network_service}
      - config: {get_resource: kube_examples}
      - config: {get_resource: master_wc_notify}
Each get_resource entry references a Heat SoftwareConfig resource. Review Steve Hardy's blog post for more details on SoftwareConfig resources. In general, the SoftwareConfig resources within Magnum are a series of cloud-config files or scripts that further configure the master/node. The Kubernetes SoftwareConfig files can be viewed here.
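For a feel of what these fragments contain, here is a minimal, hypothetical cloud-config fragment in the style of the linked files. The path and variable names are modeled on the fragment style, not copied from the actual files; consult the linked fragments for the real content:

```yaml
#cloud-config
# Hypothetical sketch of a Magnum SoftwareConfig fragment: write a
# parameters file that later fragments and services can source.
write_files:
  - path: /etc/sysconfig/heat-params
    owner: "root:root"
    permissions: "0600"
    content: |
      KUBE_API_PORT="8080"
      FLANNEL_NETWORK_CIDR="10.100.0.0/16"
```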
Container Networking Resource Groups
Let's take a closer look at the three SoftwareConfig resources responsible for configuring container networking: write_network_config, network_config_service and network_service. The write_network_config resource manages the Flannel configuration files. The network_config_service resource manages the Flannel configuration service binary and its systemd unit file, and starts the service. The network_service resource exposes the Flannel service to the Docker daemon.
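For context, the Flannel configuration that write_network_config prepares and the configuration service publishes to etcd is a small JSON document. A representative example follows, using illustrative values that correspond to the flannel_network_cidr, flannel_network_subnetlen and flannel_backend labels (the exact values are assumptions, not necessarily Magnum's defaults):

```json
{
  "Network": "10.100.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "udp"
  }
}
```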
What About the Nodes?
Magnum nodes are also known as minions within the Kubernetes COE. Note: The Kubernetes project has completed the naming change from minion to node, but this change has not yet been implemented in Magnum. Magnum nodes follow a workflow similar to the masters'. Like the master template, the kubeminion template creates resources for a Neutron port, floating IP and Nova virtual machine. The template also uses resource groups to further configure the node. The node uses only one resource, network_service, to configure container networking, because the Flannel configuration service runs only on the masters. Nodes simply configure and start the Flannel daemon, which reads its required configuration from etcd.
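The subnet arithmetic behind Flannel's per-node leases can be sketched in a few lines of Python. This is only a sketch of the allocation math, not Flannel's actual lease protocol, and the CIDR and subnet length are illustrative values matching the earlier examples:

```python
import ipaddress

def carve_node_subnets(network_cidr: str, subnet_len: int, node_count: int):
    """Split a cluster-wide Flannel network into per-node subnets.

    Mirrors the arithmetic behind Flannel's subnet leases: each node is
    leased one non-overlapping /subnet_len slice of the cluster CIDR.
    """
    cluster = ipaddress.ip_network(network_cidr)
    slices = cluster.subnets(new_prefix=subnet_len)
    return [str(next(slices)) for _ in range(node_count)]

# With a 10.100.0.0/16 cluster network and /24 node subnets, three
# nodes would be leased the first three /24 slices.
print(carve_node_subnets("10.100.0.0/16", 24, 3))
# → ['10.100.0.0/24', '10.100.1.0/24', '10.100.2.0/24']
```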