Revision as of 20:41, 9 December 2015
Magnum Networking Overview
Networking within Magnum is separated into two parts: node/master networking and container networking. Magnum nodes/masters are implemented as Nova virtual machines; bare-metal machines will be supported at a future date with this blueprint. Magnum leverages Heat templates to orchestrate the resources required to instantiate a bay. For example, Neutron ports are created, Nova instances for the nodes/masters are spawned and attached to those ports, Kubernetes services and associated configuration files are managed, and so on. Masters run control plane services such as kube-api and kube-scheduler, while nodes run worker services such as the kubelet. The exact services running on a master/node are specific to the coe (Container Orchestration Engine) baymodel attribute.
Magnum nodes communicate over a Neutron network. This allows, for example, the kubelet service running on a node to communicate with the kube-api service running on a master. In addition to node networking, container networking is used to interconnect containers running on Magnum nodes. The details of providing this connectivity depend on the container networking implementation used by the COE. Container networking follows the Magnum Container Networking Model spec. In general, each COE has a default container networking driver, which allows Magnum users to instantiate bays without specifying any container networking information.
The default container networking implementation of a COE follows the upstream project; e.g., Flannel is the default container networking driver for Kubernetes. The network-driver attribute can be passed to a baymodel to select a container networking driver other than the default. Note: not every network driver supports every COE; use the Network Driver Matrix to learn more about container networking driver support. In addition to specifying a container networking driver, labels can be passed to a baymodel to modify the driver's default settings. Use the Label Matrix to better understand the labels that each driver supports.
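As a sketch, the driver and its labels are both set at baymodel-create time. The label names below (flannel_network_cidr, flannel_backend) are Flannel-specific labels; consult the Label Matrix for the set supported by your Magnum release:

```shell
# Illustrative only: select the Flannel driver explicitly and tune it
# via labels instead of accepting the driver defaults.
magnum baymodel-create --name k8sflannelmodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id testkey \
                       --external-network-id public \
                       --coe kubernetes \
                       --network-driver flannel \
                       --labels flannel_network_cidr=10.100.0.0/16,flannel_backend=vxlan
```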
The Magnum wiki and Quick-Start Guide provide additional background on Magnum concepts and on setting up a Magnum development environment using DevStack.
Magnum Networking Details
As previously mentioned, masters/nodes host the services required to run containers, such as docker, kube-api, and kubelet. Nodes are connected to a Neutron network, which is connected to a Neutron router. The Neutron router provides connectivity between the node network and a pre-existing Neutron external network. The external-network-id attribute specifies the Neutron external network during a Magnum baymodel-create:
$ magnum baymodel-create --name k8sbaymodel \
                         --image-id fedora-21-atomic-5 \
                         --keypair-id testkey \
                         --external-network-id public \
                         --dns-nameserver 8.8.8.8 \
                         --flavor-id m1.small \
                         --docker-volume-size 5 \
                         --network-driver flannel \
                         --coe kubernetes
Floating IPs are automatically assigned from the external network to the masters and nodes during bay-create and are used for external access to the nodes (e.g. SSH management). When a bay is created, baymodel attributes such as external-network-id supply required and optional settings:
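Once the bay is active, the assigned addresses can be inspected from the client side. The field names shown in the comments (api_address, node_addresses) reflect the bay-show output of this era and may vary by release:

```shell
# Illustrative only: inspect the addresses Magnum assigned to the bay.
magnum bay-show k8sbay    # look for api_address, master_addresses, node_addresses
nova list                 # shows the fixed and floating IPs of the master/node VMs
```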
$ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1
Let's break down how the Kubernetes Heat templates orchestrate networking. Since our example uses a baymodel with the Atomic image, we will focus on the kubecluster, kubemaster and kubeminion templates. kubecluster is the top-level template, where cluster-wide resources and parameters are defined and masters/nodes are implemented as resource groups. For more information on Heat resource groups, read Steve Hardy's blog post.
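To make the nesting concrete, here is a sketch of how kubecluster wraps the minion template in an OS::Heat::ResourceGroup; the property list is abbreviated from the actual template, so treat the exact names as illustrative:

```yaml
# Sketch: kubecluster scales the minion template via a resource group.
# "count" is driven by a stack parameter, so growing the bay is a
# stack-update that changes number_of_minions.
kube_minions:
  type: OS::Heat::ResourceGroup
  depends_on:
    - extrouter_inside
  properties:
    count: {get_param: number_of_minions}
    resource_def:
      type: kubeminion.yaml
      properties:
        fixed_network: {get_resource: fixed_network}
        fixed_subnet: {get_resource: fixed_subnet}
        external_network: {get_param: external_network}
```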
kubecluster creates a Neutron private network and subnet:
fixed_network:
  type: OS::Neutron::Net
  properties:
    name: private

fixed_subnet:
  type: OS::Neutron::Subnet
  properties:
    cidr: {get_param: fixed_network_cidr}
    network: {get_resource: fixed_network}
    dns_nameservers:
      - {get_param: dns_nameserver}
The template then creates a Neutron router and attaches it to the private subnet and external network. Again, the external network is predefined in the baymodel and is not a resource managed by any of the templates:
extrouter:
  type: OS::Neutron::Router
  properties:
    external_gateway_info:
      network: {get_param: external_network}

extrouter_inside:
  type: OS::Neutron::RouterInterface
  properties:
    router_id: {get_resource: extrouter}
    subnet: {get_resource: fixed_subnet}
Security groups are then created:
secgroup_base:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: icmp
      - protocol: tcp
        port_range_min: 22
        port_range_max: 22

secgroup_kube_master:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: tcp
        port_range_min: 7080
        port_range_max: 7080
      - protocol: tcp
        port_range_min: 8080
        port_range_max: 8080
      - protocol: tcp
        port_range_min: 2379
        port_range_max: 2379
      - protocol: tcp
        port_range_min: 2380
        port_range_max: 2380
      - protocol: tcp
        port_range_min: 6443
        port_range_max: 6443
      - protocol: tcp
        port_range_min: 30000
        port_range_max: 32767

secgroup_kube_minion:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - protocol: icmp
      - protocol: tcp
      - protocol: udp
The Neutron LBaaS service is used for the Kubernetes API and Etcd services. This provides high availability and allows us to scale out control plane services:
api_monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 5
    max_retries: 5
    timeout: 5

api_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: {get_param: loadbalancing_protocol}
    monitors: [{get_resource: api_monitor}]
    subnet: {get_resource: fixed_subnet}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: {get_param: kubernetes_port}

etcd_monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 5
    max_retries: 5
    timeout: 5

etcd_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    monitors: [{get_resource: etcd_monitor}]
    subnet: {get_resource: fixed_subnet}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: 2379
A floating IP is created for the pool of Kubernetes API servers:
api_pool_floating:
  type: OS::Neutron::FloatingIP
  depends_on:
    - extrouter_inside
  properties:
    floating_network: {get_param: external_network}
    port_id: {get_attr: [api_pool, vip, port_id]}
The kubecluster template then moves to the kube_master resource group to orchestrate Kubernetes master-specific resources. The kubemaster template (https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubemaster.yaml) creates a Neutron port and a floating IP, and adds the port to the Kubernetes API and Etcd load-balancing pools.
kube_master_eth0:
  type: OS::Neutron::Port
  properties:
    network: {get_param: fixed_network}
    security_groups:
      - {get_param: secgroup_base_id}
      - {get_param: secgroup_kube_master_id}
    fixed_ips:
      - subnet: {get_param: fixed_subnet}
    replacement_policy: AUTO

kube_master_floating:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network: {get_param: external_network}
    port_id: {get_resource: kube_master_eth0}

api_pool_member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_param: api_pool_id}
    address: {get_attr: [kube_master_eth0, fixed_ips, 0, ip_address]}
    protocol_port: {get_param: kubernetes_port}

etcd_pool_member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_param: etcd_pool_id}
    address: {get_attr: [kube_master_eth0, fixed_ips, 0, ip_address]}
    protocol_port: 2379
Heat then goes through a series of steps to configure the Kubernetes master:
kube_master_init:
  type: OS::Heat::MultipartMime
  properties:
    parts:
      - config: {get_resource: disable_selinux}
      - config: {get_resource: write_heat_params}
      - config: {get_resource: configure_etcd}
      - config: {get_resource: kube_user}
      - config: {get_resource: write_kube_os_config}
      - config: {get_resource: make_cert}
      - config: {get_resource: configure_kubernetes}
      - config: {get_resource: add_proxy}
      - config: {get_resource: enable_services}
      - config: {get_resource: write_network_config}
      - config: {get_resource: network_config_service}
      - config: {get_resource: network_service}
      - config: {get_resource: kube_examples}
      - config: {get_resource: master_wc_notify}
Each get_resource entry is a Heat SoftwareConfig resource. Review Steve Hardy's blog post for more details on SoftwareConfig resources. In general, the SoftwareConfig resources within Magnum are a series of cloud-config files or scripts that further configure the master/node. The Kubernetes SoftwareConfig files can be viewed here.
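As a sketch of the pattern these fragments follow, a shell-script fragment is typically pulled in with get_file and has Heat parameters substituted into it with str_replace. The specific group value and parameter name below are illustrative; check the fragment files themselves for the exact substitutions each script expects:

```yaml
# Sketch: a shell-script SoftwareConfig fragment. Heat reads the script
# from the repository with get_file and rewrites placeholder strings in
# it (here a hypothetical $FLANNEL_NETWORK_CIDR) with stack parameters.
write_network_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: ungrouped
    config:
      str_replace:
        template: {get_file: fragments/write-network-config.sh}
        params:
          "$FLANNEL_NETWORK_CIDR": {get_param: flannel_network_cidr}
```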
Let's take a closer look at the write_network_config, network_config_service and network_service SoftwareConfig resources.