StarlingX/Networking

= StarlingX Networking Sub-project =

Team Information

 * Project Lead: Steven Webster 
 * Technical Lead: Steven Webster 
 * Contributors: Steven Webster; Matt Peters; Ghada Khalil; Teresa Ho; Cole Walker; Douglas Koerich; Andre Kantek
 * Past Contributors: Ruijing Guo ; Brent Rowsell ; Allain Legacy ; Joseph Richard ; Patrick Bonnell ; Kailun Qin ; Huifeng Le ; Chenjie Xu <chenjie.xu@intel.com>; Le Yao <le.yao@intel.com>; Forrest Zhao <forrest.zhao@intel.com>; Yi C Wang <yi.c.wang@intel.com>

Team Meeting

 * Bi-weekly Meetings:
 * Thursday 9:15am Eastern / 6:15am Pacific / 9:15pm (summer) or 10:15pm (winter) China
 * Zoom Details: https://wiki.openstack.org/wiki/Starlingx/Meetings#6:15am_Pacific_-_Networking_Team_Call_.28Bi-weekly.29
 * Meeting Agenda and Minutes:
 * https://etherpad.openstack.org/p/stx-networking

Team Objective / Priorities

 * Responsible for developing features and addressing bugs related to StarlingX networking

Tags
All StoryBoard stories and Launchpad bugs created for this team should use the tag "stx.networking".

Team Work Items

 * Story Board
 * All
 * Active Stories
 * Merged Stories
 * Launchpad Bugs
 * All
 * Open Bugs
 * Fixed Bugs

Status

 * See etherpad for weekly status updates: https://etherpad.openstack.org/p/stx-networking

Useful Networking Commands

 * When deploying OVS-DPDK, VMs must be configured to use a flavor with the property hw:mem_page_size=large
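As a sketch of the flavor setup (the flavor name m1.dpdk and its sizes are illustrative, not part of the original page; an admin OpenStack session is assumed):

```shell
# Illustrative flavor; requires an admin session (e.g. export OS_CLOUD=openstack_helm)
openstack flavor create --ram 4096 --disk 20 --vcpus 2 m1.dpdk
# Request huge pages so the VM can attach to OVS-DPDK vhost-user interfaces
openstack flavor set --property hw:mem_page_size=large m1.dpdk
# Confirm the property is present
openstack flavor show m1.dpdk -c properties
```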


 * Configuring Networking Features
 * Consult the configuration section of the OpenStack Networking Guide for how to configure the different features supported by neutron
 * For StarlingX, the configuration must be specified using helm-overrides; a direct change to the neutron.conf file is not supported. See the examples below.
 * Useful References:
 * StarlingX Armada manifest
 * openstack-helm neutron values
 * For more information on helm-overrides in StarlingX, consult the StarlingX containers FAQ

 * Using helm-overrides to enable the qos extension for neutron
 * 1) Create a yaml file to enable the qos extension for neutron:

cat > neutron-overrides.yaml <<EOF
conf:
  neutron:
    DEFAULT:
      service_plugins:
        - router
        - network_segment_range
        - qos
  plugins:
    ml2_conf:
      ml2:
        extension_drivers:
          - port_security
          - qos
    openvswitch_agent:
      agent:
        extensions:
          - qos
EOF

 * 2) Update the neutron overrides and apply to stx-openstack:

source /etc/platform/openrc
system helm-override-update stx-openstack neutron openstack --values neutron-overrides.yaml
system application-apply stx-openstack

 * 3) In a separate shell, create the qos policy:

export OS_CLOUD=openstack_helm
openstack network qos policy create bw-limit
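A policy with no rules has no effect; as a sketch, a bandwidth-limit rule can be attached to the bw-limit policy and the policy applied to a port (the port name vm1-port and the rate values are hypothetical):

```shell
# Attach an egress bandwidth-limit rule to the bw-limit policy
openstack network qos rule create --type bandwidth-limit \
  --max-kbps 10000 --max-burst-kbits 8000 --egress bw-limit
# Apply the policy to an existing port; "vm1-port" is a hypothetical name
openstack port set --qos-policy bw-limit vm1-port
```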

 * Using helm-overrides to enable the trunk extension for neutron
 * 1) Create a yaml file to enable the trunk extension for neutron:

cat > neutron-overrides.yaml <<EOF
conf:
  neutron:
    DEFAULT:
      service_plugins:
        - router
        - network_segment_range
        - trunk
EOF

 * 2) Update the neutron overrides and apply to stx-openstack:

source /etc/platform/openrc
system helm-override-update stx-openstack neutron openstack --values neutron-overrides.yaml
system application-apply stx-openstack

 * 3) In a separate shell, verify that the Trunk Extension and Trunk port details extensions are enabled:

export OS_CLOUD=openstack_helm
openstack extension list --network | grep -i trunk
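Once the extension is enabled, a trunk can be exercised as follows (a sketch; the port names parent-port0 and subport0 are hypothetical and must already exist):

```shell
# Create a trunk from an existing parent port
openstack network trunk create --parent-port parent-port0 trunk0
# Add a VLAN subport carried over the trunk
openstack network trunk set \
  --subport port=subport0,segmentation-type=vlan,segmentation-id=100 trunk0
# Inspect the trunk and its subports
openstack network trunk show trunk0
```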

 * Using helm-overrides to enable internal dns
 * 1) Create a yaml file to enable internal dns resolution for neutron:

cat > neutron-overrides.yaml <<EOF
conf:
  neutron:
    DEFAULT:
      dns_domain: example.ca
  plugins:
    ml2_conf:
      ml2:
        extension_drivers:
          - port_security
          - dns
EOF

 * 2) Update the neutron overrides and apply to stx-openstack:

source /etc/platform/openrc
system helm-override-update stx-openstack neutron openstack --values neutron-overrides.yaml
system application-apply stx-openstack
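With the dns extension driver enabled, internal DNS can be checked by creating a port with a DNS name and inspecting its assignment (a sketch; the network name net0 and port name dns-port0 are hypothetical):

```shell
# Create a port with a DNS name on an existing tenant network
openstack port create --network net0 --dns-name vm1 dns-port0
# dns_assignment should combine the dns_name with dns_domain (example.ca above)
openstack port show dns-port0 -c dns_name -c dns_assignment
```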

 * Using helm-overrides to add the configuration rpc_response_max_timeout in neutron.conf
 * The maximum rpc timeout is configurable via rpc_response_max_timeout in the neutron config, instead of being calculated as 10 * rpc_response_timeout. If the maximum rpc timeout is too large, requests that should fail are held for a long time before the server returns the failure. If it is too small and the server is very busy, requests may need more time than the maximum allows and will fail even though they would succeed with a larger value.
 * 1) Create a yaml file to add the configuration rpc_response_max_timeout in neutron.conf:

cat > neutron-overrides.yaml <<EOF
conf:
  neutron:
    DEFAULT:
      rpc_response_max_timeout: 600
EOF

 * 2) Update the neutron overrides and apply to stx-openstack:

source /etc/platform/openrc
system helm-override-update stx-openstack neutron openstack --values neutron-overrides.yaml
system application-apply stx-openstack

 * 3) Verify that the configuration rpc_response_max_timeout has been added in neutron.conf:

kubectl get pod -n openstack | grep neutron
kubectl exec -it $neutron-server -n openstack bash
cat /etc/neutron/neutron.conf | grep rpc_response_max_timeout

 * Using Calico global network policy to allow access to a host service
 * 1) Create a GlobalNetworkPolicy for VIM webserver access:

kubectl apply -f - <<EOF
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: allow-vim-webserver
spec:
  ingress:
    - action: Allow
      destination:
        ports:
          - 32323
      protocol: TCP
  order: 500
  selector: has(iftype) && iftype == 'oam'
  types:
    - Ingress
EOF
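The applied policy can then be confirmed through the Calico CRD (a sketch; assumes kubectl access on the controller):

```shell
# List all Calico global network policies
kubectl get globalnetworkpolicies.crd.projectcalico.org
# Inspect the policy created above, including its selector and ingress rule
kubectl get globalnetworkpolicy allow-vim-webserver -o yaml
```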

 * Configure SR-IOV with OpenStack
 * 1) Configure SR-IOV on your interface (such as enp65s0f0):

export COMPUTE=controller-0
PHYSNET0='physnet0'
system host-lock ${COMPUTE}
system datanetwork-add ${PHYSNET0} vlan
system host-if-list -a ${COMPUTE}
system host-if-modify -m 1500 -n sriov -c pci-sriov -N 5 ${COMPUTE} ${DATA0IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-list ${COMPUTE}
system host-unlock ${COMPUTE}

 * 2) Create an instance on the SR-IOV interface (make sure stx-openstack has been re-applied successfully):

system application-list
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PUBLICNET='public-net0'
PUBLICSUBNET='public-subnet0'
openstack network segment range create ${PHYSNET0}-a --network-type vlan --physical-network ${PHYSNET0} --minimum 400 --maximum 499 --private --project ${ADMINID}
openstack network create --project ${ADMINID} --provider-network-type=vlan --provider-physical-network=${PHYSNET0} --provider-segment=400 ${PUBLICNET}
openstack subnet create --project ${ADMINID} ${PUBLICSUBNET} --network ${PUBLICNET} --subnet-range 192.168.101.0/24
openstack image create --container-format bare --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img cirros
openstack image list
net_id=`neutron net-show ${PUBLICNET} | grep "\ id\ " | awk '{ print $4 }'`
port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
openstack server create --flavor m1.tiny --image cirros --nic port-id=$port_id test-sriov
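After the host unlocks, VF creation can be sanity-checked on the host itself (a sketch; enp65s0f0 is the example interface name from above, substitute your own):

```shell
# Should report the number of VFs requested with host-if-modify -N (5 above)
cat /sys/class/net/enp65s0f0/device/sriov_numvfs
# Lists the PF along with its VFs and their MAC addresses
ip link show enp65s0f0
```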

 * Configure the flat network
 * 1) If the interface has been bound to another datanetwork, remove the binding:

system host-lock ${COMPUTE}
system interface-datanetwork-list ${COMPUTE}
system interface-datanetwork-remove ${PHYSNETDATA0UUID}

 * 2) Create a flat datanetwork and bind it to the interface:

system datanetwork-add phy-flat flat
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} phy-flat
system host-unlock ${COMPUTE}

 * 3) Check that the application has been re-applied successfully:

system application-list

 * 4) Check that the pods are initialized correctly:

kubectl -n openstack get pod | grep -v Running | grep -v Complete

 * 5) Create a flat network connected to the phy-flat datanetwork:

export OS_CLOUD=openstack_helm
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
openstack network create --project ${ADMINID} --provider-network-type=flat --provider-physical-network=phy-flat netflat
openstack subnet create --project ${ADMINID} netflat-subnet --network netflat --subnet-range 192.168.103.0/24

 * 6) Create a server, and ping it to check that the network is set up correctly if needed:

openstack server create --image cirros --flavor m1.tiny --network netflat vm1


 * Configure the vxlan network
 * 1) Configure vxlan on your interface (such as eth1000):

DATA0IF=eth1000
export COMPUTE=controller-0
PHYSNET0='physnet0'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
source /etc/platform/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
system datanetwork-add ${PHYSNET0} vxlan --multicast_group 224.0.0.1 --ttl 255 --port_num 4789
system host-if-modify --ipv4-mode static -m 1574 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-addr-add ${COMPUTE} ${DATA0IFUUID} 192.168.100.30 24
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}

 * 2) Repeat for the other controller node, changing the IP but staying on the same subnet, e.g. 192.168.100.31 24
 * 3) Create a vxlan network on openstack:

export OS_CLOUD=openstack_helm
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
openstack network segment range create net-vxlan-a --network-type vxlan --minimum 400 --maximum 499 --private --project ${ADMINID}
openstack network create --project ${ADMINID} --provider-network-type=vxlan --provider-segment=400 netvxlan
openstack subnet create --project ${ADMINID} netvxlan-subnet --network netvxlan --subnet-range 192.168.102.0/24

 * 4) Create two servers, and ping between them to check that the network is set up correctly if needed:

openstack server create --image cirros --flavor m1.tiny --network netvxlan vm1
openstack server create --image cirros --flavor m1.tiny --network netvxlan vm2
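While pinging between vm1 and vm2, the encapsulation can be observed on the host (a sketch; the capture device name depends on your host, data0 here stands in for the actual data interface):

```shell
# Capture VXLAN-encapsulated traffic on the configured UDP port (4789 above)
tcpdump -n -i data0 udp port 4789
```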

 * Pass through a physical NIC to a VM by binding a port with vnic_type direct-physical to the VM
 * 1) This method should be used only with NICs that support SR-IOV. If the file "/sys/class/net/$IF_NAME/device/sriov_numvfs" exists, the NIC supports SR-IOV. For a NIC that does not support SR-IOV, such as the i210, use the method "Pass through a physical NIC to VM by using PCI PASSTHROUGH" with device_type "type-PCI".
 * 2) Use the system commands to configure the interfaces. One is used for PCI passthrough and the other is a normal interface:

export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system host-lock ${COMPUTE}
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-list -a ${COMPUTE}
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-list ${COMPUTE}
system host-unlock ${COMPUTE}
# make sure stx-openstack has been re-applied successfully
system application-list

 * 3) Create a keypair and a security group:

mkdir -p /home/sysadmin/.ssh/
vi /home/sysadmin/.ssh/id_rsa
openstack keypair create key1 --private-key /home/sysadmin/.ssh/id_rsa
openstack security group create security1
openstack security group rule create --ingress --protocol icmp --remote-ip 0.0.0.0/0 security1
openstack security group rule create --ingress --protocol tcp --remote-ip 0.0.0.0/0 security1
openstack security group rule create --ingress --protocol udp --remote-ip 0.0.0.0/0 security1

 * 4) Create the networks and subnets, and upload the ubuntu image:

export OS_CLOUD=openstack_helm
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET0='public-net0'
PUBLICNET1='public-net1'
PUBLICSUBNET0='public-subnet0'
PUBLICSUBNET1='public-subnet1'
openstack network segment range create ${PHYSNET0}-a --network-type vlan --physical-network ${PHYSNET0} --minimum 400 --maximum 499 --private --project ${ADMINID}
openstack network segment range create ${PHYSNET1}-a --network-type vlan --physical-network ${PHYSNET1} --minimum 500 --maximum 599 --private --project ${ADMINID}
openstack network create --project ${ADMINID} --provider-network-type=vlan --provider-physical-network=${PHYSNET0} --provider-segment=400 ${PUBLICNET0}
openstack network create --project ${ADMINID} --provider-network-type=vlan --provider-physical-network=${PHYSNET1} --provider-segment=500 ${PUBLICNET1}
openstack subnet create --project ${ADMINID} ${PUBLICSUBNET0} --network ${PUBLICNET0} --subnet-range 192.168.101.0/24
openstack subnet create --project ${ADMINID} ${PUBLICSUBNET1} --network ${PUBLICNET1} --subnet-range 192.168.102.0/24
wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create --container-format bare --disk-format qcow2 --file xenial-server-cloudimg-amd64-disk1.img ubuntu
openstack image list

 * 5) Create a PF port whose vnic_type is direct-physical:

net_id=`neutron net-show ${PUBLICNET0} | grep "\ id\ " | awk '{ print $4 }'`
port_id=`neutron port-create $net_id --name pf-port --binding:vnic_type direct-physical | grep "\ id\ " | awk '{ print $4 }'`

 * 6) Create a VM with the PF port and one normal port which is used to ssh to the VM:

openstack server create --image ubuntu --flavor m1.small --nic port-id=$port_id --network ${PUBLICNET1} --security-group security1 --key-name key1 test-pci

 * Pass through a physical NIC to a VM by using PCI PASSTHROUGH
 * 1) Use the system commands to configure the interfaces. One is used for PCI passthrough and the other is a normal interface:

export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system host-lock ${COMPUTE}
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-list -a ${COMPUTE}
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-list ${COMPUTE}

 * 2) Create a yaml file to configure the pci alias in nova.conf:

cat > nova-overrides.yaml <<EOF
conf:
  nova:
    DEFAULT:
      debug: True
    pci:
      alias:
        type: multistring
        values:
          - '{"vendor_id": "8086", "product_id": "37d2", "device_type": "type-PF", "name": "intel-X722-pf"}'
EOF

You can retrieve the vendor_id and product_id with the command "lspci -nn | grep -i eth". For example:

lspci -nn | grep -i eth
41:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X722 for 10GBASE-T [8086:37d2] (rev 04)

Here 8086 is the vendor_id and 37d2 is the product_id. device_type can be one of three values:
 * type-PCI: for a NIC which doesn't support SR-IOV, such as the i210.
 * type-PF: for a NIC which supports SR-IOV; allows you to pass through the PF to be controlled by the VM. This is sometimes useful in NFV use cases.
 * type-VF: for a NIC which supports SR-IOV; allows you to pass through VFs to the VMs.

 * 3) Update the nova overrides and apply to stx-openstack:

system helm-override-update stx-openstack nova openstack --values nova-overrides.yaml

 * 4) Unlock the host and make sure stx-openstack has been re-applied successfully:

system host-unlock ${COMPUTE}
system application-list

 * 5) Create a keypair and a security group:

mkdir -p /home/sysadmin/.ssh/
vi /home/sysadmin/.ssh/id_rsa
openstack keypair create key1 --private-key /home/sysadmin/.ssh/id_rsa
openstack security group create security1
openstack security group rule create --ingress --protocol icmp --remote-ip 0.0.0.0/0 security1
openstack security group rule create --ingress --protocol tcp --remote-ip 0.0.0.0/0 security1
openstack security group rule create --ingress --protocol udp --remote-ip 0.0.0.0/0 security1

 * 6) Create a flavor and set the property "pci_passthrough:alias". When you create a VM using a flavor with this property, nova knows the VM shall be passed a physical NIC selected by the alias:

openstack flavor create --ram 4096 --disk 100 --vcpus 2 m1.medium.pci_passthrough
openstack flavor set --property "pci_passthrough:alias"="intel-X722-pf:1" m1.medium.pci_passthrough

 * 7) Create the network and subnet, and upload the ubuntu image:

export OS_CLOUD=openstack_helm
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET1='physnet1'
PUBLICNET1='public-net1'
PUBLICSUBNET1='public-subnet1'
openstack network segment range create ${PHYSNET1}-a --network-type vlan --physical-network ${PHYSNET1} --minimum 500 --maximum 599 --private --project ${ADMINID}
openstack network create --project ${ADMINID} --provider-network-type=vlan --provider-physical-network=${PHYSNET1} --provider-segment=500 ${PUBLICNET1}
openstack subnet create --project ${ADMINID} ${PUBLICSUBNET1} --network ${PUBLICNET1} --subnet-range 192.168.102.0/24
wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create --container-format bare --disk-format qcow2 --file xenial-server-cloudimg-amd64-disk1.img ubuntu
openstack image list

 * 8) Create the VM with the following command:

openstack server create --image ubuntu --flavor m1.medium.pci_passthrough --network ${PUBLICNET1} --security-group security1 --key-name key1 test-pci
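Once test-pci boots, the passthrough can be verified from inside the guest (a sketch; assumes you can ssh to the VM via its normal port):

```shell
# Inside the VM: the passed-through X722 PF should appear as a PCI device
# with the vendor:device pair from the alias above (8086:37d2)
lspci -nn | grep -i 8086:37d2
```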

Deploying and Running TSN application in STX Virtual Machine Workload Mode
Reference page: Deploying and Running TSN Application