= Deploying and Running a TSN Application in StarlingX Virtual Machine Workload Mode =

=== Introduction ===

Embedded sectors such as automotive, industrial, and professional audio/video networking, as well as blockchain and high-frequency trading, have emphasized the need for real-time networks. Common LAN models are based on Internet Protocols and the IEEE 802 architecture, and most of their operation is best-effort, which is not suitable for use cases (especially in edge computing) that require high, known, deterministic availability.

Time Sensitive Networking (TSN) is a set of evolving, vendor-neutral standards developed by the IEEE 802.1 Working Group that aim to guarantee determinism in delivering time-sensitive traffic with low, bounded latency and infinitesimal packet loss over the network, while allowing non-time-sensitive traffic to be carried through the same network. It is also a key technology for the edge computing segments above.

StarlingX (STX) is a complete cloud infrastructure software stack for the edge that supports running workloads in both virtual machine and container environments.

This wiki introduces how to deploy and run TSN applications in an STX virtual machine workload. Sample TSN reference applications are taken from [1], with a focus on 2 key use cases:

(1) IEEE 802.1Qav or Credit Based Shaper (CBS)

Ensures bounded transmission latency for time-sensitive, loss-sensitive real-time data streams in critical user scenarios. For instance, when time-sensitive traffic and best-effort traffic are transmitted together, users require that the bandwidth and latency of the time-sensitive traffic be protected in the midst of overloaded traffic on the same network, i.e. that the time-sensitive traffic maintain a constant transmission rate and latency.

(2) IEEE 802.1Qbv or Time Aware Shaper (TAS)

Creates a protected transmission window for scheduled traffic, which requires low, bounded transmission latency. Scheduled traffic is the term used in IEEE 802.1Qbv for periodic traffic such as industrial automation control frames. This type of traffic is short in frame length and requires immediate transmission when its schedule starts.

=== Requirements ===

The TSN reference applications have been verified in the following environment:

{| class="wikitable"
|-
| Hardware ||
* Edge cloud platform: meets the STX requirements: [https://docs.starlingx.io/deploy_install_guides/current/index.html https://docs.starlingx.io/deploy_install_guides/current/index.html]
* Edge device: Intel® Core™ Processor
* [https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/i210-ethernet-controller-datasheet.pdf Intel® Ethernet Controller I210]
|-
| Software ||
* Linux Kernel 4.19.04+
|}

==== Demo Environment Setup ====

The following diagram shows the setup for the demo:

[[File:Stx-tsn-demo-environment.png]]

The demo includes the following hardware and software components:
* Edge cloud platform: built on STX All-In-One (AIO), which provides the IaaS infrastructure for the edge cloud environment
* Edge device: a device at the edge that generates TSN data for processing
* Intel® Ethernet Controller I210: installed on both the edge cloud platform node and the edge device, connected by a CAT-5E Ethernet cable. igb.ko is its Linux kernel driver, which is included in Linux
* Edge Gateway: a virtual machine created by STX that works as a gateway to collect TSN data, perform edge computing, and then send the results to the data center. The I210 adapter is exposed to the edge gateway from the host node through OpenStack Nova's PCI pass-through support
* ptp4l, phc2sys: utilities from the LinuxPTP project that support time synchronization over PTP (sample invocations are shown after this list)
* tc: utility/command used to configure Traffic Control in the Linux kernel
* TSN Sender/Receiver: TSN reference applications from [1] to send/receive TSN data for processing
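
For illustration, the two LinuxPTP daemons can be started as below. This is a minimal sketch assuming the I210 interface name is [iface] and the gPTP profile shipped in the LinuxPTP source tree (configs/gPTP.cfg) is used; it is not necessarily the exact command line used in the demo:

 # run on both nodes; gPTP (IEEE 802.1AS) elects the grandmaster, here the Edge gateway
 ptp4l -i [iface] -f linuxptp-code/configs/gPTP.cfg -m &
 # once ptp4l is synchronized, discipline the system clock from the NIC's PTP hardware clock
 phc2sys -s [iface] -c CLOCK_REALTIME -w -m &
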
=== Walk through to deploy TSN applications on STX ===

* <big>Edge Cloud Platform</big>

1. Install STX environment: follow the guide [2] to install an STX environment (e.g. STX AIO).

2. Prepare a Virtual Machine image

Create an image tsn_ubuntu_19_04.img based on Ubuntu 19.04 with the below required binaries:

 # linuxPTP
 git clone https://git.code.sf.net/p/linuxptp/code linuxptp-code
 cd linuxptp-code
 make
 make install
 # iproute2
 apt install bison flex elfutils -y
 git clone http://git.kernel.org/pub/scm/network/iproute2/iproute2.git
 cd iproute2
 make
 make install

Upload the image to OpenStack Glance and enable PCI pass-through in the flavor properties:

 # upload image
 openstack image create --container-format bare --disk-format qcow2 --file tsn_ubuntu_19_04.img --public tsn-ubuntu-19-04
 # add the pci-passthrough property to a flavor (e.g. m1.medium); "h210-1" is the alias name of the PCI device configured in nova.conf
 openstack flavor set m1.medium --property pci_passthrough:alias=h210-1:1

3. Configure OpenStack Nova to allow PCI pass-through

Create a nova-tsn-pt.yaml file to allow PCI pass-through for the I210 adapter (e.g. device id: 8086:1533):

 conf:
   nova:
     pci:
       alias:
         type: multistring
         values:
         - '{"vendor_id": "8086", "product_id": "1533", "device_type": "type-PCI", "name": "h210-1"}'
       passthrough_whitelist:
         type: multistring
         values:
         - '{"vendor_id": "8086", "product_id": "1533"}'
 overrides:
   nova_compute:
     hosts:
     - conf:
         nova:
           DEFAULT:
             my_ip: {host_ip}
             shared_pcpu_map: '""'
             vcpu_pin_set: '"2-5"'
           libvirt:
             images_type: default
             live_migration_inbound_addr: {host_ip}
           pci:
             passthrough_whitelist:
               type: multistring
               values:
               - '{"vendor_id": "8086", "product_id": "1533"}'
           vnc:
             vncserver_listen: 0.0.0.0
             vncserver_proxyclient_address: {host_ip}
       name: {controller_name}

'''Note''': the configurations besides pci under the nova_compute hosts entry, such as libvirt and vnc (which are generated from the STX compute node information), are also required. This is because the openstack-helm override mechanism for lists (e.g. hosts) replaces the entire content of the list rather than merging individual configuration items of each list element.

Enable the nova PCI pass-through configuration in STX:

 # set pci-passthrough config
 system helm-override-update stx-openstack nova openstack --values nova-tsn-pt.yaml
 system application-apply stx-openstack
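
To confirm the override landed as intended (in particular that the whole hosts list was re-specified, as explained in the note above), the user overrides for the chart can be inspected; a minimal sketch using the StarlingX helm-override tooling:

 # show the resulting user overrides for the nova chart
 system helm-override-show stx-openstack nova openstack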

4. Create a Virtual Machine instance and install the TSN reference applications

Create the virtual machine instance:

 openstack server create --image tsn-ubuntu-19-04 --network ${network_uuid} --flavor m1.medium tsn-demo
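
Once the instance is up, it is worth verifying that the I210 was actually passed through; a minimal check from inside the guest, assuming pciutils is installed in the image:

 # inside the tsn-demo VM: the I210 should show up as PCI device 8086:1533
 lspci -nn | grep -i 1533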

Install the TSN reference applications and other test applications:

Follow the instructions in [1] to compile and install the below applications in the VM instance (sample iperf3 invocations are shown after this list):

(1) iperf3: runs in server mode to receive best-effort traffic

(2) simple_listener: a TSN test application which receives IEEE 1722 Class A traffic; it is used to test the IEEE 802.1Qav or Credit Based Shaper use case

(3) sample-app-taprio: a TSN test application which receives traffic and measures Tx latency; it is used to test the IEEE 802.1Qbv or Time Aware Shaper use case
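
For reference, the best-effort background load can be driven with a plain iperf3 pair; the bandwidth and duration below are illustrative placeholders, not the exact figures used in the demo:

 # on the Edge gateway VM: receive best-effort traffic
 iperf3 -s
 # on the Edge device: push UDP best-effort traffic towards the gateway
 iperf3 -c ${gateway_ip} -u -b 800M -t 3600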


* <big>Edge Device</big>

1. Install OS and required libraries

Install Ubuntu 19.04.

Install the required libraries:

 # linuxPTP
 git clone https://git.code.sf.net/p/linuxptp/code linuxptp-code
 cd linuxptp-code
 make
 make install
 # iproute2
 apt install bison flex elfutils -y
 git clone http://git.kernel.org/pub/scm/network/iproute2/iproute2.git
 cd iproute2
 make
 make install
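
Before running the demos, it is worth verifying that the I210 interface exposes hardware timestamping and a PTP hardware clock, since ptp4l and the etf qdisc (LaunchTime) rely on them; the interface name below is a placeholder:

 # list the time-stamping capabilities and PTP hardware clock index of the NIC
 ethtool -T [iface]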

2. Install TSN reference applications and other test applications

Follow the instructions in [1] to compile and install the below applications on the device:

(1) iperf3: runs in server mode to receive best-effort traffic

(2) simple_talker-cmsg: a TSN test application which sends IEEE 1722 Class A traffic; it is used to test the IEEE 802.1Qav or Credit Based Shaper use case

(3) sample-app-taprio: a TSN test application which sends scheduled traffic; it is used to test the IEEE 802.1Qbv or Time Aware Shaper use case

=== Demos ===

* <big>IEEE 802.1Qav or Credit Based Shaper (CBS)</big>

This demo focuses on the use of the Credit Based Shaper (CBS) and the LaunchTime feature of the Intel® Ethernet Controller I210 to ensure bounded, low latency for time-sensitive streams.

It includes 2 scenarios: (1) CBS and LaunchTime are disabled; (2) both CBS and LaunchTime are enabled. The ptp4l daemon runs on both the Edge gateway and the Edge device to synchronize the PTP clocks based on the IEEE 802.1AS Generalized Precision Time Protocol (gPTP), with the Edge gateway serving as grandmaster clock and the Edge device as slave clock. The phc2sys daemon also runs on both devices to synchronize the system clock with the PTP clock. The simple_talker-cmsg application runs on the Edge device as the source of 8000 packets/s of SR Class A audio frames in IEEE 1722 format. The simple_listener application runs on the Edge gateway to receive the time-sensitive traffic. iperf3 runs on both devices to transfer best-effort traffic that stresses the communication path, and the tc utility is used to set up the cbs and etf (for the LaunchTime feature) qdisc capabilities.

(1) CBS and LaunchTime are disabled

[[File:Stx-tsn-demo-result1.png]]

In this case, both best-effort traffic (blue line) and time-sensitive traffic (IEEE 1722 audio frames, red line) go into the same transmit queue; the time-sensitive traffic has unbounded transmission latency and its transmission rate varies greatly (e.g. 7900~8100 packets/s). Without CBS or LaunchTime enabled, the network sees bursts of IEEE 1722 audio frames as driven by the simple_talker-cmsg application.

(2) Both CBS and LaunchTime enabled (mqprio with etf qdisc and per-packet TX time)

Enable CBS with the below commands:

 tc qdisc replace dev [iface] \
         parent root handle 100 mqprio num_tc 3 \
         map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
         queues 1@0 1@1 2@2 hw 0
 tc qdisc replace dev [iface] \
         parent 100:1 handle 200 cbs \
         idleslope 7808 \
         sendslope -992192 \
         hicredit 12 \
         locredit -97 \
         offload 1
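
For reference on the shaper parameters: tc-cbs takes idleslope and sendslope in kbit/s, and for a shaped queue sendslope = idleslope - port transmit rate. On the 1 Gb/s I210 link this gives 7808 - 1,000,000 = -992,192 kbit/s, matching the values above; the 7808 kbit/s idleslope is the bandwidth reserved for the time-sensitive queue, sized here for the 8000 packets/s SR Class A stream. The handle 200 on the cbs qdisc is what the LaunchTime command below attaches to via parent 200:1.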

Enable LaunchTime with the below command:

 tc qdisc replace dev [iface] \
       parent 200:1 \
       etf delta [DELTA_nsec] \
       clockid CLOCK_TAI \
       offload

[[File:Stx-tsn-demo-result2.png]]

In this case, after enabling the CBS and LaunchTime features, separate transmit queues are used for best-effort and time-sensitive traffic. The CBS capability keeps the time-sensitive traffic bounded to the sawtooth effect of credit-based shaping even on a heavily loaded transmission path, and the LaunchTime capability ensures time-deterministic transmission by setting the per-packet TX descriptor LaunchTime field. As a result, the SR Class A audio stream has constant transmission latency and a constant transmission rate of 8000 packets/s, independent of when interfering best-effort traffic enters the system.


* <big>IEEE 802.1Qbv or Time Aware Shaper (TAS)</big>

This demo focuses on the use of the Time Aware Shaper (TAS) and the LaunchTime feature of the Intel® Ethernet Controller I210 to ensure tightly bounded, low latency for periodic control applications.

It includes 3 scenarios: (1) both TAS and LaunchTime disabled; (2) TAS enabled; (3) both TAS and LaunchTime enabled. The ptp4l daemon runs on both the Edge gateway and the Edge device to synchronize the PTP clocks based on the IEEE 802.1AS Generalized Precision Time Protocol (gPTP), with the Edge gateway serving as grandmaster clock and the Edge device as slave clock. The phc2sys daemon also runs on both devices to synchronize the system clock with the PTP clock. The sample-app-taprio application runs on both devices to transfer scheduled traffic, iperf3 also runs on both devices to transfer best-effort traffic that stresses the communication path, and the tc utility is used to set up the mqprio, taprio and etf (for the LaunchTime feature) qdisc capabilities.

(1) Both TAS and LaunchTime disabled (use mqprio qdisc only)

Create the mqprio qdisc with the below command:

 tc qdisc add dev [iface] parent root mqprio num_tc 4 \
         map 3 3 3 0 3 1 3 2 3 3 3 3 3 3 3 3 \
         queues 1@0 1@1 1@2 1@3 \
         hw 0
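
To read this command: the 16 map values assign packet priorities 0-15 to traffic classes, so priority 3 maps to TC0, priority 5 to TC1 and priority 7 to TC2, with all other priorities falling into the best-effort class TC3; queues 1@0 1@1 1@2 1@3 then gives each traffic class a single hardware queue at offsets 0 through 3.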

[[File:Stx-tsn-demo-result3.png]]

In this case, the distribution of the inter-packet latency for both scheduled traffic classes (VLAN priority = 5 and 3) has a high sample count at 500 µs, which is the inter-packet cycle time used in this demo. However, the high sample counts observed outside the chosen cycle time indicate poor precision in hitting the expected 500 µs inter-packet cycle time.

(2) TAS enabled (use taprio qdisc only)

Enable TAS with the below command:

 tc -d qdisc replace dev [iface] parent root handle 100 taprio num_tc 4 \
         map 3 3 3 1 3 0 3 2 3 3 3 3 3 3 3 3 \
         queues 1@0 1@1 1@2 1@3 \
         base-time 1559471513000000000 \
         sched-entry S 08 100000 \
         sched-entry S 01 100000 \
         sched-entry S 02 100000 \
         sched-entry S 04 200000 \
         sched-entry S 08 100000 \
         sched-entry S 01 100000 \
         sched-entry S 02 100000 \
         sched-entry S 04 200000 \
         clockid CLOCK_TAI
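
To read the schedule: each sched-entry S <gate-mask> <interval-ns> opens the traffic-class gates set in the hexadecimal mask for the given interval in nanoseconds, with 01, 02, 04 and 08 opening TC0, TC1, TC2 and TC3 respectively. The eight entries sum to 1,000,000 ns, a 1 ms cycle repeating from base-time, in which each scheduled class gets two transmission windows per cycle, consistent with the 500 µs inter-packet cycle time measured in this demo.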

[[File:Stx-tsn-demo-result4.png]]

In this case, most of the samples happen at or close to 500 µs, and the sample count quickly drops to single digits further away from the 500 µs inter-packet cycle time. Compared to case (1), a majority of the scheduled traffic is received close to 500 µs, which shows that the taprio qdisc helps shape the transmission of scheduled traffic in the time domain.

(3) Both TAS and LaunchTime enabled (taprio with etf qdisc and per-packet TX time)

Enable LaunchTime with the below command:

 tc qdisc replace dev [iface] parent [parent]:1 etf \
         clockid CLOCK_TAI \
         delta [DELTA_nsec] \
         offload

[[File:Stx-tsn-demo-result5.png]]

In this case, the inter-packet latency distribution for both scheduled traffic classes narrows greatly compared to the previous cases. This result is consistent with the fact that the LaunchTime technology ensures scheduled traffic is pre-fetched ahead of time from system memory into the Ethernet MAC controller for transmission at the defined time, while the transmission gating of the taprio qdisc provides a transmission window for scheduled traffic that is protected from interfering best-effort traffic. As a result, combining these two technologies ensures that Ethernet frames for scheduled traffic are sent out in a protected transmission window at accurate times.


=== References ===

[1] TSN Reference Software for Linux: https://github.com/intel/iotg_tsn_ref_sw

[2] StarlingX Installation and Deployment Guides: https://docs.starlingx.io/deploy_install_guides/index.html
