Interop Challenge

"Status: Making decisions on workloads second round Interop Challenge (as of Dec 7, 2016)

Scope

(Image: Barcelona Summit Keynote)
The Interop Challenge was started in July 2016 to create a set of common workloads/tests to be executed across multiple OpenStack distributions and/or cloud deployment models. The participants in this challenge will work together to prove once and for all that OpenStack-Powered clouds are interoperable.

Original presentation used during team creation: Interoperability Challenge Kick-Off

OpenStack Barcelona Summit Interoperability Challenge: OpenStack Keynote Video


Project repository:
https://github.com/openstack/interop-workloads
https://git.openstack.org/cgit/openstack/interop-workloads

Project launch pad:
https://launchpad.net/interop-workloads

Bug tracking:
https://bugs.launchpad.net/interop-workloads

Duration

The team, working together with partners in the OpenStack Foundation, completed the Interop Challenge prior to the next summit; the OpenStack Summit Barcelona marked the completion of this phase. Please see pictures and videos of the effort via the links in the Scope section.
The expanded team is now working on the second round of the challenge to provide additional workloads that exercise more OpenStack functionality. These workloads will be showcased at the Boston Summit in the spring of 2017.

Meeting Information

Schedule: Wednesdays at 1400 UTC
IRC Channel (on freenode): #openstack-meeting-5
Logs from past meetings: http://eavesdrop.openstack.org/meetings/interop_challenge/

Next Meeting: Wednesday, May 3rd, 2017 at 1400 UTC
Etherpad For Next Meeting: 2017-05-03

Agenda
* see the etherpad 

Previous Etherpads: 2017-04-26

China Chapter Meeting Information

Schedule: Wednesdays at 0130 UTC
IRC Channel (on freenode): #openstack-meeting-5
Logs from past meetings: http://eavesdrop.openstack.org/meetings/interop_challenge/

Next Meeting: Wednesday, May 3rd, 2017 at 0130 UTC
Etherpad For Next Meeting: 2017-05-03

Agenda
* see the etherpad

Previous Etherpads: 2017-04-05

Communication methods

  • Members will meet regularly using the OpenStack Interop Working Group channel (#openstack-meeting-5) on IRC.
  • The team will communicate on the OpenStack Interop Working Group Mailing List and add the [interop] tag to the subject of messages related to the interop challenge.
  • The team may also have discussions in Gerrit on specific tools/workloads

Milestones

Milestone | Goal | References | Status
1 | Create interop challenge and team | | Completed
2 | Identify tools/scripts that will be used to validate interoperability | | Completed
3 | Execute tests, resolve/document any issues, and share results | | Completed
4 | Create read-out for Interop Challenge | | Completed
5 | Share findings at the OpenStack Summit in Barcelona | | Completed
6 | Define new workloads for the Boston summit (k8s and NFV) | | In progress

Boston Summit On-Stage Keynote K8s Demo Committed Parties

Up to 16 parties total.

All participants will have to make sure that the k8s workload runs successfully on their public or private cloud (a rough smoke test is sketched below).
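As an informal illustration (not an official acceptance test), a basic health check of a Kubernetes cluster deployed on an OpenStack cloud might look like the sketch below; it assumes kubectl is already configured to talk to the cluster:

   # Hedged sketch: basic health check of a k8s cluster on an OpenStack cloud.
   # Assumes kubectl already points at the cluster's kubeconfig.
   kubectl cluster-info        # the API server and core services should respond
   kubectl get nodes           # every node should report a Ready status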

Current Participants

Participant | OpenStack version | References | Status | Tested by | Keynote Demo Owner
IBM | Mitaka & Newton | | Passed | Tong Li (IBM) | Tong Li (IBM)
VMware | Mitaka | | Passed (both standalone & joined global cluster) | Mark Voelker | Mark Voelker
Huawei | Mitaka | | Passed | Jason Shi | Jason Shi
ZTE | Mitaka | | Passed | Yumeng Bao | Yumeng Bao
SUSE | Newton | | Passed | Roman Arcea | Roman Arcea
EasyStack | Mitaka | | Passed | Wei Liu | Wei Liu
T2Cloud | Newton | | Passed | Hanchen Lin | Hanchen Lin
Red Hat | Newton | | Passed (both standalone & joined global cluster) | Daniel Mellado | Daniel Mellado/Victoria Martinez de la Cruz
Rackspace | Newton | | | | Egle Sigler
Canonical | Ocata | | Passed | Ryan Beisner, Andrew McLeod | Ryan Beisner
VEXXHOST | Newton | | Passed | Mohammed Naser | Mohammed Naser
Deutsche Telekom | Mitaka | | Works :-) | Daniela Ebert | Daniela Ebert/Kurt Garloff
Platform9 | Newton | | | | Madhura Maskasky
Wind River | Mitaka | | Passed | Greg Waines | Brent Rowsell
NetApp | Mitaka | | Passed | Sumit Kumar | Sumit Kumar

Scripts/Tools used for Interop Challenge

The interop challenge requires that we use common testing scripts and validation methods across multiple clouds. The team has agreed to post all scripts to the osops-tools-contrib repository since the templates/scripts used for the Interop Challenge are also useful outside of this context and could serve as examples on how to use various tools to deploy applications on OpenStack-Powered Clouds.

The current list of proposed tooling includes:

  • Ansible
  • Terraform
  • OpenStack Heat

The current list of workloads includes:

  • LAMP (Linux, Apache, MySQL, PHP) Stack
  • Dockerswarm-coreos
  • NFV
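
For orientation, the sketch below shows how each of these tools is typically invoked against an OpenStack cloud. The file names (openrc.sh, site.yml, workload.yaml) are illustrative placeholders, not the exact entry points in the repository:

   # Hedged sketch: typical invocations of the three tools; file names
   # are placeholders, and each tool is normally run from its own directory.
   source openrc.sh             # load OpenStack credentials into the environment

   # Ansible: run the playbook that provisions a workload.
   ansible-playbook site.yml

   # Terraform: initialize the working directory, then apply the plan.
   terraform init
   terraform apply

   # Heat: create a stack from a template.
   openstack stack create -t workload.yaml my-workload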

How to Propose/Submit new tools/workloads

Any participating member of the Interop Challenge can submit additional scripts/workloads for review by the team. The script to leverage the new tool and/or deploy the new workload should be posted to the osops-tools-contrib repository. Information on how to contribute to OpenStack repositories can be found in the OpenStack Developer's Guide.
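
For reference, a typical Gerrit submission follows the standard OpenStack workflow, roughly as sketched below; the branch name and commit message are examples only, and the Developer's Guide remains the authoritative reference:

   # Sketch of the standard OpenStack Gerrit workflow; requires the
   # git-review tool to be installed.
   git clone https://github.com/openstack/osops-tools-contrib
   cd osops-tools-contrib
   git checkout -b add-my-workload          # example branch name
   # ...add your tool/workload under the appropriate directory...
   git add .
   git commit -m "Add my-workload example"  # example commit message
   git review                               # push the change to Gerrit for review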

Once you have posted the code for review, please send an email to the OpenStack DefCore mailing list with the following subject: "[interop] New tool/workload proposal: brief description" and provide a link to the code review page for the change along with an overview of the tool or where people can find more information. The proposal will be reviewed by the team and, if necessary, added to the agenda for an upcoming meeting.

Doodle Poll Results

A Doodle poll was conducted in December 2016; it opened on 12/01/2016 and closed on 12/14/2016. The poll was held to decide which workloads the community would work on in the next Interop Challenge. The decision was made to develop Kubernetes and NFV workloads. Here are the poll results:
(Image: Doodle poll results)

Directory Structure

Repo used by the Interop-challenge: https://github.com/openstack/osops-tools-contrib

   The proposed directory structure:
   /heat  - use heat template to create workload
       lampstack
       dockerswarm-coreos
       ...
   /terraform - use terraform to create workload
       lampstack
       dockerswarm-coreos
       ...
   /xxx - use xxx to create workload
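
Putting the structure together, a run of one of the workloads might look roughly like the sketch below; the exact entry point varies per tool, so check each workload's README first (the template name here is illustrative):

   # Hedged sketch: check out the repo and launch the Heat LAMP stack workload.
   git clone https://github.com/openstack/osops-tools-contrib
   cd osops-tools-contrib/heat/lampstack
   source ~/openrc.sh                                   # load cloud credentials
   openstack stack create -t lampstack.yaml lamp-demo   # template name is illustrative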

Test Candidate Process

(Proposed; subject to change after the 08-10-2016 meeting)

For test assistance, or if you would like Tong Li to run the tests on your clouds, please contact Tong Li (IRC: tongli, email: litong01@us.ibm.com).

RefStack Testing and Results Upload

Information on how to run RefStack tests can be found at: https://github.com/openstack/refstack-client/blob/master/README.rst

  • Slides 15-17 in this RefStack Austin Summit presentation provide some information about customizing a tempest.conf file for RefStack testing.
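
As a quick orientation, a minimal refstack-client setup might look like the sketch below; the setup_env script and .venv path reflect the README at the time of writing, so verify against the current instructions:

   # Hedged sketch: set up refstack-client per its README.
   git clone https://github.com/openstack/refstack-client
   cd refstack-client
   ./setup_env                    # creates a virtualenv and installs tempest
   source .venv/bin/activate      # activate the environment before testing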


Once tests are run, test results can be uploaded to the official RefStack server by following the instructions described in https://github.com/openstack/refstack/blob/master/doc/source/uploading_private_results.rst.

  • The RefStack team highly recommends uploading test results with signatures rather than anonymously. By default, privately uploaded data isn't shared, but authorized users can decide to share their results with the community anonymously.
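
A signed upload might look roughly like the following sketch; the flags are taken from the upload instructions linked above, and both the results path and the server URL are placeholders to be filled in from those instructions:

   # Hedged sketch: upload results to the RefStack server, signed with a
   # private key (recommended over anonymous upload).
   ./refstack-client upload <Path of the results file> \
       --url <RefStack API server URL> \
       -i ~/.ssh/id_rsa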


For the Interop Challenge, the DefCore team recommends running the complete API test suite (not just the must-pass tests). The following command runs the tests:

./refstack-client test -c <Path of the tempest configuration file to use> -v


For questions, please contact us in the IRC channels #refstack or #openstack-defcore, or send email to the OpenStack DefCore mailing list.

Where/How to Share Test Results

Rather than collecting binary "pass/fail" results, one of our goals for the challenge is to start gathering some information about what makes a workload portable or not. Once you've run each of the workloads above, we ask that you copy/paste the following template into an email and send it to defcore-committee@lists.openstack.org with "[interop-challenge] Workload Results" in the subject line.


1.) Your name:  
2.) Your email: 
3.) Reporting on behalf of Company/Organization:   
4.) Name and version (if applicable) of the product you tested:
5.) Version of OpenStack the product uses: 
6.) Link to RefStack results for this product: 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

Here's a fictional example of what the email template might look like when filled out:

1.) Your name: Jane Doe  
2.) Your email: jdoe@supercoolsoftware.com 
3.) Reporting on behalf of Company/Organization: XYZ, Inc.
4.) Name and version (if applicable) of the product you tested: SuperCool Private Cloud 4.5
5.) Version of OpenStack the product uses: Liberty
6.) Link to RefStack results for this product: https://refstack.openstack.org/#/results/fc80592b-4503-481c-8aa6-49d414961f2d 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? No
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

    "Error in fetching the floating IP's: no floating IP addresses available"

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 

    SuperCoolCloud doesn't use floating IP addresses.  Instead, we recommend that cloud admins create a shared provider network with external routing connectivity.

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

    Not very.  Basically we just need to add a config variable that allows the user to specify whether the cloud uses floating IP's (and a small conditional in a few places that says "if we're configured to not use floating IP's, assume it's ok to use the instance's fixed IP instead").

8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
 
  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A

9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.

  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A