Interop Challenge

Status: Collecting workload scripts, milestone-1 (as of Aug 8, 2016)

Scope

The interop challenge was started in July 2016 to create a set of common workloads/tests to be executed across multiple OpenStack distributions and/or cloud deployment models. The participants in this challenge will work together to prove once and for all that OpenStack-Powered clouds are interoperable.

Original presentation used during team creation: Interoperability Challenge Kick-Off

Duration

The team will complete the Interop Challenge prior to the next summit, working together with partners in the OpenStack Foundation.

Meeting Information

Schedule: Wednesdays at 1400 UTC
IRC Channel (on freenode): #openstack-meeting-cp
Logs from past meetings: http://eavesdrop.openstack.org/meetings/interop_challenge/

Next Meeting: Wednesday, October 12th, 2016 at 1400 UTC
Etherpad For Next Meeting: 2016-10-12

Agenda
* see the etherpad 

Previous Etherpads: 2016-10-05

Communication methods

  • Members will meet regularly using the OpenStack DefCore channel on IRC
  • The team will communicate on the OpenStack DefCore mailing list and add the [interop] tag to the subject of messages related to the interop challenge.
  • The team may also have discussions in Gerrit on specific tools/workloads

Milestones

Milestone | Goal | References | Status
1 | Create interop challenge and team | | Completed
2 | Identify tools/scripts that will be used to validate interoperability | | Completed
3 | Execute tests, resolve/document any issues, and share results | | In-Progress
4 | Create read-out for Interop Challenge | | Pending
5 | Share findings at the OpenStack Summit in Barcelona | | Pending

How to Join

If you are interested in joining the Interop Challenge, please join the OpenStack DefCore mailing list and send a message with the tag "[interop]" in the subject line. Please identify one business and one technical leader from your organization in the introductory email. Welcome aboard!

Current Participants (Alphabetical Order)

Participant | OpenStack version | References | Status | Tested by | Keynote 2 Demo Owner
AT&T | | | | | Elise Eiden
Canonical | Mitaka and Newton | | | | Mark Baker
Cisco | Liberty | | Success | Rohit Agarwalla | Rohit Agarwalla
Fujitsu | Kilo | Tested with Ansible | Success | Dror Gensler, Daisuke Butsuda | Dror Gensler
Hitachi | | | | |
HPE | Mitaka | RefStack Pass. Tested with Ansible | Success | Ghe Rivero |
Huawei | Juno | DockerSwarm tested with Terraform, LAMPStack tested with Ansible | Success | Jichun Liu (Huawei) | Zhenyu Zheng
DT OTC (public cloud) | | | | | Daniela Ebert
IBM | Liberty and Mitaka | | Success | Tong Li (IBM) | Tong Li (IBM)
Linaro | Newton on AArch64 | | Success | Gema Gomez (Linaro) | Gema Gomez
Mirantis | Mitaka | | | |
NetApp | | | | |
Osic (Intel+Rackspace) | Liberty | RefStack Pass. Tested with Ansible | Success | Tong Li (IBM), Luz Cazares (Intel) | Luz Cazares
OVH | Juno/Kilo | | Success | Tong Li (IBM) | pilgrimstack
Rackspace | Liberty | RefStack Pass. Tested with Ansible | Success | Tong Li (IBM), Luz Cazares (Intel) | Egle Sigler
Red Hat | Mitaka | RefStack Pass - Tested with Ansible | Success | Victoria Martinez de la Cruz (Red Hat), Daniel Mellado (Red Hat) | Daniel Mellado
SUSE | Liberty and Mitaka | Tested with Terraform | Success | contact Roman Arcea (SUSE) | Pete Chadwick (placeholder)
VMware | Mitaka | | Success | Xiangfei Zhu/Mark Voelker | Mark Voelker
DreamHost | Mitaka | | Pass LAMPstack Ansible | | Stefano Maffuli

Scripts/Tools used for Interop Challenge

The interop challenge requires that we use common testing scripts and validation methods across multiple clouds. The team has agreed to post all scripts to the osops-tools-contrib repository, since the templates/scripts used for the Interop Challenge are also useful outside of this context and could serve as examples of how to use various tools to deploy applications on OpenStack-Powered Clouds.

The current list of proposed tooling includes:

  • Ansible
  • Terraform
  • OpenStack Heat

The current list of workloads includes:

  • LAMP (Linux, Apache, MySQL, PHP) Stack
  • Dockerswarm-coreos
  • NFV
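
To make the tool/workload pairing concrete, here is a rough sketch of how each tool is typically invoked against one of these workloads. Directory names follow the repository layout described below, but the playbook, template, and stack names are illustrative placeholders rather than a verbatim recipe; each example assumes you start at the root of the osops-tools-contrib repository:

    # LAMP stack via Ansible (playbook name is a placeholder; check the repo)
    cd ansible/lampstack
    ansible-playbook site.yml

    # Docker Swarm on CoreOS via Terraform
    cd terraform/dockerswarm-coreos
    terraform plan    # preview the resources Terraform will create
    terraform apply   # create them on the target cloud

    # A Heat-based workload via the OpenStack CLI (template/stack names are placeholders)
    openstack stack create -t lampstack.yaml lamp-demo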

How to Propose/Submit new tools/workloads

Any participating member of the Interop Challenge can submit additional scripts/workloads for review by the team. The script to leverage the new tool and/or deploy the new workload should be posted to the osops-tools-contrib repository. Information on how to contribute to OpenStack repositories can be found in the OpenStack Developer's Guide.
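
As a rough sketch, the usual Gerrit flow looks like this (assuming you already have an OpenStack Gerrit account and the git-review tool installed; branch and file names below are placeholders):

    # Clone the repository and start a topic branch
    git clone https://github.com/openstack/osops-tools-contrib
    cd osops-tools-contrib
    git checkout -b add-my-workload   # placeholder branch name

    # Add your scripts under the matching tool directory, then commit
    git add terraform/my-workload     # placeholder path
    git commit

    # Post the change to Gerrit for review
    git review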

Once you have posted the code for review, please send an email to the OpenStack DefCore mailing list with the following subject: "[interop] New tool/workload proposal: brief description" and provide a link to the code review page for the change along with an overview of the tool or where people can find more information. The proposal will be reviewed by the team and, if necessary, added to the agenda for an upcoming meeting.

Directory Structure

Repo used by the Interop-challenge: https://github.com/openstack/osops-tools-contrib

   Directory structure is proposed below:
   /heat  - use heat template to create workload
       lampstack
        dockerswarm-coreos
       ...
   /terraform - use terraform to create workload
       lampstack
        dockerswarm-coreos
       ...
   /xxx - use xxx to create workload

Test Candidate Process

(Proposed; subject to change after the August 10, 2016 meeting)

For test assistance, or if you would like Tong Li to run the tests on your clouds, please contact Tong Li (IRC: tongli, email: litong01@us.ibm.com).

RefStack Testing and Results Upload

Information on how to run RefStack tests can be found at: https://github.com/openstack/refstack-client/blob/master/README.rst

  • Slides 15, 16, and 17 in this RefStack Austin Summit presentation provide some information about customizing a tempest.conf file for RefStack testing.


Once tests are run, test results can be uploaded to the official RefStack server by following the instructions described in https://github.com/openstack/refstack/blob/master/doc/source/uploading_private_results.rst.

  • The RefStack team highly recommends uploading test results with signatures rather than anonymously. By default, privately uploaded data isn't shared, but authorized users can choose to share their results with the community anonymously.
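
As a hedged sketch of a signed upload (the authoritative flags are in the upload instructions linked above; the key path below is a placeholder):

./refstack-client upload <Path of results file> -i ~/.ssh/id_rsa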


For the Interop Challenge, the DefCore team recommends running the complete API tests (not just the must-pass tests). The following command runs the tests:

./refstack-client test -c <Path of the tempest configuration file to use> -v
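
For example, with a hypothetical configuration path:

./refstack-client test -c ~/refstack/tempest.conf -v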


For questions, please contact us in the IRC channels #refstack or #openstack-defcore, or send email to the OpenStack DefCore mailing list.

Where/How to Share Test Results

Rather than collecting binary "pass/fail" results, one of our goals for the challenge is to start gathering some information about what makes a workload portable or not. Once you've run each of the workloads above, we ask that you copy/paste the following template into an email and send it to defcore-committee@lists.openstack.org with "[interop-challenge] Workload Results" in the subject line.


1.) Your name:  
2.) Your email: 
3.) Reporting on behalf of Company/Organization:   
4.) Name and version (if applicable) of the product you tested:
5.) Version of OpenStack the product uses: 
6.) Link to RefStack results for this product: 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

Here's a fictional example of what the email template might look like when filled out:

1.) Your name: Jane Doe  
2.) Your email: jdoe@supercoolsoftware.com 
3.) Reporting on behalf of Company/Organization: XYZ, Inc.
4.) Name and version (if applicable) of the product you tested: SuperCool Private Cloud 4.5
5.) Version of OpenStack the product uses: Liberty
6.) Link to RefStack results for this product: https://refstack.openstack.org/#/results/fc80592b-4503-481c-8aa6-49d414961f2d 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? No
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

    "Error in fetching the floating IP's: no floating IP addresses available"

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 

    SuperCoolCloud doesn't use floating IP addresses.  Instead, we recommend that cloud admins create a shared provider network with external routing connectivity.

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

    Not very.  Basically we just need to add a config variable that allows the user to specify whether the cloud uses floating IP's (and a small if statement in a few places that says "if we're configured not to use floating IP's, assume it's ok to use the instance's fixed IP instead").

8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
 
  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A

9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.

  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A