Interop Challenge

Status: Collecting workload scripts, milestone-1 (as of Aug 8, 2016)

Scope

The interop challenge was started in July 2016 to create a set of common workloads/tests to be executed across multiple OpenStack distributions and/or cloud deployment models. The participants in this challenge will work together to prove once and for all that OpenStack-Powered clouds are interoperable.

Original presentation used during team creation: Interoperability Challenge Kick-Off

Duration

The team will complete the Interop Challenge prior to the next summit, working together with partners in the OpenStack Foundation.

Meeting Information

Schedule: Wednesdays at 1400 UTC
IRC Channel (on freenode): #openstack-defcore
Logs from past meetings: http://eavesdrop.openstack.org/meetings/interop_challenge/

Next Meeting: Wednesday, September 7th, 2016 at 1400 UTC
Etherpad For Next Meeting: 2016-09-07

Agenda
* Review action items from previous meeting
** http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-31-14.02.html
** Test environment: send account information to Tong (litong01@us.ibm.com) if you want Tong to test scripts against your cloud
** ATT NFV script
* Update from meeting with OpenStack Foundation (Lauren, Jonathan, Mark, and hogepodge)
* Test runs and results
** Tong has run the two available workload tests against the IBM Blue Box cloud. Results have been sent out to the mailing list.
* Keynote session: looking for a couple more clouds to be used in the keynote session for a live demo.
** IBM Blue Box
* Open discussion

Previous Etherpads: 2016-08-31

Communication methods

  • Members will meet regularly using the OpenStack DefCore channel on IRC.
  • The team will communicate on the OpenStack DefCore mailing list and add the [interop] tag to the subject of messages related to the interop challenge.
  • The team may also have discussions in Gerrit on specific tools/workloads.

Milestones

Milestone | Goal                                                                 | References | Status
1         | Create interop challenge and team                                    |            | Completed
2         | Identify tools/scripts that will be used to validate interoperability |            | Completed
3         | Execute tests, resolve/document any issues, and share results       |            | In-Progress
4         | Create read-out for Interop Challenge                                |            | Pending
5         | Share findings at the OpenStack Summit in Barcelona                  |            | Pending

How to Join

If you are interested in joining the Interop Challenge, please join the OpenStack DefCore mailing list and send a message with the tag "[interop]" in the subject line. Please identify one business leader and one technical leader from your organization in the introductory email. Welcome aboard!

Scripts/Tools used for Interop Challenge

The interop challenge requires that we use common testing scripts and validation methods across multiple clouds. The team has agreed to post all scripts to the osops-tools-contrib repository since the templates/scripts used for the Interop Challenge are also useful outside of this context and could serve as examples on how to use various tools to deploy applications on OpenStack-Powered Clouds.

The current list of proposed tooling includes:

  • Terraform
  • OpenStack Heat
  • Ansible

The current list of workloads includes:

  • LAMP (Linux, Apache, MySQL, PHP) Stack (deployed with Ansible)
  • Docker Swarm (deployed with Terraform)
  • NFV (URL TBD)

How to Propose/Submit new tools/workloads

Any participating member of the Interop Challenge can submit additional scripts/workloads for review by the team. The script to leverage the new tool and/or deploy the new workload should be posted to the osops-tools-contrib repository. Information on how to contribute to OpenStack repositories can be found in the OpenStack Developer's Guide.

Once you have posted the code for review, please send an email to the OpenStack DefCore mailing list with the following subject: "[interop] New tool/workload proposal: brief description" and provide a link to the code review page for the change, along with an overview of the tool and a pointer to where people can find more information. The proposal will be reviewed by the team and, if necessary, added to the agenda for an upcoming meeting.
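
For reference, here is a minimal sketch of the standard OpenStack Gerrit submission flow; the branch name, file path, and commit message below are hypothetical examples, and the authoritative steps are in the OpenStack Developer's Guide:

   # Clone the repo and create a topic branch (names below are examples).
   git clone https://github.com/openstack/osops-tools-contrib
   cd osops-tools-contrib
   git checkout -b add-my-workload
   # ... add your tool/workload under the appropriate directory ...
   git add terraform/my-workload
   git commit -m "Add my-workload Terraform workload"
   # Push the change to Gerrit for review (requires git-review,
   # installable with: pip install git-review).
   git review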

Directory Structure

Repo used by the Interop-challenge: https://github.com/openstack/osops-tools-contrib

   Directory structure is proposed below:
   /heat  - use heat template to create workload
       lampstack
        dockerswarm-coreos
       ...
   /terraform - use terraform to create workload
       lampstack
        dockerswarm-coreos
       ...
   /xxx - use xxx to create workload
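
To make the layout concrete, here is a hedged sketch of how workloads in this structure might be launched. Each workload defines its own inputs, so the template file name and variable handling below are illustrative assumptions, not the documented interface; check each workload's directory for the real details:

   # Terraform workload (e.g. /terraform/dockerswarm-coreos); credentials
   # are picked up from the usual OS_* environment variables by the
   # Terraform OpenStack provider.
   cd terraform/dockerswarm-coreos
   terraform plan      # preview the resources that would be created
   terraform apply     # deploy the workload against your cloud

   # Heat workload (e.g. /heat/lampstack); the template file name here is
   # an assumption. Requires python-openstackclient with python-heatclient.
   openstack stack create -t heat/lampstack/lampstack.yaml my-lampstack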

Test Candidate Process

(Proposed; subject to change after 08-10-2016 meeting)

RefStack Testing and Results Upload

  • Information on how to run RefStack tests can be found at:
  https://github.com/openstack/refstack-client/blob/master/README.rst
  Note: slides 15, 16, and 17 in this RefStack Austin Summit presentation
  provide some information about customizing a tempest.conf file for RefStack testing.
  • Once tests are run, test results can be uploaded to the official RefStack server by following the instructions described in:
  https://github.com/openstack/refstack/blob/master/doc/source/uploading_private_results.rst
  • For the Interop Challenge, the DefCore team recommends running all API tests (not just the must-pass tests). The following command runs all RefStack API tests:
 ./refstack-client test -c <Path of the tempest configuration file to use> -v
  • For questions, please contact us in the #refstack or #openstack-defcore IRC channels, or send email to the OpenStack DefCore mailing list.
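
Putting the steps above together, here is a minimal end-to-end sketch. The results-file path is an assumption (use the JSON file your test run actually produces), and the two READMEs linked above remain the authoritative instructions:

   # Set up refstack-client (see its README for details).
   git clone https://github.com/openstack/refstack-client
   cd refstack-client
   ./setup_env

   # Run all API tests against your cloud with your customized tempest.conf.
   ./refstack-client test -c ~/tempest.conf -v

   # Upload the JSON results file to the official RefStack server; the file
   # name below is an example, and the uploading doc above describes how to
   # sign the upload with your private key.
   ./refstack-client upload .tempest/.testrepository/0.json \
       --url https://refstack.openstack.org/api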

Where/How to Share Test Results

Rather than collecting binary "pass/fail" results, one of our goals for the challenge is to start gathering some information about what makes a workload portable or not. Once you've run each of the workloads above, we ask that you copy/paste the following template into an email and send it to defcore-committee@lists.openstack.org with "[interop-challenge] Workload Results" in the subject line.


1.) Your name:  
2.) Your email: 
3.) Reporting on behalf of Company/Organization:   
4.) Name and version (if applicable) of the product you tested:
5.) Version of OpenStack the product uses: 
6.) Link to RefStack results for this product: 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully? 
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?
9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 
  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.
  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

Here's a fictional example of what the email template might look like when filled out:

1.) Your name: Jane Doe  
2.) Your email: jdoe@supercoolsoftware.com 
3.) Reporting on behalf of Company/Organization: XYZ, Inc.
4.) Name and version (if applicable) of the product you tested: SuperCool Private Cloud 4.5
5.) Version of OpenStack the product uses: Liberty
6.) Link to RefStack results for this product: https://refstack.openstack.org/#/results/fc80592b-4503-481c-8aa6-49d414961f2d 
7.) Workload 1: LAMP Stack with Ansible (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack)
  A.) Did the workload run successfully? No
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

    "Error in fetching the floating IP's: no floating IP addresses available"

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 

    SuperCoolCloud doesn't use floating IP addresses.  Instead, we recommend that cloud admins create a shared provider network with external routing connectivity.

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

    Not very.  Basically we just need to add a config variable that allows the user to specify whether the cloud uses floating IPs (and a small if block in a few places that says "if we're configured to not use floating IPs, assume it's ok to use the instance's fixed IP instead").

8.) Workload 2: Docker Swarm with Terraform (http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/dockerswarm-coreos)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc. 
 
  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A

9.) Workload 3: NFV (URL TBD)
  A.) Did the workload run successfully?  Yes
  B.) If not, did you encounter any end-user visible error messages?  Please copy/paste them here and provide any context you think would help us understand what happened. 

  N/A

  C.) Were you able to determine why the workload failed on this product?  If so, please describe.  Examples: the product is missing a feature that the workload assumes is available, the product limits an action by policy that the workload requires, the workload assumes a particular image type or processor architecture, etc.

  N/A

  D.) (optional) In your estimation, how hard would it be to modify this workload to get it running on this product?  Can you describe what would need to be done?

  N/A