
Horizon Usability Testing


Overview

The following is a list of user research activities that have been conducted on behalf of the community. The sessions associated with each study are streamed to the community and the results are available by clicking the link at the end of each description.

The studies are ordered with the most recent at the top.

A list of current efforts underway can be found at Research Priorities for Newton.

June-August 2016 Information Needs of Operators Study

Overview

The purpose of this study was to understand the complex nature of how operators diagnose and find solutions to the problems they face.

Study Notes

Link to the Information Needs of Operators Study Etherpad/Notes.

Results

Link to the recorded presentation of the findings on YouTube.

Main findings:

  • Overall, operators tend to follow a similar process for information gathering:
1) Google or internal information search to isolate the problem
2) Reach out to an external source for alternate ideas, usually another co-worker or an operator in the community (indirectly via IRC or mailing list)
3) Comb through the code to check for more granular clues to pinpoint the issue
  • While participants acknowledged that total consolidation of all OpenStack information was out of scope, a common theme among ideal solutions was better organization of information.

Potential solutions (in no particular order):

* Create a knowledge base of problems encountered and solutions proposed
* Utilize a template to ensure a common structure for capturing environment info
* Collect "best practices" for common tasks
* Optimize search for Google results
* Give smaller companies access to problem-solving resources provided by larger companies
* Make OpenStack logging more detailed
* Increase the number and quality of operator-focused docs
* Allow comments to be added to specific documentation pages
* Provide reference architectures as a learning tool

January 2016 Horizon User Dashboard Needs-Finding Study

Overview

The purpose of this effort is to gather data to help inform the design of a new end user dashboard for Horizon. The study will take place at the end of January.

Study Design

Eight participants will be interviewed remotely.

Click here to view the study guide. (work in progress)

Results

This study will occur at the end of January.

January 2016 Ironic Needs-Finding Study

Overview

The purpose of this effort was to gather data to help inform the UX design of the new Ironic standalone.

HP, Red Hat, and Intel collaborated to execute this investigation.

Results

Data is currently being analyzed.

December 2015 Nova Networks to Neutron Migration Study

Overview

The purpose of this effort was to gather data to help better understand the attributes of OpenStack users who remain on Nova Networking rather than migrating to Neutron, and their primary reasons for doing so.

HP, Red Hat, and OpenStack Foundation collaborated to execute this investigation.

Study Notes

Presentation

Click here to view the results of the study.


October 2015 Persona Workshop

Overview

The OpenStack UX project team met at IBM's Design Center to finalize a set of personas for the overall community.

Personas

Click here to view the latest OpenStack personas.


August 2015 Horizon Launch Instance Usability

Overview

The UX Team conducted an unmoderated usability study of Horizon in August 2015 to validate the proposed Launch Instance workflow. The intent is to feed the results of the study back to the community, make updates to the workflow, and move it out of beta.
To find participants, a screener was created and distributed using SurveyMonkey. More information on the screener may be found at:



May 2015 Menu Concept Usability

Overview

The UX Team conducted a usability study to evaluate a proposed menu concept in Horizon and recommend ways to improve it.


April 2015 Horizon Card Sort Validation

Why do another Horizon card sort?

The goal of this research was to validate the five-category model, as well as the category names, arrived at in the earlier open card sort of top-of-tree Horizon items.

  • This data can then be used to help inform future IA decisions.


A moderated closed card sort was used so that the moderator could probe why participants made the choices they did, assess their interpretation of items, and identify any cards for which multiple categories were considered viable.

Sample

13 individuals participated in this study.

  • After the initial 3 participants, one of the category names was modified.
  • Therefore, data from 10 participants was included in the final analysis.


While we tried to recruit a range of participants, we focused on users with Horizon experience.

  • 11/13 had used Horizon to create an instance in the past three months.
  • 10 worked or studied in environments where cloud computing services were offered.
  • All had experience with at least 1 cloud service provider.
  • Participants had a range of roles, including software developers, software architects, CS students, system engineers, and support staff.
  • Participants all received a $25 Amazon gift card as a thank you for their participation.

Test Materials

Eighteen cards were used in the sort.

  • These were the same cards used in the earlier, open sort.
  • These cards represented the current top-of-tree for Horizon, or were recommended by the Horizon PTL.
  • As in the earlier sort, the cards were presented to participants as a term definition, followed by the term itself.

Five categories were used:

  1. Overview
  2. Compute
  3. Storage
  4. Networking
  5. Compute Services

The term "Compute Services" was changed to “Platform Services” based on finding from the 3 initial participants.
The Horizon Moderated Closed Card Sort Findings (April 2015) presentation can be viewed here


December 2014 Horizon Card Sort

Why bother with a card sort?

The goal of this research was to understand how users grouped the top of the tree items in Horizon, as well as how they labeled these groupings. This data can then be used to help inform future IA decisions. An open card sort was used to explore the category types users generated, in addition to allowing them to create the groupings that were most meaningful to them.

Sample

The sample included 45 participants with recent (past 3 months) Horizon experience and 20 without recent experience.

Test Materials

Eighteen cards were used in the sort. The cards used in the study were the current top-of-tree items for Horizon, or items recommended by the Horizon PTL.

  • Initial pilot data indicated that the terms themselves were ambiguous, and not all participants reviewed the definitions.
  • Therefore, the cards were presented to participants as a term definition, followed by the term itself.


Click here to view the results of the Horizon card sort study.


February 2014 Horizon Usability Test

Overview

The UX Team conducted a usability study of Horizon starting the week of February 24, 2014. The intent is to feed the results of the study back to the community and take actions to improve the Horizon user experience.
To find participants, a screener was created and distributed using SurveyMonkey. More information on the screener may be found at:


We are specifically looking for Cloud Operators, but participants do not need to have experience with OpenStack. If you, or someone you know, would be interested in participating in the study, please contact the OpenStack Personas mailing list.



Return to OpenStack UX Wiki