Horizon Usability Testing

OpenStack User Research Overview
The following is a list of user research activities that have been conducted on behalf of the community. The sessions associated with each study are streamed to the community, and the results are available via the link at the end of each description. The studies are ordered with the most recent first.

A list of current efforts underway may be seen at Research Priorities for Newton.

Overview
Validate the usability of the Searchlight feature and determine whether it is an improvement over the current Horizon search.
 * The Horizon team is developing a plugin that integrates a search feature into the GUI.
 * The introduction of search into Horizon changes many of the ways in which users can interact with the information about their cloud, so we're interested in finding out which interactions work the best.
 * We're particularly interested in performance (how fast operators can execute their workflows) and in usability (how well the information is presented).

Study Notes
Link to the Horizon Search Features Validation Study Etherpad.

Results
Presentation on the Horizon/Searchlight Integration Results and Findings (YouTube recording).

Overview
The purpose of this study was to understand the complex nature of how operators diagnose and find solutions to the problems they face.

Study Notes
Link to the Information Needs of Operators Study Etherpad/Notes.

Results
Link to the recorded presentation of the findings on YouTube.

Main findings:
 * Overall, operators tend to follow a similar process for information gathering:
 * 1) Google or internal information search to isolate the problem
 * 2) Reach out to an external source for alternate ideas, usually another co-worker or an operator in the community (indirectly via IRC or mailing list)
 * 3) Comb through the code to check for more granular clues to pinpoint the issue


 * While participants acknowledged that total consolidation of all OpenStack information was out of scope, a common theme among ideal solutions was better organization of information.

Potential solutions (in no particular order):
 * Create a knowledge base of problems encountered and solutions proposed
 * Utilize a template to ensure a common structure for capturing environment info
 * Collect "best practices" for common tasks
 * Optimize search for Google results
 * (Smaller company) access to other problem-solving resources provided by (larger) companies
 * More detailed OpenStack logging
 * Having a greater number and/or better quality of operator-focused docs
 * The ability to add comments to specific documentation pages
 * Providing reference architectures as a learning tool


Overview
The OpenStack UX team conducted a series of interviews with Cloud Operators to identify the difficulties of quota management and scope solutions based on Operator feedback.

Study Notes
Link to the Quota Management Study Etherpad.

Results

 * This study summarizes results from a series of interviews intended to understand how operators manage quotas at scale as well as the pain points associated with that process.
 * The study was conducted by Danielle Mundle (IRC: uxdanielle) and included operators from CERN, Pacific Northwest National Laboratory, Workday, Intel, and Universidade Federal de Campina Grande, among others.
 * Quota Management Study Report

Overview
The purpose of this study was to identify difficulties novice Horizon users face when launching an instance.

Study Design

 * 17 participants performed 9 tasks individually.

Results
 
 * Link to the recorded presentation of the findings on YouTube.

Overview
The purpose of this effort was to gather data to help inform the design of a new end user dashboard for Horizon.

Study Design
Eight participants will be interviewed remotely.

Click here to view the study guide. (work in progress)

Results
This study will occur at the end of January.  

Overview
The purpose of this effort was to gather data to help inform the UX design of the new Ironic standalone.

HP, Red Hat, and Intel collaborated to execute this investigation.

Study Notes

 * Ironic Study Notes

Results
Data is currently being analyzed.

Overview
The purpose of this effort was to gather data to help better understand the attributes of OpenStack users who remain on Nova Networking rather than migrating to Neutron, and their primary reasons for doing so.

HP, Red Hat, and OpenStack Foundation collaborated to execute this investigation.

Study Notes

 * Preliminary Nova-Neutron cloud operator interviews findings (July-August 2015)
 * OpenStack Nova Network to Neutron Migration: Survey Results (1 Dec 2015)

Presentation
Click here to view the results of the study.   

Overview
The OpenStack UX project team will be meeting at IBM's Design Center to finalize a set of personas for the overall community.
 * October 2015 Persona Workshop Planning

Personas
Click here to view the latest OpenStack personas.   

Overview
The UX Team conducted an unmoderated usability study of Horizon in August 2015 to validate the proposed Launch Instance workflow. The intent is to feed the results of the study back to the community, make updates to the workflow, and move it out of beta. As part of this study, a screener was created and distributed using Survey Monkey to find participants. More information on the screener may be found at:
 * Horizon Launch Instance Validation Scenario and Tasks
 * Horizon Launch Instance Validation Issues


Overview
The UX Team conducted a usability study to evaluate a proposed menu concept in Horizon and recommend ways to improve it.

Why do another Horizon card sort?
The goal of this research was to validate the five-category model, as well as the category names, arrived at in the earlier open card sort of the top-of-tree Horizon items.
 * This data can then be used to help inform future IA decisions.

A moderated closed card sort was used in order to probe why participants were making the choices they were making, assess their interpretation of items, and identify any cards for which multiple categories were considered viable.

Sample
13 individuals participated in this study.
 * After the initial 3 participants, one of the category names was modified.
 * Therefore, data from 10 participants was included in the final analysis.

While we tried to recruit a range of participants, we focused on users with Horizon experience.
 * 11/13 had used Horizon to create an instance in the past three months.
 * 10 worked or studied in environments where cloud computing services were offered.
 * All had experience with at least 1 cloud service provider.
 * Participants had a range of roles, including software development, software architects, CS students, system engineers, and support.
 * Participants all received a $25 Amazon gift card as a thank you for their participation.

Test Materials
Eighteen cards were used in the sort.
 * These were the same cards used in the earlier, open sort.
 * These cards represented the current top-of-tree for Horizon, or were recommended by the Horizon PTL.
 * The cards were presented to participants as a term definition, followed by the term itself.

Five categories were used:
 * 1) Overview
 * 2) Compute
 * 3) Storage
 * 4) Networking
 * 5) Compute Services

The term "Compute Services" was changed to "Platform Services" based on findings from the 3 initial participants.

The Horizon Moderated Closed Card Sort Findings (April 2015) presentation can be viewed here.

Why bother with a card sort?
The goal of this research was to understand how users grouped the top-of-tree items in Horizon, as well as how they labeled these groupings. This data can then be used to help inform future IA decisions. An open card sort was used to explore the category types users generated, in addition to allowing them to create the groupings that were most meaningful to them.

Sample
The sample included 45 participants with recent (past 3 months) Horizon experience and 20 without recent experience.

Test Materials
Eighteen cards were used in the sort. The cards used in the study are current top-of-tree items for Horizon or were recommended by the Horizon PTL.
 * Initial pilot data indicated that the terms themselves were ambiguous, and not all participants reviewed the definitions.
 * Therefore, the cards were presented to participants as a term definition, followed by the term itself.

Click here to view the results of the Horizon card sort study.

Overview
The UX Team conducted a usability study of Horizon starting the week of February 24, 2014. The intent is to feed the results of the study back to the community and take actions to improve the Horizon user experience. As part of this study, a screener was created and distributed using Survey Monkey to find participants. More information on the screener may be found at:
 * Etherpad for Collaboration for the Horizon Usability Test Screener (DRAFT)
 * Horizon Usability Test Screener (Final)
 * Horizon Usability Test Tasks
 * Horizon Usability Test Results

Let us know if you, or anyone you know, would be interested in participating in the study. We are specifically looking for Cloud Operators, but participants do not need experience with OpenStack. If you or someone you know is interested in participating, please contact the OpenStack Personas mailing list.

Return to OpenStack UX Wiki