This page summarizes the discussions from the Cinder-related Forum sessions at the OpenStack Summit in Vancouver, May 21-24, 2018.
The full list of Etherpads for Forum sessions may be seen here.
Monday May 21, 2018
Standalone Cinder Introduction
There was a lot of interest in this topic and the session was well attended. John Griffith did a great job of providing an overview of what Standalone Cinder is, how it is being used, and how it might be used in the future.
- What is Standalone Cinder for?
- It is a way for users to take the robust technology that is in Cinder and use it to support other storage solutions.
- How is it different from the CSI plugin for Cinder?
- Unlike the plugin, it doesn't require the whole OpenStack infrastructure to be installed.
- Can be used with no-auth to function without Keystone.
- All management functionality between Cinder and Standalone Cinder is the same.
- Cinder is modular. Whatever additional pieces that a person may need can be plugged in.
- Standalone Cinder currently has a lot more functionality than CSI.
- Challenges going forward are with how to handle attachment/detachment in standalone environments.
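To make the no-auth point concrete, below is a hypothetical minimal cinder.conf for a standalone deployment. The backend name and values are illustrative (not from the session); only `auth_strategy = noauth` is the specific option discussed.

```ini
# Illustrative standalone configuration; backend details are made up.
[DEFAULT]
# Run without Keystone, as mentioned in the session.
auth_strategy = noauth
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_helper = lioadm
```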
YouTube Link: https://www.youtube.com/watch?v=jh9HSp0FURs
Planning to use Placement in Cinder
A placement service has been under development for quite some time now. The goal is to be able to provide better control over where compute, networking and storage resources are placed in the user's environment. This is a concept that has not yet been adopted by Cinder. The goal of the discussion was to make the Cinder team aware of what is available and to start discussion on how we could/could not use the placement service.
- The Cinder team shared that our current scheduler design keeps resource status in memory:
- As a result we can only achieve an HA environment with an active/passive scheduler configuration.
- The design is racy and has issues in HA environments.
- Concerns were raised about whether the placement service could be deployed as an HA Active/Active service.
- CERN is currently running 20 placement instances and seeing no issues.
- All state is kept in the database for the placement service.
- Locking is handled through SQL and should be robust.
- Potentially resolves the other HA Active/Active issues we have been looking at in Cinder.
- The negative:
- Makes Cinder dependent on another service.
- This, obviously, impacts Standalone Cinder.
- Take the discussion back to the team and highlight the positives/negatives.
- Decide if we want to go forward using the placement service.
- Determine how we will handle the impact on Standalone Cinder if we do choose to use it.
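As a sketch of what "all state is kept in the database" looks like in practice, the snippet below builds the kind of JSON body a Cinder backend could PUT to the placement API (`PUT /resource_providers/{uuid}/inventories`) to report storage capacity. Cinder does not do this today; the function name, provider numbers, and the choice of `DISK_GB` for block storage are assumptions for illustration.

```python
def build_inventory(total_gb, reserved_gb, generation):
    """Build a placement inventory update for a hypothetical storage backend."""
    return {
        # Placement rejects updates made against a stale generation,
        # which is how it avoids the in-memory races described above.
        "resource_provider_generation": generation,
        "inventories": {
            "DISK_GB": {
                "total": total_gb,
                "reserved": reserved_gb,
                "allocation_ratio": 1.0,
            }
        },
    }

# Example: a backend with 10 TiB total, 512 GiB held back.
body = build_inventory(total_gb=10240, reserved_gb=512, generation=3)
```

Because every scheduler instance reads and writes this shared, generation-guarded state instead of its own in-memory view, many instances (as at CERN) can run Active/Active.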
YouTube Link: N/A (wasn't able to get a recording of this session)
Tuesday May 22, 2018
Cinder High Availability (HA) Discussion
There was notable interest in this Forum session as well. People want Active/Active HA Cinder support. They feel it is important to get it working if that can be done without making things more complex. People have been designing their own workarounds to try to make this work, so it seems like we should invest some time into getting something officially supported in place.
- Why do people want this?
- They want HA access to their volume processes but do not want to have to deal with using tools like Pacemaker.
- Interesting secondary item to come out of this:
- People are using multiple Cinder volume processes against one backend to load balance and keep individual processes from getting overwhelmed.
- Sounds like this works, but it is something we may want to look into further in the future.
- No backends currently support this.
- Need documentation
- Would be nice to have some best practices coming from those trying this out.
- There were questions about the backup service
- Doesn't seem like this really needs to be HA.
- Main concern was with consistency of state after a failure which appears to be accounted for.
- Improve documentation
- At least create some stub documentation.
- Use planet.openstack.org to aggregate existing blogs.
- Start participating in the Self Healing SIG as they are covering a number of these topics.
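For reference, the in-progress Active/Active support is driven by the `cluster` option in cinder.conf; the snippet below is a hypothetical example (the cluster and host names are made up), not a configuration discussed in the session.

```ini
[DEFAULT]
# Same value on every cinder-volume node sharing the backend;
# services with a matching cluster name form one Active/Active group.
cluster = mycluster
# Unique per node; the cluster name is what is shared.
host = node1
```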
YouTube Link: https://www.youtube.com/watch?v=feXaheKV8i4
Multi-attach Introduction and Future Direction
Operators were very interested in this discussion. The main reasons for interest were shared read-only volumes that could be used by Kubernetes and raw block access for HA databases. It was clear that the time and effort put into multi-attach thus far was not wasted. Everyone seemed pleased with the direction we had taken and the progress made.
- Team shared what existed for support including how to use multi-attach support.
- What's next?
- PowerVM support in Stein hopefully.
- Add read-only and read/write attachment modes
- Add support for creating multiple servers attached to the same volume in a single request.
- Operators would like to see support for more drivers.
- There is interest in getting support in the RBD driver as well.
- Get the read-only or read/write changes in place.
- Determine if we will be able to get RBD support in the future.
- Get Tempest tests in place that better exercise this functionality.
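For operators wanting to try the existing support, the flow is roughly the following (the type, volume, and server names here are made up for illustration): multi-attach is enabled through a volume-type property, and the volume is then attached to more than one server.

```shell
# Create a volume type that allows multi-attach.
openstack volume type create --property multiattach="<is> True" multiattach
# Create a volume of that type, then attach it to two servers.
openstack volume create --size 10 --type multiattach shared-vol
openstack server add volume server-a shared-vol
openstack server add volume server-b shared-vol
```

Note that today both attachments are read/write; the read-only attachment mode mentioned above is still future work.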
YouTube Link: https://www.youtube.com/watch?v=XP_aQsUbgrI&t=4s
Wednesday May 23, 2018
Cinder's Documentation Discussion
This session was more lightly attended, but it was still very productive as a small, focused work session. We had a documentation specialist from Red Hat in attendance who I think will be a good contact for future work.
- We need to start taking documentation for Standalone Cinder into account.
- Perhaps a decision tree as to whether you want a full install or standalone?
- Don't have much documentation for Standalone Cinder, brick-cinderclient, etc.
- Better documentation on the different ways to install, with pointers to the associated documentation, would be helpful.
- We don't have good documentation on how to work with the config file. Something we should look into further.
- How many vendors have good pointers to their documentation?
- We need a way to organize this work.
- Do we have parity between the documentation and what drivers are actually still in tree?
- There might be benefit to creating a basic troubleshooting guide.
- We have a good bit of this information from past Summit presentations. We could collect it into one place.
- Create a blueprint to start collecting up the To Dos.
- Ensure the current documentation matches the drivers actually in tree.
- Propose a more user friendly landing page for new contributors.
- Start developing content for missing components.