= Large Scale SIG/CephAndOpenStack =

Please update your links! The Large Scale SIG documentation has now moved to:

https://docs.openstack.org/large-scale/

You can propose changes to the content through the [https://opendev.org/openstack/large-scale openstack/large-scale] git repository.


Ceph is often used in combination with OpenStack, which raises a number of questions when scaling.

== FAQ ==

'''Q: Is it better to have a single large Ceph cluster for a given datacenter or availability zone, or multiple smaller clusters to limit the impact in case of failure?'''
 
  
A:
 
 
'''Q: How to optimize Ceph cluster performance when it is composed of high-performance SSDs and standard HDDs?'''
 
 
 
A:
 
 
 
'''Q: Should erasure coding be used to increase resilience? What is its impact on performance in a VM use case?'''
 
 
 
A:
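
As a purely illustrative back-of-the-envelope look at the space side of this trade-off, the Python sketch below compares an assumed 4+2 erasure-coded profile with 3x replication. The 500 TB raw capacity and the 4+2 profile are example values, not recommendations, and the arithmetic says nothing about the extra latency and CPU cost erasure coding adds to small random writes from VMs.

<pre>
# Illustrative only: space overhead and failure tolerance for an assumed
# 4+2 erasure-coded pool versus 3x replication (example figures).

def usable_tb(raw_tb, overhead):
    """Usable capacity given raw capacity and a space-overhead factor."""
    return raw_tb / overhead

raw_tb = 500.0  # assumed raw cluster capacity, example value only

# 3x replication: every object stored three times, survives 2 overlapping failures.
replication_overhead = 3.0

# Erasure coding k=4, m=2: 4 data chunks + 2 coding chunks per object,
# survives m = 2 overlapping failures, overhead = (k + m) / k = 1.5.
k, m = 4, 2
ec_overhead = (k + m) / k

print(f"3x replication: {usable_tb(raw_tb, replication_overhead):.0f} TB usable, tolerates 2 failures")
print(f"EC {k}+{m}:        {usable_tb(raw_tb, ec_overhead):.0f} TB usable, tolerates {m} failures")
</pre>

In this example the erasure-coded pool roughly doubles usable capacity for the same failure tolerance, which is why the question weighs it against the write-path cost seen by VM workloads.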
 
 
 
'''Q: How to optimize a Ceph cluster (number of nodes, configuration, enabled features...) in large OpenStack clusters (> 100 compute nodes)?'''
 
 
 
A:
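
One concrete input to this kind of tuning is the number of placement groups per pool. The Python sketch below applies the commonly cited rule of thumb of roughly 100 PGs per OSD divided by the pool's replica size, rounded up to a power of two. The 120-OSD count and size-3 pool are assumed example values, and recent Ceph releases can also adjust pg_num automatically through the pg_autoscaler manager module.

<pre>
# Illustrative only: rule-of-thumb placement-group sizing for one pool,
# assuming ~100 PGs per OSD and rounding up to a power of two.
import math

def suggested_pg_num(num_osds, replica_size, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / replica_size
    return 2 ** math.ceil(math.log2(raw))

num_osds = 120       # assumed example: OSD count for a >100-compute-node cloud
replica_size = 3     # assumed example: replicated pool backing VM disks

print(f"Suggested pg_num: {suggested_pg_num(num_osds, replica_size)}")
# -> Suggested pg_num: 4096
</pre>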
 
 
 
 
 
== Resources ==
 
* https://www.openstack.org/videos/summits/sydney-2017/the-dos-and-donts-for-ceph-and-openstack
 
* https://www.youtube.com/watch?v=OopRMUYiY5E
 
* https://www.youtube.com/watch?v=21LF2LC58MM
 
* https://www.youtube.com/watch?v=0i7ew3XXb7Q
 
