Large Scale SIG/CephAndOpenStack
Ceph is often used in combination with OpenStack, which raises a number of questions at scale.
Q: Is it better to have a single large Ceph cluster for a given datacenter or availability zone, or multiple smaller clusters to limit the impact of a failure?
Q: How can Ceph cluster performance be optimized when the cluster mixes high-performance SSDs and standard HDDs?
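One common approach to the mixed SSD/HDD question is CRUSH device classes (available since Ceph Luminous): separate CRUSH rules steer latency-sensitive pools to SSDs and bulk pools to HDDs. The sketch below is a hedged illustration, not an endorsed answer; the rule and pool names (`fast`, `slow`, `volumes`, `images`) are hypothetical, and the commands assume a running cluster whose OSDs have auto-detected `ssd`/`hdd` device classes.

```shell
# Sketch: split placement by device class (assumes Luminous or later;
# OSDs normally report their device class automatically).
ceph osd crush rule create-replicated fast default host ssd
ceph osd crush rule create-replicated slow default host hdd

# Route latency-sensitive pools (e.g. Cinder volumes) to SSDs and
# capacity-oriented pools (e.g. Glance images) to HDDs.
# Pool names here are placeholders for your actual pools.
ceph osd pool set volumes crush_rule fast
ceph osd pool set images crush_rule slow
```

A variant of this layout is also often used to keep BlueStore WAL/DB on SSDs while data stays on HDDs.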
Q: Should erasure coding be used to increase resilience? What is its impact on performance in a VM use case?
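The capacity side of the erasure-coding trade-off is easy to quantify: replication stores each object in full `size` times, while an erasure-coded pool with `k` data chunks and `m` coding chunks stores `(k+m)/k` times the data while tolerating `m` chunk losses. A minimal sketch of that arithmetic (the function names are ours, not Ceph's):

```python
def raw_overhead_replication(size: int) -> float:
    # A replicated pool stores `size` full copies of every object.
    return float(size)

def raw_overhead_ec(k: int, m: int) -> float:
    # An erasure-coded pool splits an object into k data chunks and
    # adds m coding chunks, so raw usage is (k + m) / k per object.
    return (k + m) / k

# 3x replication vs a k=4, m=2 EC profile: both survive two failures,
# but EC halves the raw-space overhead (3.0x vs 1.5x).
print(raw_overhead_replication(3))  # 3.0
print(raw_overhead_ec(4, 2))        # 1.5
```

The capacity savings come at a CPU and latency cost, which is why replication is commonly preferred for small random writes such as RBD-backed VM disks; note also that using an EC pool directly for RBD requires overwrite support (`allow_ec_overwrites` on BlueStore).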
Q: How should a Ceph cluster be sized and tuned (number of nodes, configuration, enabled features...) for large OpenStack deployments (> 100 compute nodes)?
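One concrete sizing knob behind this question is the placement-group count per pool. The traditional rule of thumb targets roughly 100 PGs per OSD, divided by the replica count and rounded to a power of two; modern Ceph can manage this automatically via the `pg_autoscaler`, so treat the sketch below as the manual rule of thumb only (the function name is ours):

```python
import math

def target_pg_count(num_osds: int, replica: int, pgs_per_osd: int = 100) -> int:
    # Rule of thumb: ~100 PGs per OSD across the pool, divided by the
    # replication factor, rounded to the nearest power of two.
    raw = num_osds * pgs_per_osd / replica
    return 2 ** round(math.log2(raw))

# A 500-OSD cluster with 3x replication lands on 16384 PGs.
print(target_pg_count(500, 3))  # 16384
```

For a cluster backing > 100 compute nodes, this number feeds directly into mon/mgr load and recovery behavior, which is part of why the single-large vs. several-smaller-clusters question above matters.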