Large Scale SIG/Monitor
Revision as of 14:04, 24 November 2020
The second stage in the Scaling Journey is Monitor.
Once you have properly configured your cluster to handle scale, you will need to monitor it for signs of load stress. Monitoring in OpenStack can be overwhelming, and it is sometimes hard to determine how to monitor your deployment meaningfully so that you get advance warning when load is too high. This page aims to help answer those questions.
Once meaningful monitoring is in place, you are ready to proceed to the third stage of the Scaling Journey: Scale Up.
FAQ
Q: How can I detect that RabbitMQ is a bottleneck?
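One possible approach (an illustration, not an official SIG answer): watch for sustained backlog growth in the message queues. A minimal sketch that parses the output of `rabbitmqctl list_queues name messages consumers` and flags queues over a backlog threshold; the threshold value is an illustrative assumption to tune per deployment:

```python
# Parse `rabbitmqctl list_queues name messages consumers` output and
# flag queues whose backlog exceeds a threshold. A queue whose
# "messages" count keeps growing while consumers stay flat is a common
# sign that RabbitMQ (or its consumers) is the bottleneck.
# The default threshold (1000) is an assumption, not a SIG recommendation.

def find_backlogged_queues(output, threshold=1000):
    """Return (queue, messages, consumers) tuples over the backlog threshold."""
    backlogged = []
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip banner or malformed lines
        name, messages, consumers = parts
        if not (messages.isdigit() and consumers.isdigit()):
            continue  # skip the header row
        if int(messages) > threshold:
            backlogged.append((name, int(messages), int(consumers)))
    return backlogged

sample = """\
Listing queues ...
notifications.info 12042 1
conductor 3 5
compute.node1 0 2
"""
print(find_backlogged_queues(sample))
```

Running this periodically and comparing snapshots over time gives a rough trend; oslo.metrics (below) provides a more principled, per-RPC view of the same traffic.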
Resources
- oslo.metrics code (https://opendev.org/openstack/oslo.metrics/) and documentation (https://docs.openstack.org/oslo.metrics/latest/)
- Learn about golden signals (latency, traffic, errors, saturation) in the Google SRE book (https://sre.google/sre-book/monitoring-distributed-systems/#xref_monitoring_golden-signals)
Other SIG work on this stage
- Measurement of MQ behavior through oslo.metrics
  - Approved spec for oslo.metrics: https://review.opendev.org/#/c/704733/
  - Code up at https://opendev.org/openstack/oslo.metrics/
  - Get to a 0.1 release
    - Basic tests https://review.opendev.org/#/c/755069/ (ttx)
    - Latest code (genekuo)
    - Release jobs setup (ttx)
  - Get to a 1.0 release
    - oslo-messaging metrics code https://review.opendev.org/#/c/761848/ (genekuo)
    - Enable bandit (issue to fix with predictable path for metrics socket?)
    - Improve tests to get closer to 100% coverage
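The general shape of the oslo.metrics design, per the approved spec, is that library code pushes small metric events over a local unix domain socket to an aggregator process, which then exposes them to a monitoring system. The sketch below illustrates only that push-and-aggregate pattern using the stdlib; it is NOT the oslo.metrics wire format or API, and the metric and label names are made up:

```python
# Illustrative sketch of a metric-push pattern: a client sends JSON
# metric events over a datagram socket; an aggregator counts them.
# Not the oslo.metrics implementation -- names and format are assumptions.
import json
import socket

def send_metric(sock, name, labels, value=1):
    """Push one metric event as a JSON datagram."""
    event = {"metric": name, "labels": labels, "value": value}
    sock.send(json.dumps(event).encode())

def recv_and_aggregate(sock, counters):
    """Receive one event and add it to the in-memory counters."""
    event = json.loads(sock.recv(65536).decode())
    key = (event["metric"], tuple(sorted(event["labels"].items())))
    counters[key] = counters.get(key, 0) + event["value"]

# Demo: a socketpair stands in for the unix domain socket pair.
client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
counters = {}
send_metric(client, "rpc_client_invocation", {"exchange": "nova", "method": "ping"})
send_metric(client, "rpc_client_invocation", {"exchange": "nova", "method": "ping"})
recv_and_aggregate(server, counters)
recv_and_aggregate(server, counters)
print(counters)
```

Keeping the aggregator local to each node avoids adding monitoring traffic to the very message queue being monitored, which is part of the motivation for this design.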