Large Scale SIG/Monitor

The second stage in the Scaling Journey is Monitor.

Once you have properly configured your cluster to handle scale, you will need to monitor it for signs of load stress. Monitoring in OpenStack can be a bit overwhelming, and it is sometimes hard to determine how to monitor your deployment meaningfully so that you get advance warning when load is too high. This page aims to help answer those questions.

Once meaningful monitoring is in place, you are ready to proceed to the third stage of the Scaling Journey: Scale Up.

FAQ

Q: How can I detect that RabbitMQ is a bottleneck?

A: oslo.metrics will introduce monitoring for RPC calls; it is currently under development. RabbitMQ node CPU and RAM usage is also an indicator that your RabbitMQ cluster is overloaded: if you find CPU or RAM usage is high, you should scale your RabbitMQ nodes up or out.
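
In the meantime, memory pressure can also be checked directly through the RabbitMQ management API. The snippet below is a minimal sketch, assuming the management plugin is enabled on its default port (15672); the URL and credentials are placeholders for your own deployment. It reports each node's memory usage against its configured memory limit and flags nodes that are close to, or already in, a memory alarm.

    import requests

    # Assumptions: RabbitMQ management plugin enabled on port 15672;
    # "monitoring:secret" and the hostname are placeholders.
    RABBIT_API = "http://rabbitmq.example.com:15672/api/nodes"
    AUTH = ("monitoring", "secret")

    def check_rabbitmq_memory(threshold=0.8):
        """Warn when any node uses more than `threshold` of its memory limit."""
        for node in requests.get(RABBIT_API, auth=AUTH, timeout=10).json():
            used = node["mem_used"]
            limit = node["mem_limit"]
            ratio = used / limit
            status = "WARNING" if ratio > threshold or node.get("mem_alarm") else "ok"
            print(f"{node['name']}: {used / 2**20:.0f} MiB / {limit / 2**20:.0f} MiB "
                  f"({ratio:.0%}) [{status}]")

    if __name__ == "__main__":
        check_rabbitmq_memory()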

Q: How can I detect that the database is a bottleneck?

A: oslo.metrics will also integrate with oslo.db as the next step after oslo.messaging.
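
Until that integration lands, generic database metrics are a reasonable proxy. The snippet below is a minimal sketch, assuming a MySQL/MariaDB backend, the PyMySQL driver, and placeholder host and credentials; it compares the current connection count against max_connections and reports the slow-query counter.

    import pymysql

    # Assumptions: MySQL/MariaDB backend, placeholder host and credentials,
    # PyMySQL driver installed (pip install pymysql).
    conn = pymysql.connect(host="db.example.com", user="monitoring", password="secret")

    def global_value(cursor, statement, name):
        """Fetch a single SHOW GLOBAL STATUS/VARIABLES value by name."""
        cursor.execute(f"{statement} LIKE %s", (name,))
        return int(cursor.fetchone()[1])

    with conn.cursor() as cur:
        threads = global_value(cur, "SHOW GLOBAL STATUS", "Threads_connected")
        running = global_value(cur, "SHOW GLOBAL STATUS", "Threads_running")
        slow = global_value(cur, "SHOW GLOBAL STATUS", "Slow_queries")
        max_conn = global_value(cur, "SHOW GLOBAL VARIABLES", "max_connections")

    print(f"connections: {threads}/{max_conn} ({threads / max_conn:.0%} of limit), "
          f"running: {running}, slow queries since start: {slow}")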

Q: How can I track latency issues?

A: If you have a load balancer or proxy in front of your OpenStack API servers (e.g. haproxy, nginx), you can monitor API latencies based on the metrics provided by those services.
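
As a concrete example, haproxy exposes per-backend timing averages on its statistics endpoint in CSV form. The snippet below is a rough sketch, assuming the stats page is enabled at the placeholder URL shown and that your haproxy version provides the rtime/ttime columns (1.5 and later); it prints the average response and total session times for each backend.

    import csv
    import io

    import requests

    # Assumption: haproxy stats page enabled at this placeholder URL; the
    # ";csv" suffix returns the statistics as CSV with a "# " header line.
    STATS_URL = "http://lb.example.com:8404/stats;csv"

    resp = requests.get(STATS_URL, timeout=10)
    # Strip the leading "# " so the first line can serve as the CSV header.
    reader = csv.DictReader(io.StringIO(resp.text.lstrip("# ")))

    for row in reader:
        if row["svname"] != "BACKEND":
            continue  # only per-backend aggregates, not individual servers
        # rtime/ttime: average response/total time in ms over recent requests
        print(f"{row['pxname']}: avg response {row['rtime']} ms, avg total {row['ttime']} ms")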

Q: How can I track traffic issues?

A:

Q: How do I track error rates?

A:

  • For HTTP request error rates, you can use the same method as for latency tracking, based on the metrics exposed by the proxy.
  • For backend error rates, log-processing tools like Logstash or Fluentd can track error-level output in the OpenStack log files (see the sketch below).
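
For the second point, even without a full log pipeline in place, a small script can give a quick per-service error count. The snippet below is a minimal sketch, assuming the default oslo.log line format (in which the log level is the fourth whitespace-separated field) and using /var/log/nova/nova-api.log purely as a placeholder path; it counts ERROR-level lines per logger name.

    import sys
    from collections import Counter

    # Assumption: default oslo.log line format, roughly
    # "<date> <time> <pid> <LEVEL> <logger> [...] <message>",
    # so the level is the fourth whitespace-separated field.
    def count_errors(path):
        counts = Counter()
        with open(path, errors="replace") as log:
            for line in log:
                fields = line.split(None, 5)
                if len(fields) >= 5 and fields[3] == "ERROR":
                    counts[fields[4]] += 1  # group by logger name, e.g. nova.api
        return counts

    if __name__ == "__main__":
        # Placeholder path; point this at any OpenStack service log file.
        path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/nova/nova-api.log"
        for logger, count in count_errors(path).most_common():
            print(f"{count:6d}  {logger}")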


Q: How do I track saturation issues?

A:

Resources


Other SIG work on that stage