1. Mission
2. OpenStack Ecosystem
3. Contributing
4. Meetings
5. Use Cases
6. Architecture
7. High Level Optimization Process
8. Project Launchpad
9. Licensing
10. Documentation
11. Source code
12. FAQs
Mission

Watcher provides a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds. Watcher provides a complete optimization loop, including a metrics receiver, an optimization processor, and an action plan applier. This provides a robust framework to realize a wide range of cloud optimization goals, including reduced data center operating costs, increased system performance via intelligent virtual machine migration, increased energy efficiency, and more!
Not only does Watcher provide several out-of-the-box optimization routines for immediate value-add, but it also supports a pluggable architecture through which custom optimization algorithms, data metrics and data profilers can be developed and inserted into the Watcher framework. Additionally, Watcher supports two modes of execution, advise mode (manual) and active mode (automatic), giving cloud administrators the runtime flexibility their clouds require.
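The pluggable design described above can be illustrated with a minimal sketch. Note that the class and method names below are purely illustrative assumptions, not Watcher's actual strategy plugin API:

```python
from abc import ABC, abstractmethod

class Strategy(ABC):
    """Hypothetical optimization-strategy interface (illustrative only)."""

    @abstractmethod
    def execute(self, hosts):
        """Return a list of proposed actions for the given host metrics."""

class ConsolidationStrategy(Strategy):
    """Toy strategy: propose migrating VMs off lightly loaded hosts."""

    def __init__(self, cpu_threshold=0.2):
        self.cpu_threshold = cpu_threshold

    def execute(self, hosts):
        actions = []
        for host, metrics in hosts.items():
            if metrics["cpu_util"] < self.cpu_threshold:
                # Propose evacuating this under-utilized host.
                for vm in metrics["vms"]:
                    actions.append({"action": "migrate", "vm": vm, "src": host})
        return actions

# In advise mode Watcher would only display such a plan;
# in active mode it would apply it.
plan = ConsolidationStrategy().execute({
    "node1": {"cpu_util": 0.1, "vms": ["vm-a"]},
    "node2": {"cpu_util": 0.8, "vms": ["vm-b"]},
})
```

A custom algorithm would be swapped in simply by providing another `Strategy` implementation, which is the essence of the pluggable architecture.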
Most importantly, administrators of OpenStack-based clouds equipped with Watcher will decrease their Total Cost of Ownership (TCO) by way of more efficient use of their infrastructure and less “hands on” (read: manual) administrator involvement to perform optimizations.
OpenStack Ecosystem

It is important to understand how Watcher is positioned relative to other projects in the OpenStack ecosystem. As mentioned previously, Watcher provides resource optimization for the cloud, complementing projects like Nova and Cinder, which focus on the initial placement of resources (e.g., VMs and volumes, respectively). Over time, clouds can drift into a suboptimal state in terms of application performance, energy consumption, etc. This is where Watcher steps in: it helps cloud administrators tune and rebalance their clouds, using advanced algorithms to build recommended action plans! This can lead to better application throughput, lower data center operating costs, longer hardware life, and less involvement from cloud administrators to troubleshoot and tune the cloud, just to name a few.
Contributing

The project is under active development by our Watcher Drivers Team.
If you want to contribute, please read our guide on contributing to Watcher.
You can also see who is contributing to Watcher in the Stackalytics report.
Meetings

We meet every other Wednesday (odd weeks) at 08:00 UTC in the #openstack-meeting-alt IRC channel (meeting logs).
You can also use #openstack-watcher to contact the team (channel logs).
So, come on in and check out our Watcher Meeting Agenda.
Use Cases

The primary objective of this section is to articulate the main use cases of Watcher, its reference architecture, and how it can be used to optimize an OpenStack cloud; it does not necessarily cover every conceivable use case that Watcher can fulfill.
Use Case 1
As a cloud administrator using Watcher, I want to optimize my data center by performing "tuning actions" (e.g., virtual machine migration) based on an optimization goal (e.g., outlet temperature, airflow inlet temperature, power consumption, any other platform-specific measurement, or a function of other goals, i.e., multi-dimensional goals).
- Uses live migration
- Uses Nova scheduler filters (e.g., VM affinity, VM anti-affinity, etc.)
- Requires a correct set of host metrics (e.g., all the temperatures)
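A multi-dimensional goal of the kind mentioned above can be expressed as a weighted combination of normalized per-goal scores. The sketch below is purely illustrative; the goal names and weights are assumptions, not Watcher's built-in goals:

```python
def composite_score(metrics, weights):
    """Combine normalized per-goal values into one multi-dimensional goal.

    `metrics` maps goal name -> value normalized to [0, 1] (lower is
    better). Goal names and weights here are illustrative examples.
    """
    total = sum(weights.values())
    return sum(weights[goal] * metrics[goal] for goal in weights) / total

# Equal weighting of two hypothetical platform measurements.
score = composite_score(
    {"outlet_temperature": 0.6, "power_consumption": 0.4},
    {"outlet_temperature": 0.5, "power_consumption": 0.5},
)
```

An optimization strategy could then minimize this single score while still accounting for several platform measurements at once.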
Use Case 2
As a cloud administrator using Watcher, I want to specify a threshold (i.e., an objective function) or periodic interval that defines the point at which Watcher needs to optimize the environment.
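The threshold-or-interval trigger described in this use case can be sketched as a simple predicate. This is illustrative only; the parameter names are assumptions, not Watcher's configuration options:

```python
import time

def should_optimize(objective_value, threshold, last_run, interval, now=None):
    """Trigger an optimization pass when either condition holds:
    the objective function crosses its threshold, or the periodic
    interval has elapsed since the last run (illustrative sketch)."""
    now = time.time() if now is None else now
    return objective_value > threshold or (now - last_run) >= interval

# Objective breaches the threshold well before the hourly interval elapses.
triggered = should_optimize(0.9, threshold=0.8, last_run=0, interval=3600, now=60)
```

Either condition alone is enough to start a new optimization cycle, which matches the "threshold or periodic interval" wording of the use case.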
Use Case 3
As a cloud administrator, I want to be able to define an optimization goal for a set of resources (e.g., using OpenStack's resource types, such as host aggregates), so that I can have multiple goals within my data center based on the optimization objective of each set of resources (e.g., “production” vs. “development” vs. “test” host systems).
Use Case 4
As a cloud administrator, I want to be able to configure Watcher to run in "advise" or "active" mode, so that I can see what optimizations Watcher would make without necessarily allowing Watcher to perform them.
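The advise/active distinction boils down to reporting a plan versus also applying it. The mode names follow the use case above; the function itself is an illustrative sketch, not Watcher's API:

```python
def run_audit(actions, mode="advise", apply_action=None):
    """Report the proposed action plan; in "active" mode also apply
    each action via apply_action (a callable). Illustrative only."""
    proposed = []
    for action in actions:
        proposed.append(action)  # the plan is always visible to the admin
        if mode == "active" and apply_action is not None:
            apply_action(action)  # only executed in active mode
    return proposed

applied = []
plan = run_audit(["migrate vm-a to node2"], mode="advise",
                 apply_action=applied.append)
# advise mode: the plan is produced but nothing is applied
```

Switching `mode` to `"active"` is the only change needed to let the same plan be executed automatically.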
Use Case 5
As a cloud administrator, I want Watcher to understand (read: know about) out-of-the-box host metrics via Ceilometer (e.g., # of vcpus, CPU utilization %, memory used), so that I can create optimization goals very easily.
Use Case 6
As a cloud administrator, I want to be able to easily see what optimizations Watcher has made and understand their efficiency, so that I know what's going on in my data center while I'm not necessarily watching Watcher.
Architecture

- Watcher runs as a set of services within an OpenStack control plane (i.e., it is able to access other OpenStack services by way of RPC calls via the OpenStack message queue).
- Watcher is able to communicate with a number of cloud management control points (e.g., OpenStack Nova for VM migration operations, OpenStack Keystone for authentication services, etc.).
- Watcher provides well-defined interfaces for each of its logical components such that implementations of each module can be easily interchangeable.
- For each phase in the overall optimization process, Watcher provides extension points such that different implementations can be swapped in and out to achieve various optimization goals.
- Watcher generates AMQP notifications for any action that it performs via an asynchronous notification system (i.e., this is similar to the “Nova notifications” system), allowing downstream entities to be notified in an event-driven fashion for all Watcher-invoked actions.
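The notification behavior in the last bullet can be sketched as follows. The field names below are illustrative assumptions; real Watcher notifications follow the oslo.messaging notification format mentioned above:

```python
import json
import time

def make_notification(action, phase, payload):
    """Build a Watcher-style notification body for the message bus.
    Field names are illustrative, not Watcher's actual schema."""
    return json.dumps({
        "event_type": f"action.{phase}",  # e.g. action.execution.end
        "timestamp": time.time(),
        "payload": {"action": action, **payload},
    })

# A downstream consumer subscribed to the queue would receive this
# asynchronously, without polling Watcher.
msg = make_notification("migrate", "execution.end", {"instance": "vm-a"})
```

Because the message is emitted on the queue rather than returned to a caller, any number of downstream entities can react to Watcher-invoked actions in an event-driven fashion.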
The current architecture is described in the official documentation.
High Level Optimization Process
The various phases of the Watcher optimization process can be found here.
Licensing

Watcher is licensed under the Apache License v2.
Documentation

Watcher documentation: http://docs.openstack.org/developer/watcher/
Watcher CLI documentation: http://docs.openstack.org/developer/python-watcherclient
Watcher dashboard documentation: https://docs.openstack.org/watcher-dashboard/latest/
Source code

All source code is available on GitHub.
Source code of the module: https://github.com/openstack/watcher
The Watcher specs repository: https://github.com/openstack/watcher-specs
The Watcher client code: https://github.com/openstack/python-watcherclient
The Watcher Horizon dashboard plugin: https://github.com/openstack/watcher-dashboard
The Watcher Puppet Module: https://github.com/openstack/puppet-watcher
FAQs

Is Watcher just another VM scheduler?
No. Watcher performs resource optimization after VM deployment, rebalancing the environment over time. In fact, Watcher leverages a VM scheduler to help make its optimization decisions; the two are highly complementary technologies.
How does Watcher leverage other OpenStack projects?
While Watcher brings intelligent resource optimization to the OpenStack table, it does so by leveraging services provided by other projects. For example, when Watcher determines that an active VM would be more appropriately located on a different host within the cloud, Watcher asks Nova to perform a live migration operation to actually move the VM.
Can I run Watcher in Docker containers?
Yes. A dedicated project describes how to run Watcher within Docker containers.