- Launchpad Entry: NovaSpec:nova-quota-schema
- Created: May 8 2011
- Contributors: Mark Washenberger
Currently, the quota table in the nova db has a column for every type of quota we manage, so each time we add a new type of quota to the system we have to make a schema change. A key-value approach to the quota schema would allow new types of quotas to be added without requiring subsequent schema changes.
Adds the concept of unlimited quotas.
- It is acceptable to drop history in the quota table on a downgrade of this schema change. See migration notes below for more details.
- In the design prior to this change, it is invalid to have multiple non-deleted quota rows for a single project id, despite the fact that the schema supports multiple such rows. Again, see migration details below.
The current table schema for quotas looks like:
- Unique Int id
- Datetime created_at
- Datetime updated_at
- Datetime deleted_at
- Boolean deleted
- String project_id
- Int instances
- Int cores
- Int gigabytes
- Int floating_ips
- Int metadata_items
A given project has only one quota row. The entries in the row are only overrides--if an entry is NULL that means to use the default as specified in a flag.
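The override semantics above can be sketched in Python. This is an illustrative helper, not Nova's actual quota code, and the default values are hypothetical stand-ins for the flag-supplied defaults:

```python
# Hypothetical defaults standing in for the values normally supplied by flags.
DEFAULT_QUOTAS = {'instances': 10, 'cores': 20, 'gigabytes': 1000,
                  'floating_ips': 10, 'metadata_items': 128}

def effective_quotas(row):
    """Merge a project's quota row (dict of column -> value or None)
    with the defaults. Under the old schema, NULL in a column means
    'use the default'; any other value is an override."""
    return {resource: row.get(resource) if row.get(resource) is not None
                      else default
            for resource, default in DEFAULT_QUOTAS.items()}
```

For example, a row with only `cores` set keeps the defaults for every other resource.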
There are a few proposals to add more items to the schema, in particular RAM limits. (Link?) Changes such as this would not require database migrations if we switched to a schema like
- Unique Int id
- Datetime created_at
- Datetime updated_at
- Datetime deleted_at
- Boolean deleted
- String project_id
- String resource
- Int limit
In this design, a given project would have multiple rows--one for each non-default quota setting it has. If a project has the default limit for a given resource, it would not have a row for that resource. Furthermore, the design introduces the concept of unlimited quotas: when the limit associated with a given resource is NULL, that resource is unlimited for that project.
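The lookup rules under the new schema can be sketched as follows. The helper and its use of `float('inf')` to represent "unlimited" are illustrative choices, not part of the proposal itself:

```python
UNLIMITED = None  # a row whose limit is NULL means 'no limit'

def effective_limit(override_rows, resource, default):
    """override_rows: (resource, limit) pairs for one project under the
    new key-value schema. A missing row means the default applies; a
    present row with a NULL limit means the resource is unlimited."""
    overrides = dict(override_rows)
    if resource not in overrides:
        return default
    limit = overrides[resource]
    # Representing 'unlimited' as infinity is just a convenience here.
    return float('inf') if limit is UNLIMITED else limit
```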
- Schema change in nova/db/sqlalchemy/
- Database interface change in nova/db/api.py
- Slight changes to bin/nova-manage
- Make nova/quota.py work with the new database interface
- Slight changes to tests
- Database migration code, see below
The schema prior to this change allows multiple active rows for a given project. However, only one quota row can apply at a time, so which set of quotas for that project wins is effectively a race condition. The database upgrade path cannot resolve this problem because it has no way to know which row should win. Therefore, if the upgrade script detects this condition, it balks and does not proceed. It is the deployer's responsibility to clean up these rows before running the migration. Since nova-manage, the main way to add quota information to a project, does not appear to allow this type of ambiguity, the problem should be rare.
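The precondition the upgrade script enforces can be expressed as a simple duplicate check. The real migration would issue an equivalent SQL query; this pure-Python version is only a sketch:

```python
from collections import Counter

def find_ambiguous_projects(rows):
    """rows: iterable of (project_id, deleted) pairs from the old quotas
    table. Returns the project ids that have more than one active
    (deleted=False) row, which the upgrade refuses to handle."""
    counts = Counter(pid for pid, deleted in rows if not deleted)
    return sorted(pid for pid, n in counts.items() if n > 1)
```

The upgrade would abort if this returned a non-empty list.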
The upgrade path is straightforward. For each non-null quota column in a given row, we create a new row in the new quotas table. If the quota value previously was NULL, meaning to use the default quota, no row is added to the new table. Timestamps and deleted status are copied from the old row to any new rows that are created.
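The fan-out step described above can be sketched as a small helper. `upgrade_row` is hypothetical, not Nova's actual migration code; the quota column names are the ones in the old schema:

```python
QUOTA_COLUMNS = ('instances', 'cores', 'gigabytes',
                 'floating_ips', 'metadata_items')

def upgrade_row(old_row):
    """Yield one new-style row dict per non-NULL quota column of an
    old-style row, carrying over timestamps and deleted status."""
    for resource in QUOTA_COLUMNS:
        limit = old_row.get(resource)
        if limit is None:
            continue  # NULL meant 'use the default': no override row
        yield {'project_id': old_row['project_id'],
               'resource': resource,
               'limit': limit,
               'created_at': old_row['created_at'],
               'updated_at': old_row['updated_at'],
               'deleted_at': old_row['deleted_at'],
               'deleted': old_row['deleted']}
```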
The schema after this change has a problem similar to the schema before the change. We want to make each combination of (project_id, resource) unique, but we can't because the deleted column would interfere. Therefore, the downgrade script checks to make sure there are no duplicates of a given (project_id, resource) pair where deleted is False. If any such duplicates are found, the script aborts. Again the deployer must resolve this ambiguity before reattempting the downgrade migration.
Downgrade actions and consequences:
The downgrade path is more complicated. The precondition above ensures that there is only one row for a given (project_id, resource, deleted=False). These rows are grouped together into a single row in the old-style quotas table. The created_at timestamp for the old-style row is the earliest among the set for the project. The updated_at timestamp for the old-style row is the latest among the set for the project. Any limit whose resource name is not recognized as part of the old-style schema is not considered. In the downgrade migration, it is not clear what to do with deleted rows. For simplicity, these rows are dropped--that is, history is not preserved on a downgrade. If a deployer wishes to retain history for their records, they must take a snapshot of the database before the downgrade.
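The collapse step above can be sketched as follows. `downgrade_project` is an illustrative helper, and its handling of unlimited (NULL-limit) rows is an assumption, since the old schema has no way to express "unlimited":

```python
OLD_COLUMNS = ('instances', 'cores', 'gigabytes',
               'floating_ips', 'metadata_items')

def downgrade_project(active_rows):
    """Fold one project's active new-style rows back into a single
    old-style row. Rows for resources the old schema does not know are
    dropped; created_at takes the earliest and updated_at the latest
    timestamp among the kept rows. A NULL limit (unlimited) ends up as
    NULL in the old row, i.e. it degrades to 'use the default' -- one
    plausible choice, since the old schema cannot express 'unlimited'."""
    kept = [r for r in active_rows if r['resource'] in OLD_COLUMNS]
    old = {column: None for column in OLD_COLUMNS}
    for r in kept:
        old[r['resource']] = r['limit']
    if kept:
        old['created_at'] = min(r['created_at'] for r in kept)
        old['updated_at'] = max(r['updated_at'] for r in kept)
    return old
```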
This need not be added or completed until the specification is nearing beta.
The boolean deleted column on each row in the database allows deployers to track change history. However, it also prevents enforcing uniqueness on other columns. Perhaps it would be better to move history tracking to another project or at least another table in the same database.
Should default settings move into the database?