Graffiti/Architecture

Graffiti Architecture Concepts
At its most basic, Graffiti is intended to enable better metadata collaboration across services and projects for OpenStack users. Its initial goal is to provide cross-service metadata "tagging" and search aggregation for cloud resources.

Status
The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit. Since then, many of its concepts have been adopted and implemented across multiple OpenStack projects (Glance, Searchlight, Horizon). The overview below is legacy information that explains how the various components come together. For more info, see https://wiki.openstack.org/wiki/Graffiti#Current_Status

Workflow and Components

 * 1) Load your custom metadata definitions (called property types or capability types) into the Graffiti central dictionary, or configure Graffiti plugins to include / proxy existing definitions provided by the various services
 * 2) "Tag" the resources in the cloud with your properties and capabilities
 * 3) Let users find the resources with your desired properties and capabilities


 * Repeat across multiple cloud installations for capability portability.
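As a concrete illustration of the first workflow step, a capability-type definition might look like the following. This is a hypothetical sketch in Python: the field names are modeled on the namespace format later adopted by Glance's metadata definitions catalog, and the namespace, properties, and values shown are illustrative, not a literal Graffiti schema.

```python
# Hypothetical capability-type (namespace) definition for the dictionary.
# Field names imitate the JSON namespace files later adopted by Glance's
# metadata definitions catalog; Graffiti itself predates that schema.
ssd_capability = {
    "namespace": "OS::Example::SolidStateDrive",  # hypothetical namespace
    "display_name": "Solid State Drive",
    "description": "Indicates that a resource is backed by SSD storage.",
    "visibility": "public",
    "protected": False,
    # Resource types this capability may be "tagged" onto.
    "resource_type_associations": [
        {"name": "OS::Glance::Image"},
        {"name": "OS::Nova::Flavor"},
    ],
    # Optional key-value properties that further specialize the capability.
    "properties": {
        "iops": {
            "title": "IOPS",
            "description": "Sustained I/O operations per second.",
            "type": "integer",
        },
    },
}

# A dictionary service would validate and store definitions like this one.
applicable = [a["name"] for a in ssd_capability["resource_type_associations"]]
```

A definition like this is what would be loaded into the central dictionary or proxied from a service's existing schema by a plugin.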



Base Concepts

Various OpenStack services provide techniques to abstract low-level resource selection one level higher, such as flavors, volume types, or artifact types. These resource abstractions often allow "metadata" in the form of key-value pair properties to further specialize and describe instances of each resource type. However, collaborating on those properties is largely a disconnected and difficult process, often involving searching wikis and reading source code, and it only gets harder as a cloud grows. In addition, the same properties can often apply to resources from several different services. Graffiti makes this easier by creating the following concepts:
 * Capabilities and Requirements: Graffiti embraces the idea that cloud resources may be described using the notion of capabilities, a concept influenced by parts of OpenStack today as well as by industry specifications like OASIS TOSCA. (Please note, Graffiti is NOT an orchestration engine; it only assists in describing and locating existing resources in the cloud.)
 * Dictionary: A common API for services, admins, and users to discover and share their metadata vocabulary. This is the basis for creating an agreement on how to describe the various capabilities the cloud provides. It allows for a consistent UI and CLI experience for describing and finding resources.
 * Resource Directory: A common API to "tag" and search across existing and new services for cloud content based on the dictionary (metadata definitions).
 * Resource Capability Registry: A persistent shared repository for services to publish information about cloud resources. This can optionally be used by services instead of or in addition to having their own local native storage to describe resources.
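Taken together, the dictionary and resource directory concepts can be sketched as a toy in-memory service. This is purely illustrative Python; names like `Dictionary` and `ResourceDirectory` are invented for this sketch and are not real Graffiti classes or APIs.

```python
# Toy sketch of the Dictionary + Resource Directory concepts (hypothetical).
class Dictionary:
    """Shared vocabulary of capability definitions."""
    def __init__(self):
        self.definitions = {}

    def register(self, name, resource_types):
        # resource_types: which kinds of cloud resource may carry this capability
        self.definitions[name] = {"resource_types": set(resource_types)}


class ResourceDirectory:
    """'Tags' resources with dictionary capabilities and searches across them."""
    def __init__(self, dictionary):
        self.dictionary = dictionary
        self.tags = {}  # resource id -> set of capability names

    def tag(self, resource_id, resource_type, capability):
        definition = self.dictionary.definitions[capability]
        if resource_type not in definition["resource_types"]:
            raise ValueError("capability does not apply to this resource type")
        self.tags.setdefault(resource_id, set()).add(capability)

    def search(self, capability):
        # Cross-service search: return every resource tagged with the capability,
        # regardless of which service owns it.
        return sorted(r for r, caps in self.tags.items() if capability in caps)


d = Dictionary()
d.register("SSD", ["image", "flavor"])
directory = ResourceDirectory(d)
directory.tag("image-1", "image", "SSD")
directory.tag("flavor-9", "flavor", "SSD")
```

The point of the sketch is the separation of concerns: the dictionary holds the agreed vocabulary, while the directory applies it to resources and answers cross-service queries.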

Use Case Example: Compute Capabilities
In summary, the Graffiti concepts provide cross-service and cross-environment:
 * metadata definition aggregation and administration
 * resource metadata "tagging" aggregation
 * resource metadata search aggregation



Additional Details
The sections below provide an overview of the metadata aggregation, resource search optimization, and local resource registry concepts.



Graffiti API Benefits
When we first looked at a UI-only solution, we found it can work to a certain extent, with limitations. However, with a new service integrated or built into the ecosystem, the following additional benefits become available:
 * Command line and REST API for cross service searching
 * Ability to import / export definitions across deployments
 * Common persistence DB for definitions in multi-node / HA deployments
 * Private tag / metadata libraries. Users / projects will be able to have their own vocabulary for "tagging" resources
 * Authoring - We will provide an authoring and administration UI for creating and managing namespaces, capability types, etc
 * Resource search performance optimizations. We would like to introduce a high-performance indexing mechanism that crosses service boundaries.

Resource Search Optimization
Ideas:
 * Lazy loading. A simple pre-fetch mechanism: on session initiation or on the first request for a resource type, data is pulled into memory and held for a limited time. Subsequent searches are all done in memory. RBAC is handled via token pass-through.
 * Eager loading. The base idea is that a cache provider plugin can be added under the API. Resources that are indexable (those whose owning service supports notifications) would then be indexed via a combination of startup seeding and service resource event notifications. For example, Glance supports sending notifications on certain image changes. The index itself could be based on Elasticsearch, and the plugin would translate queries in and out of Elasticsearch. (Note: this portion of the concept has been mostly implemented by Project Searchlight.)
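The eager-loading idea can be sketched as a notification handler that keeps a local index current. This is a hypothetical Python sketch: the event names and payload shape imitate Glance-style image notifications for illustration, and a real implementation would back `search` with Elasticsearch rather than a linear scan.

```python
# Hypothetical notification-driven indexer, in the spirit of eager loading.
class ResourceIndex:
    def __init__(self):
        self.docs = {}  # resource id -> metadata document

    def seed(self, resources):
        # Startup seeding: bulk-load current state from the owning service.
        for res in resources:
            self.docs[res["id"]] = dict(res)

    def handle_notification(self, event_type, payload):
        # Keep the index current from service events (e.g. Glance image events).
        if event_type in ("image.create", "image.update"):
            self.docs[payload["id"]] = dict(payload)
        elif event_type == "image.delete":
            self.docs.pop(payload["id"], None)

    def search(self, **filters):
        # In a real deployment this query would be translated into an
        # Elasticsearch query by the cache provider plugin.
        return [
            doc for doc in self.docs.values()
            if all(doc.get(k) == v for k, v in filters.items())
        ]


index = ResourceIndex()
index.seed([{"id": "img-1", "disk": "ssd"}])
index.handle_notification("image.create", {"id": "img-2", "disk": "hdd"})
index.handle_notification("image.delete", {"id": "img-1"})
```

Because the index is driven by events after the initial seed, searches never need to fan out to the owning services at query time.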

Originally Proposed Horizon Concepts
These have been implemented in Horizon:


 * Horizon features
   * An admin UI for managing the catalog
     * Admin -> Metadata Definitions (Kilo)
   * A widget for associating metadata to different resources (an Update Metadata action on each row item below)
     * Admin -> Images (Juno)
     * Admin -> Flavors (Kilo)
     * Admin -> Host Aggregates (Kilo)
     * Project -> Images (Liberty)
     * Project -> Instances (Mitaka)
   * The ability to add metadata at launch time
     * Project -> Launch Instance (ng launch instance enabled) (Mitaka)

Legacy Info:

We believe that the Graffiti concepts can be fulfilled in Horizon with reusable widgets that we can plug into Horizon, as well as changes to screens like the launch instance wizard. The widgets will provide the ability to "tag" capabilities and (TBD) requirements on various resources. They will also be able to generate filter queries based on resource capabilities and properties.

Terminology Note
We think the term "metadata" is somewhat unapproachable, so we have been exploring the concept of a "capability". A capability can simply be thought of as a named "tag" which may or may not have properties. The idea is that a user can simply "tag" a capability onto various cloud resources such as images, volumes, host aggregates, flavors, and so on. The exact mechanism for how the data is stored is handled for the end user. Some resource types may not support capabilities / tags that have properties.
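In code, a capability under this terminology is just a name with optional key-value properties. The following is a hypothetical sketch; `Capability` is an invented illustration of the idea, not a Graffiti class.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A named 'tag' that may or may not carry key-value properties."""
    name: str
    properties: dict = field(default_factory=dict)

# A bare tag, and a tag specialized with a property:
gpu = Capability("GPU")
fast_disk = Capability("SSD", {"iops": 20000})
```

Resource types that don't support properties would simply only accept capabilities whose `properties` dict is empty.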

Concept Screencasts
To explore and explain the ideas, HP and Intel have created screencasts showing the concepts running under POC code. The styling is only representative of the point in time at which the demo was recorded and has since changed.


 * Screencast - Concept Overview

Concept Flow Mockup
The basic proposed flow is that we will be able to add a widget on any resource management screen where we want to be able to "tag" capabilities. For example, the images, volumes, flavors, and host aggregate screens are all good candidates. The goal is that the only customization required will be for the code using the widget to send in information about the resource / resource type being tagged. The resource type is sent to the API, which returns the capabilities applicable for that type of resource.
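That lookup — the widget sends a resource type, the API returns the applicable capabilities — might be sketched as follows. This is hypothetical Python; the definition shape and resource-type names are illustrative.

```python
# Hypothetical lookup: given a resource type, return applicable capabilities.
DEFINITIONS = [
    {"name": "SSD", "resource_types": ["OS::Glance::Image", "OS::Nova::Flavor"]},
    {"name": "TrustedHost", "resource_types": ["OS::Nova::Aggregate"]},
]

def capabilities_for(resource_type):
    """What the widget would receive back from the API for a given screen."""
    return [d["name"] for d in DEFINITIONS
            if resource_type in d["resource_types"]]
```

Each screen only has to know its own resource type; the filtering logic lives behind the API, so the widget code stays identical across screens.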

Launch Instance Example
 - Note: Tagging other resource types and searching for them could work similarly.



Style Mockups
We have been playing with various style mockups but aren't sure what makes sense or would be acceptable. The traditional Horizon look and feel can be achieved, but we also aren't sure that Horizon today has a good example of handling tree browsing. The following are some of the mockups we've created.



Proposed Horizon Component Architecture
We would like there to be a common way in Horizon to support "tagging" both simple named tags and key-value pairs that also supports the overall Graffiti concepts. In the proposed architecture, Horizon gains the value of the Graffiti concepts through a thin API plugin layer directly in Horizon, without the full "Dictionary" and "Resource Directory" APIs in the deployed environment. This provides benefits to Horizon now, without requiring a new Graffiti service to be incubated or adopted into other projects (a direction on which we are actively seeking input and advice). The widgets will be built to work with a common simple "resource syntax" that the external service API would provide.

The entire concept can be run in a lightweight way through a thin filesystem provider on the Horizon server that reads dictionary definition files directly from the filesystem, or from services that already provide schemas or tags. This would suffice for single-node deployments, or for deployments managed through a configuration management provider that ensures consistency of the definitions across Horizon nodes.
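A thin filesystem provider of this kind could be as simple as reading definition files from a configured directory. The sketch below is hypothetical: the one-JSON-document-per-file layout and the `namespace` key are assumptions for illustration.

```python
import json
from pathlib import Path

def load_definitions(directory):
    """Read dictionary definition files (assumed: one JSON document per file,
    each carrying a 'namespace' key) from a configured directory."""
    definitions = {}
    for path in sorted(Path(directory).glob("*.json")):
        doc = json.loads(path.read_text())
        definitions[doc["namespace"]] = doc
    return definitions
```

Since the files are plain JSON on disk, a configuration management tool can distribute identical copies to every Horizon node, which is exactly the consistency property the lightweight deployment relies on.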

If a full "Dictionary" / "Resource Directory" service API were available, the widgets wouldn't have to change even as new resource types and metadata definitions are added to the system. They would still go through the Horizon Graffiti component, which would add a plugin to talk to the appropriate central "Dictionary" / "Resource Directory" service endpoint(s), providing the full benefits.

Limits of a Horizon Only Solution
The widgets and concepts can be partially built in Horizon as stated above and diagrammed below without changes to existing services. However, there are a number of limitations that require some external service work as well.


 * 1) Horizon is, by design, a stateless server at this point. The only place persistent data can exist is if you choose to store session information on the server in a database. The default Horizon setup uses signed cookies to maintain session data and avoids a DB requirement.
 * 2) There is no privileged account running on the Horizon server, and thus no way to build a persistent datastore that only the admin can access. A persistent privileged session such as this creates many security issues.
 * 3) Horizon can be set up in an HA manner, which would require either duplicating the DB on multiple Horizon servers or dedicating another server to the DB backend for Horizon.
 * 4) The original scope discussed is only part of the picture; when the scope grows beyond the launch use case, it grows beyond usefulness for just Horizon. Isolating it in Horizon is limiting.