Details

In order to implement this new orchestration layer, the following key concepts must be built into the design from the start.

  1. Atomic task units.
  2. Combining atomic task units into a workflow.
  3. Task tracking and resumption.
  4. Resource locking and single workflow ownership.
  5. Minimal impact (allowing for this change to land).
  6. High availability.
  7. Tolerance to upgrades.
  8. Task cancellation (nice to have).

Atomic task units

Why it matters

Tasks that are created (either via code or other operation) must be atomic so that the task as a unit can be said to have been applied/completed, or the task as a unit can be said to have failed (or been cancelled). This allows said task to be rolled back as a single unit. Since a workflow (for example one composed of stages [1, 2, 3]) may fail at any stage, say stage 3, there needs to be a clearly defined path to roll back stages 2 and 1 (in that order). This task rollback concept is required to cleanly and correctly undo the actions that completed before a failure (or before a cancellation, which is not necessarily caused by a failure in the traditional sense).

How it will be addressed

States and state transitions which previously were in the run_instance path of nova (and other paths eventually) will need to be refactored into clearly defined tasks (likely with an apply() method and a revert() method, as shown below). These tasks will be split up so that each task performs a clearly defined and easily understandable single piece of work in an atomic manner (aka not one big task that does many different things) wherever possible.

Note that this will also help make testing of said tasks easier (since each task will have a clear set of inputs and a clear set of expected outputs/side-effects, which the revert() method should undo), something which is not possible without this kind of task refactoring work. Also note that if downstream services (quantum for example) work in an asynchronous manner that can not be cancelled, then rollbacks will be much harder to accomplish correctly. It appears most of the downstream services like libvirt, nova-network and similar do provide acquire/release semantics (or an analogous mechanism), which is needed to correctly perform rollback (and cancellation).

For example this is the task base class created in the prototype:

import abc

# Note: base.Base comes from the surrounding nova/prototype code; it is not
# defined in this snippet.


class Task(base.Base):
    __metaclass__ = abc.ABCMeta

    def __str__(self):
        return "Task: %s" % (self.__class__.__name__)

    @abc.abstractmethod
    def apply(self, context, *args, **kwargs):
        # Do whatever is needed to fulfill this task...
        #
        # Return a useful object for others to use; this object
        # must be serializable so that it can be given back in case of reversion.
        raise NotImplementedError()

    # Gets any context information (nova context) and the result
    # that apply() returned when it succeeded (if it succeeded)
    # so that said task can examine what it produced and undo whatever it
    # produced. The chain in which this reversion occurred is also provided, as well
    # as the exception itself and the state which caused this reversion.
    def revert(self, context, result, chain, excp, cause):
        pass
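
For illustration only, a concrete task built on this base might look like the following sketch; the CreateDBEntryTask name and the db_api calls are hypothetical stand-ins for nova's database layer, not part of the prototype:

class CreateDBEntryTask(Task):
    """Hypothetical task that creates the database entry for a new instance."""

    def apply(self, context, *args, **kwargs):
        # Perform one atomic piece of work and return a serializable result
        # that the chain can record (and hand back to revert() if needed).
        instance_values = kwargs['instance_values']               # assumed input
        return db_api.instance_create(context, instance_values)   # hypothetical db call

    def revert(self, context, result, chain, excp, cause):
        # Undo only what apply() produced: remove the entry that was created.
        if result is not None:
            db_api.instance_destroy(context, result['uuid'])      # hypothetical db call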

Combining atomic tasks into a workflow

Why it matters

A workflow in nova can typically be organized into a series of smaller tasks/states which can be applied as a single unit (and rolled back as a single unit). Since we need to be able to run individual tasks in a trackable manner (and we shouldn't duplicate this code in multiple places), a concept of a chain of tasks (for the mostly linear nova case) is needed so that a set of tasks can be run and rolled back as a group in an easy to use manner (which makes the code easier to read as well!).

How it will be addressed

The following was created to address the issue of running (and rolling back) multiple tasks as a single unit.

Note: to start, a linear chain of tasks is fine.

This is the 'chain' class created in the prototype:

import copy
from collections import OrderedDict

# Note: excutils, excp (nova exceptions), LOG, _ and the state constants
# (STARTING, COMPLETED, ERRORED) used below come from the surrounding nova /
# prototype code; they are not defined in this snippet.


class WorkflowChain(object):
    def __init__(self, name, tolerant=False, parents=None):
        # What tasks I have done that may need to be undone on failure.
        self.reversions = []
        self.name = name
        self.tolerant = tolerant
        # The order in which tasks will run is controlled by this - note it's only linear.
        self.tasks = OrderedDict()
        self.results = OrderedDict()  # Useful for accessing the result of a given task after running...
        # A parent set of tasks that will need to be rolled back if this one fails.
        self.parents = parents
        # This functor is needed to be able to fetch results for tasks which have already occurred.
        #
        # It is required since we need to be able to roll back a task even if it already completed
        # elsewhere, and one of the requirements we have put in place is that on rollback a task is
        # given the result it returned when its apply method was called.
        self.result_fetcher = None
        # This is a default tracker + other listeners which will always be notified when tasks start/complete/error.
        #
        # They are useful for alerting some external system that should be notified when a state changes.
        self.change_tracker = None
        self.listeners = []

    def __setitem__(self, name, performer):
        self.tasks[name] = performer

    def __getitem__(self, name):
        return self.results[name]

    def run(self, context, *args, **kwargs):
        for (name, performer) in self.tasks.items():
            try:
                self._on_state_start(context, performer, name)
                # See if we have already run this... (resumption!)
                result = None
                if self.result_fetcher:
                    result = self.result_fetcher(context, name, self)
                if result is None:
                    result = performer.apply(context, *args, **kwargs)
                # Keep a pristine copy of the result in the results table
                # so that if said result is altered by further states
                # the one here will not be.
                #
                # Note: python passes objects by reference, so someone else could modify the result,
                # which would be bad if we need to rollback and a result we created was modified by someone else...
                self.results[name] = copy.deepcopy(result)
                self._on_state_finish(context, performer, name, result)
            except Exception as ex:
                with excutils.save_and_reraise_exception():
                    try:
                        self._on_state_error(context, name, ex)
                    except Exception:
                        pass
                    cause = (name, performer, (args, kwargs))
                    self.rollback(context, name, self, ex, cause)
        return self

    def _on_state_error(self, context, name, ex):
        if self.change_tracker:
            self.change_tracker(context, ERRORED, name, self)
        for i in self.listeners:
            i.notify(context, ERRORED, name, self, error=ex)

    def _on_state_start(self, context, performer, name):
        if self.change_tracker:
            self.change_tracker(context, STARTING, name, self)
        for i in self.listeners:
            i.notify(context, STARTING, name, self)

    def _on_state_finish(self, context, performer, name, result):
        # If a future state fails we need to ensure that we
        # revert the one we just finished.
        self.reversions.append((name, performer))
        if self.change_tracker:
            self.change_tracker(context, COMPLETED, name, self,
                                result=result.to_dict())
        for i in self.listeners:
            i.notify(context, COMPLETED, name, self, result=result)

    def rollback(self, context, name, chain=None, ex=None, cause=None):
        if chain is None:
            chain = self
        # Revert the completed tasks newest-first; use a different loop
        # variable so the failing stage 'name' given to us is not shadowed.
        for (i, (r_name, performer)) in enumerate(reversed(self.reversions)):
            try:
                performer.revert(context, self.results[r_name], chain, ex, cause)
            except excp.NovaException:
                # Ex: WARN: Failed rolling back stage 1 (validate_request) of
                # chain validation due to nova exception
                # WARN: Failed rolling back stage 2 (create_db_entry) of
                # chain init_db_entry due to nova exception
                msg = _("Failed rolling back stage %s (%s)"
                        " of chain %s due to nova exception.")
                LOG.warn(msg, (i + 1), r_name, self.name)
                if not self.tolerant:
                    # This will log a msg AND re-raise the Nova exception if
                    # the chain does not tolerate exceptions
                    raise
            except Exception:
                # Ex: WARN: Failed rolling back stage 1 (validate_request) of
                # chain validation due to unknown exception
                # Ex. WARN: Failed rolling back stage 2 (create_db_entry) of
                # chain init_db_entry due to unknown exception
                msg = _("Failed rolling back stage %s (%s)"
                        " of chain %s, due to unknown exception.")
                LOG.warn(msg, (i + 1), r_name, self.name)
                if not self.tolerant:
                    # Log a msg AND re-raise the generic exception if the
                    # chain does not tolerate exceptions in the first place.
                    raise
        if self.parents:
            # Rollback any parent chains.
            for p in self.parents:
                p.rollback(context, name, chain, ex, cause)
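
As a usage sketch (not taken from the prototype), a chain could be assembled and run roughly like this; the task classes, the task log object and the request values are illustrative names only:

chain = WorkflowChain('run_instance')
chain['validate_request'] = ValidateRequestTask()   # hypothetical tasks
chain['create_db_entry'] = CreateDBEntryTask()

# Optional hooks: persist every state transition and allow the results of
# already-completed tasks to be fetched again on resumption.
chain.change_tracker = task_log.change_tracker      # hypothetical task log object
chain.result_fetcher = task_log.result_fetcher

# run() applies each task in order; on failure it rolls back the completed
# tasks (newest first) and then re-raises the original exception.
chain.run(context, instance_values=request_values)

# Results stay addressable by task name after the run.
db_entry = chain['create_db_entry']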

Task tracking & resumption

Why it matters

Key to the ability to horizontally scale your orchestration units is the ability to resume the work of a failed orchestration unit on another orchestration unit. This makes it so that these orchestration units can survive individual program failure and not put the tasks they were working on into an ERROR-like state, which should not be needed in a transactional, workflow oriented system. This resumption & its associated tracking also provide a useful bonus feature wherein it becomes very easy to audit the tasks and workflows that a given resource/request has gone through in the lifetime of that request. This has many varied usages and potentially creates new interesting features for later.

How it will be addressed

To start, a task log will need to be created, and when each task is executed that log will need to be updated accordingly. Note that it is not enough to just store the name of the task that completed; enough metadata must also be stored with that log entry to be able to correctly revert that task. Typically in nova there is a concept of a resource that is being modified by each task, so at the end of each task we can store the resource (which will pick up the modifications to that resource automatically). This task log and its associated metadata can be stored in different ways. For example a database could be used, or something less durable could be used as well, since once a workflow completes there may not be a need to keep the task log around. A database (or something similarly durable, for example zookeeper persistent nodes) would allow for workflow auditing, but those who do not care about auditing could use something like redis, which provides non-durable, expiry-based storage of complex objects.
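
As a sketch of what such a task log could look like when wired into the chain's change_tracker/result_fetcher hooks shown earlier (the WorkflowLog class and its storage backend are hypothetical, not part of the prototype):

class WorkflowLog(object):
    """Hypothetical task log keyed by (workflow name, task name).

    Each entry stores enough metadata (the task's result, typically the
    modified resource) to either skip the task on resumption or revert it
    correctly later on.
    """

    def __init__(self, backend):
        # backend is a stand-in for a database table, redis keys or
        # zookeeper nodes, depending on the durability that is wanted.
        self.backend = backend

    def change_tracker(self, context, state, name, chain, result=None):
        # Record every state transition; on completion the (serialized)
        # result is kept so the task can later be reverted or skipped.
        self.backend.save(chain.name, name, state, result)

    def result_fetcher(self, context, name, chain):
        # Return the previously stored result if this task already ran
        # (resumption), or None so the chain runs the task normally.
        return self.backend.fetch_result(chain.name, name)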

Issues: the admin_password entry is a part of this resource object; it needs to be stored securely (if we store it at all in the task-log metadata). If zookeeper is used it is likely useful to look into integrating its ACL management system (to prevent unauthorized parties from reading said resources). SSL, if required, may also have to be addressed and implemented (see [1]).
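
For the ACL concern, a minimal sketch of what enabling zookeeper's digest ACLs through kazoo could look like (the hostnames and credentials below are placeholders, not a vetted configuration):

from kazoo.client import KazooClient
from kazoo.security import make_digest_acl

# Only the orchestration service's digest credentials may touch the task-log
# nodes; everyone else is locked out of reading the stored resources.
acl = make_digest_acl('orchestrator', 'secret', all=True)
client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181',
                     default_acl=[acl],
                     auth_data=[('digest', 'orchestrator:secret')])
client.start()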

To accomplish resumption there needs to be a concept of workflow ownership, a way to notice when an orchestration unit has failed, and a way to release the ownership of that workflow so that other orchestration units can take over said ownership. Both of these problems can be easily solved by zookeeper, a battle-hardened program designed for exactly this kind of ownership and triggering, of which workflow ownership and releasing that ownership are prime examples. The way to understand this is to notice that ownership of a workflow is a lock on an object (this object being the workflow), and zookeeper has the ability to notify others when said lock is lost (either due to failure or to cancellation) via its concept of watches; these two general concepts allow for resumption in a distributed and highly scalable manner. Zookeeper even provides a few recipes which help in making this possible; see the kazoo recipe source code for a few examples (the locking queue concept is very useful).
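
A minimal sketch of the ownership idea using kazoo follows; the zookeeper paths, identifiers and the resume helper are illustrative assumptions, not part of the prototype:

from kazoo.client import KazooClient

client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
client.start()

# Owning a workflow == holding a lock on that workflow's node. The lock is
# backed by an ephemeral node, so zookeeper releases it automatically if the
# owning orchestration unit dies or loses its session.
workflow_lock = client.Lock('/workflows/run_instance/UUID-1234', 'worker-1')

# Standby units simply block here as well; when the current owner fails, one
# of them wakes up, takes over ownership and resumes from the task log.
with workflow_lock:
    resume_from_task_log(client, '/workflows/run_instance/UUID-1234')  # hypothetical helper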

Example code to try this concept: https://github.com/harlowja/zkplayground

See: http://research.yahoo.com/node/1849 and http://research.yahoo.com/pub/3280

Other users of zookeeper: https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy

Homepage: https://zookeeper.apache.org/

Note that it is likely very hard (or would require a lot of jumping through hoops) to correctly get these semantics with a plain old database (aka MySQL), since databases do not provide watches/triggering or distributed locking (especially when replicated, sharded or clustered). Without these semantics it is likely not easily (or cleanly) possible to have automatic resumption capabilities without some concept of periodic table-scanning tasks (which by itself does not address the distributed lock acquisition/release semantics).

Resource locking and single workflow ownership

Why it matters

In order to ensure consistent resource state and consistent workflow state, we first need to be able to lock the resource (think instance) that we are working on and also lock the workflow that is being worked on. This ensures that only a single entity is allowed to process said workflow and resource at any given time, which avoids state and resource inconsistencies and race conditions.

Note: in the future it might be possible to work on the same resource in parallel as long as the workflows do not have tasks with conflicting goals.

How it will be addressed

Zookeeper provides distributed locking in a tested and easy to use package; we should use its distributed locking mechanism to correctly and consistently lock the resources and workflows that are being worked on.
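
A small sketch of what this could look like with kazoo (the lock paths and identifiers are placeholders):

from kazoo.client import KazooClient

client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
client.start()

# Lock both the resource (the instance) and the workflow acting on it before
# doing any work, so no other entity can race with us on either of them.
instance_lock = client.Lock('/locks/instances/UUID-5678', 'worker-1')
workflow_lock = client.Lock('/locks/workflows/run_instance/UUID-5678', 'worker-1')

with instance_lock, workflow_lock:
    pass  # safe to run the workflow's tasks against this instance here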

Minimal impact

Why it matters

In order to land any code we must keep the impact of each review/submission minimal (while still keeping a clear, aligned view of the larger picture).

The types of problems that can happen are:

  • Path documentation/misunderstanding.
  • Too big of changes to review at once (aka large commits).
  • Uncoordinated changes or misunderstandings of what the change is trying to do.
  • Conflicts on paths being altered while others are working on said paths.
  • Unit tests requiring refactoring (and very likely additions) to ensure foundational changes do not destabilize existing deployments.

How it will be addressed

Path documentation/misunderstanding

Before we alter a path of code we need to deeply understand and document the current workflow and propose how said workflow will be refactored into the new model. This will serve as very useful reference documentation as well as increasing awareness (and associated discussion) of what currently exists and what will/could exist to replace the existing model.

Too big of changes to review at once (aka large commits)

This will be addressed by planning how to partition and organize the changes being done in a manner that is reviewable and understandable (thus not placing too much burden on the code reviewers). I believe this is a process and partitioning problem and should be easily addressable.

Uncoordinated changes or misunderstandings of what the change is trying to do

This will be addressed by ensuring that we are open (with documentation and discussion like this) about the change we are working on accomplishing (both at the micro level and at the macro level) and what the benefits of said change will be once completed. Cross-project coordination will be required, which means the leads of each project, as well as the developers (and others) performing this work, must be very open about what is changing and how they plan on accomplishing the macro-level goal, with clear documentation and communication with all members involved (even those not directly affected by said change).

Conflicts on paths being altered while others are working on said paths

Attempts should be made to avoid conflicts when refactoring paths that others are working on at the same time. We should use the previously mentioned coordination to avoid these types of conflicts wherever possible, and if they do occur, rework said refactoring to accommodate said conflicts on a case-by-case basis.

Unit tests

In order to avoid destabilization of the code base during said foundational changes, we need to ensure that whenever we alter a path the current unit tests and integration tests continue to work (if said unit test/integration test still makes sense after said change) and that we add new unit tests to paths which have few to begin with (which will likely become easier to accomplish after said refactoring).

High availability

Why it matters

Upon failure of an entity processing a given workflow, the workflow as a whole must not be placed into an ERROR state (which is commonly what nova does right now). Instead, the entity that is processing said workflow should be made highly available so that even upon individual entity failure a secondary entity (or 1 of N other entities) is able to claim said workflow and continue where the failed one left off. Since the workflows are processed by this HA set of entities, the workflows being processed are also highly available and tolerant to individual entity failure, thus increasing scale, reliability and stability, and decreasing the number of support requests and manual interventions required to move a workflow from the ERROR state to a recovered state.

How it will be addressed

Zookeeper allows us to notice when an entity processing a workflow has failed (due to its concept of heartbeats). This solves the detection of a failed entity and will cause the release of the locks said entity had acquired. Now the question becomes how we transition the workflow and its associated tasks to another entity that has not failed. One way to do this is to use the zookeeper concept of a watch, which can be set on a workflow 'node' by 2 or more other entities. These entities will attempt to acquire that workflow 'node' when the failed entity loses said acquisition. There could be a root directory node that receives these child nodes, and when a new item appears in that root directory node the other 2+ entities would set watches on said new nodes. This is one way of doing this. Another way, which has a pre-built recipe in zookeeper already, is to use the concept of a locking queue (see code), where an individual path/directory in zookeeper can be set up to receive work items (note here we could have different paths per job type, for example). Then the above work associated with triggering watches and selecting the acquiring entity is done by zookeeper itself (similar in concept to what was described for the above process). This seems like a good approach, as long as we handle how session loss will be detected and handled (likely per-task checks that we still own the workflow).
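
Using the locking queue recipe mentioned above, the hand-off could be sketched roughly as follows (the queue path, payload format and resume helper are assumptions, not an agreed design):

from kazoo.client import KazooClient

client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
client.start()

# Each job type could get its own queue path, as noted above.
queue = client.LockingQueue('/queues/run_instance')

# Producer side: the API (or another unit) enqueues a workflow identifier.
queue.put(b'workflow-UUID-1234')

# Consumer side: each orchestration unit blocks until zookeeper hands it an
# item; the item stays locked to this unit until consume() is called and is
# returned to the queue automatically if the unit's session is lost.
while True:
    workflow_id = queue.get()                  # blocks until an item is handed to us
    try:
        resume_workflow(client, workflow_id)   # hypothetical resume helper
        queue.consume()                        # done for good, remove the item
    except Exception:
        queue.release()                        # give it back for another unit to try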

Tolerant to upgrades

Why it matters

Currently in clouds that run OpenStack the upgrade process is fraught with trouble. Likely during said upgrade the APIs are turned off and in-flight operations are typically dropped (aka the MQ is reset), so any in-flight operations either need to be manually cleaned up or need to wait around for periodic tasks to come around and attempt repair (which may or may not be done correctly, since it is not usually possible to clean up everything in a workflow with only local knowledge of what occurred). This usually involves a lot of pain since in-flight instances end up in the ERROR state, resources are typically orphaned (if not cleaned up locally by a periodic task) and must be manually reclaimed, and end-users must now re-call the APIs to terminate their broken instances, which is a bad end-user experience.

How it will be addressed

The first step to solving in-flight operations being discarded is to make in-flight operations resumable. This is addressed by the resumption strategy discussed above: if the MQ or zookeeper is brought down (for planned or unplanned reasons) the task log associated with a given workflow can be referenced and resumed from. This solves the current situation where operations & resources are either abandoned, orphaned or placed into ERROR states. If this is not workable, the persistent (or at least temporally persistent, aka persistent until the workflow finishes) task log can also be referenced to determine what manual operations/cleanup needs to be done to resolve the state of a cluster, something which is hard to do right now without extensive code knowledge. The second problem here is what to do with workflow modifications that have occurred during said upgrade, say if a piece of the workflow was removed or a new task was added. This I think can be addressed by always maintaining version N-1 workflow compatibility, which will ensure that a workflow started in the old version can be completed in the new version. This has some side-effects and likely means that we need to at least tag each task, the workflow and the task log with a given version, to allow the selection of said version when said workflow is resumed. It also means that the rollback of said task log must accommodate the N-1 actions that were produced by the N-1 workflow and tasks. This will have to be enforced by standard practices (CI, CD, unit testing) and clear documentation on how to move from one version to another (and how to ensure that all N-1 workflows were completed successfully).
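
A rough sketch of how the version tagging could work (the registry, builder functions and task-log attributes below are illustrative only):

# Hypothetical version registry: each workflow definition is registered under
# a (name, version) key so a workflow started before an upgrade can be resumed
# with the exact (N-1) definition it started with.
WORKFLOW_REGISTRY = {
    ('run_instance', '1'): build_run_instance_v1,   # N-1 definition (hypothetical builder)
    ('run_instance', '2'): build_run_instance_v2,   # current definition (hypothetical builder)
}


def resume(task_log):
    # The task log records the workflow name + version it was started with;
    # selecting that exact definition keeps the logged results (and their
    # reversions) lined up with the tasks that produced them.
    build = WORKFLOW_REGISTRY[(task_log.workflow_name, task_log.version)]
    chain = build()
    chain.result_fetcher = task_log.result_fetcher
    chain.change_tracker = task_log.change_tracker
    return chain.run(task_log.context)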

Cancellation

Why it matters

How it will be addressed