
StructuredWorkflowLocks


Rationale

Locks (and semaphores) are a critical component of most typical applications. This is especially true for structuring workflows in a manner that allows the entity running a workflow to ensure that it is the only entity working on that workflow and its associated resources (unless the workflow is read-only, in which case a reader/writer lock may be more appropriate). Ensuring correct locking and locking order is typically one of the most difficult parts of creating reliable and fault tolerant workflows. Even so, locks are absolutely necessary to ensure consistent workflow operations (for example, knowing that two or more workflows/entities are not modifying the same resource at the same time). This is especially relevant and important in large scale distributed systems such as OpenStack, which have many concurrent workflows being processed at the same time by many different services (nova, cinder, quantum for example).

Requirements

Since different workflows will need different ['mutex', 'semaphore'] types, the providing solution needs built-in flexibility that allows developers using it to supply a set of desired requirements and get back a ['mutex', 'semaphore'] that matches (or closely matches) their desired requirements.

Oslo (WIP)

  • Intra-process (thread) lock using eventlet
  • Across-process lock using the local filesystem
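
For reference, a sketch of how these two existing behaviors are typically used (this assumes oslo's lockutils module; at the time of this page it lived in the oslo-incubator openstack.common package, while current packaging is oslo_concurrency, so the exact import path is an assumption):

from oslo_concurrency import lockutils  # import path varies by oslo packaging


@lockutils.synchronized("my-resource", external=True)
def update_shared_resource():
    # external=True uses a lock file on the local filesystem, so only one
    # process on this host runs this at a time.
    pass


@lockutils.synchronized("my-resource")
def update_local_state():
    # The default is an in-process lock, so only one (green)thread in this
    # process runs this at a time.
    pass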

Ironic (WIP)

  • Exclusive when distributed across hosts (only one host can get it)
  • Shared and exclusive between threads (only one gets exclusive, other threads may take shared lock)
  • Reference from the lock to all holders (to allow for manual lock destruction)

Solution (WIP)

In order to accommodate the varying requirements for different ['mutex', 'semaphore'] types, this wiki proposes an API that takes in a set of ['mutex', 'semaphore'] requirements and provides back objects that attempt to satisfy those requirements (or raises an exception if the requirements are not satisfiable). The providing API would be backed by varying & configurable implementations. Each backing implementation could be queried about which requirements it can satisfy, and the providing API would retrieve ['mutex', 'semaphore'] objects from the most compatible backend provider. This allows deployers of this API to configure backends they feel comfortable with (memcached, redis, zookeeper, ... for example) while allowing developers using the API to be concerned only with whether some backend matches the requirements they desire (and if a requirement is not satisfiable the application should refuse to work, or should fall back to using less strict requirements).

API

# Lock types that can be requested.
DISTRIBUTED = 1
INTER_PROCESS = 2
INTRA_PROCESS = 4

# Lock properties that can be requested.
#
MULTI_READER_SINGLE_WRITER = 1
HIGHLY_AVAILABLE = 2
REFERENCEABLE = 4
AUTOMATIC_RELEASE = 8

# These two can not both be requested.
ALWAYS_CONSISTENT = 32
USUALLY_CONSISTENT = 64

class InvalidLockSpecification(Exception):
    pass

def provide(requirement, resource_identifier):
    """Provides a lock object on the given resource identifier that attempts to
    meet the given requirement or combination of requirements. The requirement
    should be a bitwise 'or' of the different desired requirements that you want
    your lock to have."""
    if ((requirement & DISTRIBUTED) and
            (requirement & INTER_PROCESS or requirement & INTRA_PROCESS)):
        raise InvalidLockSpecification("A lock can not be distributed and "
                                       "inter-process or intra-process at the "
                                       "same time.")
    if (requirement & ALWAYS_CONSISTENT) and (requirement & USUALLY_CONSISTENT):
        raise InvalidLockSpecification("A lock can not be always consistent "
                                       "and usually consistent at the same time.")

Providers

Filesystem

Benefits:

  • Simple
  • Consistent for processes on the local system.

Drawbacks:

  • Does not release automatically on lock consumer failure (but timeouts on locks are possible).
  • Inconsistent or not possible on all filesystems (NFS for example).
  • Not partition tolerant or highly available (when the machine hosting the filesystem crashes, the lock is gone).

Relevant python libraries:

Built-in language: N/A
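
A minimal sketch of one way a filesystem backend could work (a lock file whose existence represents the lock); this is illustrative only, not the oslo implementation, and it shows why the lock does not release automatically if the holder crashes:

import errno
import os
import time


class FileLock(object):
    """A lock held by exclusively creating a lock file; the lock exists as
    long as the file does, so a crashed holder leaves it behind unless a
    timeout/cleanup policy is layered on top."""

    def __init__(self, path):
        self.path = path

    def acquire(self, poll_interval=0.1):
        while True:
            try:
                # O_EXCL makes creation atomic: only one process can win.
                fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
                time.sleep(poll_interval)

    def release(self):
        os.unlink(self.path)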

Zookeeper

Benefits:

  • Distributed.
  • Tolerant to backend failure - availability.
  • Automatic lock release on lock consumer failure - liveness.
  • Replicated via quorums.
  • Prefers consistency over availability when partitioned (a big plus for a locking service).
  • Can easily scale additional capacity up and down.

Drawbacks:

  • Complex to deploy (it's Java).

Relevant python libraries:

Built-in language: Java
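
As a minimal sketch (assuming the kazoo client library, one commonly used Python ZooKeeper client, and an ensemble reachable at 127.0.0.1:2181; the lock path and identifier below are arbitrary), acquiring a distributed lock looks roughly like this:

from kazoo.client import KazooClient

client = KazooClient(hosts="127.0.0.1:2181")
client.start()

# kazoo's lock recipe creates an ephemeral sequential node under the lock path;
# if the holder dies its session expires and the lock is released automatically
# (the liveness benefit noted above).
lock = client.Lock("/locks/resource-xyz", identifier="worker-1")
with lock:
    # ... work on the protected resource ...
    pass

client.stop()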

Memcached

Benefits:

  • Somewhat distributed (keys are hashed to different servers) - availability.
  • Simple to deploy.

Drawbacks:

  • Does not release automatically on lock consumer failure (but timeouts on locks are possible).
    • This is especially relevant if a multi-reader-single-writer lock is created and the writer fails before releasing (aka decrementing the key).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Prefers availability over consistency.
  • Inconsistencies possible due to the server flip-flopping problem.
    • For example: a lock key is stored on server A, server A dies and server B takes over A's key range, then server B dies and server A comes back with its original key range but stale values.
  • Can not easily scale capacity up and down (consistent hashing helps).

Relevant python libraries:

Built-in language: C
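
A minimal sketch of how a memcached-backed lock could work, assuming the python-memcached client and a server at 127.0.0.1:11211 (the key prefix and timeout are arbitrary); it relies on memcached's atomic add, which only succeeds when the key is absent, and on expiry as the only protection against a crashed holder (matching the drawbacks above):

import time

import memcache

client = memcache.Client(["127.0.0.1:11211"])


def acquire(name, owner, timeout=30, poll_interval=0.1):
    # add() is atomic and only succeeds if the key does not already exist, so
    # whichever caller adds the key first holds the lock until it expires or
    # is explicitly released.
    while not client.add("lock/" + name, owner, time=timeout):
        time.sleep(poll_interval)


def release(name):
    client.delete("lock/" + name)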

Redis

Benefits:

  • Somewhat distributed (keys are hashed to different servers) - availability.
  • Can be set up to persist stored information using an append-only file (AOF).
  • Can be set up to replicate stored information.
  • Simple to deploy.

Drawbacks:

  • Locks may be inconsistent due to backend failure (even when setup with replication).
  • Does not release automatically on lock consumer failure (but timeouts on locks are possible).
    • This is especially relevant if a multi-reader-single-writer lock is created and the writer fails before releasing (aka decrementing the key).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Replication is asynchronous (not blocking), so it can not be depended upon for consistency.
  • Prefers availability over consistency.
  • Inconsistencies possible due to the server flip-flopping problem.
    • For example: a lock key is stored on server A, server A dies and server B takes over A's key range, then server B dies and server A comes back with its original key range but stale values.

Relevant python libraries:

Built-in language: C
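
A minimal sketch of how a single-instance Redis lock could work, assuming the redis-py client and a server at 127.0.0.1:6379; it uses SET with nx (only set if absent) and ex (expiry), so, as noted in the drawbacks, expiry is the only protection against a crashed holder and replication does not make it consistent:

import time
import uuid

import redis

client = redis.StrictRedis(host="127.0.0.1", port=6379)


def acquire(name, timeout=30, poll_interval=0.1):
    token = str(uuid.uuid4())
    # SET with nx=True only succeeds when the key is absent; ex sets the expiry.
    while not client.set("lock/" + name, token, nx=True, ex=timeout):
        time.sleep(poll_interval)
    return token


def release(name, token):
    # Check-then-delete is not atomic; a small Lua script would be needed to
    # make release safe against the key expiring and being re-acquired.
    value = client.get("lock/" + name)
    if value is not None and value.decode("utf-8") == token:
        client.delete("lock/" + name)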

Databases

Not provided due to MVCC (multi-version concurrency control). Could be provided with limited semantics and limited capabilities if absolutely required.
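
For illustration only, one limited form a database-backed lock could take is a row lock held for the duration of a transaction via SELECT ... FOR UPDATE; the sketch below assumes a plain DB-API connection and a hypothetical resource_locks table whose rows already exist:

def with_row_lock(connection, resource_identifier, work):
    """Runs work() while holding a row-level lock on the resource's row; the
    lock only lasts as long as the surrounding transaction, which is one of
    the limited-semantics caveats mentioned above."""
    cursor = connection.cursor()
    try:
        cursor.execute(
            "SELECT id FROM resource_locks WHERE name = %s FOR UPDATE",
            (resource_identifier,))
        result = work()
        connection.commit()
        return result
    except Exception:
        connection.rollback()
        raise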

Relevant Links