StructuredWorkflowLocks

Rationale

Locks (and semaphores) are a critical component of most applications, and especially of structured workflows, where the entity applying a workflow must be able to ensure that it is the only entity working on that workflow and its associated resources (unless the workflow is read-only, in which case a set of reader-writer locks may suffice). Ensuring correct locking and correct locking order is typically one of the hardest parts of building reliable, fault-tolerant workflows, but it is absolutely necessary for consistent workflow operation (for example, guaranteeing that two or more workflows/entities are not modifying the same resource at the same time). This is especially relevant and important in large-scale distributed systems such as OpenStack, where many concurrent workflows are processed at the same time by many different entities (nova, cinder, quantum, for example).

Requirements

Since different workflows will need different lock types (mutexes, semaphores), the providing solution needs built-in flexibility: developers using it should be able to supply a set of desired requirements and get back a lock that matches (or closely matches) their desired requirements.

Oslo (WIP)

  • Intra-process (thread) lock using eventlet
  • Across-process lock using the local filesystem (see the sketch after this list)
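
As an illustration of the across-process case, a minimal sketch using a POSIX flock(2) on a lock file follows; the class name and paths are placeholders, not Oslo's actual implementation:

import fcntl
import os


class FileLock(object):
    """Across-process lock backed by a file on the local filesystem."""

    def __init__(self, path):
        self.path = path
        self._fd = None

    def acquire(self):
        # Open (creating if needed) the lock file, then take an exclusive
        # flock on it; this blocks until other processes release it.
        self._fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self._fd, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)
        self._fd = None

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()

One attraction of flock-based locks is that the operating system releases them automatically if the holding process dies.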

Ironic (WIP)

  • Exclusive when distributed across hosts (only one host can get it)
  • Shared and exclusive between threads (only one gets exclusive, other threads may take shared lock)
  • Reference from the lock to all holders (to allow for manual lock destruction); see the sketch after this list
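
A minimal sketch of the thread-level shared/exclusive case with holder tracking follows; the class and method names here are illustrative, not Ironic's actual code, and writer starvation is ignored for brevity:

import threading


class SharedExclusiveLock(object):
    """Thread-level lock with shared and exclusive modes that keeps a
    reference to all current holders (so they can be inspected, or the
    lock manually destroyed)."""

    def __init__(self):
        self._cond = threading.Condition()
        self.holders = set()      # identities currently holding the lock
        self._exclusive = False   # whether the current holder is exclusive

    def acquire(self, who, exclusive=False):
        with self._cond:
            if exclusive:
                # Only one holder allowed; wait until nobody holds it.
                while self.holders:
                    self._cond.wait()
                self._exclusive = True
            else:
                # Many shared holders allowed; wait out any exclusive one.
                while self._exclusive:
                    self._cond.wait()
            self.holders.add(who)

    def release(self, who):
        with self._cond:
            self.holders.discard(who)
            if not self.holders:
                self._exclusive = False
                self._cond.notify_all()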

Solution (WIP)

In order to accommodate the varying requirements for different lock types (mutexes, semaphores), this wiki proposes an API that takes in a set of lock requirements and provides back objects that attempt to satisfy those requirements (or raises an exception if the requirements are not satisfiable). The API would be backed by varying and configurable implementations. Each backing implementation can be queried about which requirements it satisfies, and the API retrieves lock objects from the most compatible backend provider. This allows deployers of the API to configure backends they feel comfortable with (memcache, redis, zookeeper, for example), while developers using the API only need to care that some backend matches the requirements they desire (and if a requirement is not satisfiable, the application should not run, or should fall back to less strict requirements).

API

# Lock requirements that a provider can attempt to satisfy.
DISTRIBUTED = 1
FILESYSTEM_BACKED = 2
THREAD_BACKED = 4
READER_WRITER = 8


class InvalidLockSpecification(Exception):
    pass


def provide(requirement, resource_identifier):
    """Provides a lock object on the given resource identifier that attempts
    to meet the given requirement or combination of requirements. The
    requirement should be a bitwise 'or' of the different requirements that
    you want your lock to have."""
    if ((requirement & DISTRIBUTED) and
            (requirement & (FILESYSTEM_BACKED | THREAD_BACKED))):
        raise InvalidLockSpecification("A lock can not be distributed and "
                                       "filesystem or thread backed at the "
                                       "same time.")

Providers

Zookeeper

Benefits:

  • Distributed.
  • Tolerant to backend failure.
  • Automatic lock release on lock consumer failure.
  • Replicated via quorums.
  • Prefers consistency over availability (CP in CAP terms).

Drawbacks:

  • Complex to deploy (it's Java).

Relevant python libraries:

Implementation language: Java
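
For example, kazoo is one Python client library that ships a ZooKeeper lock recipe; a minimal usage sketch, with placeholder hosts and paths:

from kazoo.client import KazooClient

client = KazooClient(hosts='127.0.0.1:2181')  # placeholder ensemble address
client.start()

# The lock recipe creates an ephemeral sequence node under the given path;
# if this process dies, ZooKeeper deletes the ephemeral node and the lock
# is released automatically.
lock = client.Lock('/locks/resource-1', 'my-identifier')
with lock:  # blocks until acquired
    pass  # ... do work while holding the lock ...

client.stop()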

Memcached

Benefits:

  • Somewhat distributed (keys are hashed to different servers).
  • Simple to deploy.

Drawbacks:

  • Not tolerant to backend failure.
  • Does not release automatically on lock consumer failure (but timeouts on locks are possible).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Prefers availability over consistency (AP in CAP terms).

Relevant python libraries:

Implementation language: C
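
A common (if fragile, per the drawbacks above) pattern builds a lock on memcached's atomic add with an expiry; a minimal sketch using the python-memcached client, with placeholder server address and key names:

import time

import memcache  # python-memcached, one client library option

client = memcache.Client(['127.0.0.1:11211'])  # placeholder server


def acquire(key, ttl=30, wait=0.1):
    """Spin until add succeeds; add is atomic and only stores the key if
    it does not already exist, so success means this caller owns the lock.
    The ttl bounds how long a crashed holder can keep the lock."""
    while not client.add(key, 'locked', time=ttl):
        time.sleep(wait)


def release(key):
    client.delete(key)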

Redis

Benefits:

  • Somewhat distributed (keys are hashed to different servers).
  • Can be set up to persist stored information using an append-only file (AOF).
  • Can be set up to replicate stored information.
  • Simple to deploy.

Drawbacks:

  • Locks may be inconsistent due to backend failure (even when setup with replication).
  • Does not release automatically on lock consumer failure (but timeouts on locks are possible).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Replication is asynchronous (non-blocking), so it can not be depended upon for consistency.
  • Prefers availability over consistency (AP in CAP terms).

Relevant python libraries:

Implementation language: C
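
Similarly, a single-key Redis lock can be sketched with SET ... NX EX (an atomic set-if-absent with an expiry), assuming a Redis server and redis-py version new enough to support those options; server address and key names are placeholders:

import time
import uuid

import redis  # the redis-py client, one library option

client = redis.StrictRedis(host='127.0.0.1', port=6379)  # placeholder server


def acquire(key, ttl=30, wait=0.1):
    """Spin until SET ... NX EX succeeds; NX makes the set atomic (only
    stores if the key is absent) and EX bounds how long a crashed holder
    keeps the lock. Returns a token identifying this holder."""
    token = str(uuid.uuid4())
    while not client.set(key, token, nx=True, ex=ttl):
        time.sleep(wait)
    return token


def release(key, token):
    # Only delete if we still own the lock; note this get-then-delete is
    # itself racy, so a real implementation would do it atomically (for
    # example via a server-side Lua script).
    if client.get(key) == token.encode():
        client.delete(key)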

Relevant Links

The CAP theorem places an upper bound on the guarantees (consistency, availability, partition tolerance) that any of the above solutions can provide.