StructuredWorkflowLocks

Drafter: Harlowja

Revised on: 3/7/2014 by Harlowja

Rationale

Locks (and semaphores) are a critical component of most typical applications. This is especially true when structuring workflows in a manner that allows the entity applying the workflow to ensure it is the only entity working on that workflow and its associated resources (for example, multiple servers that modify shared resources concurrently may cause data inconsistency). Ensuring correct locking and locking order is typically one of the most difficult parts of creating reliable and fault-tolerant distributed workflows. Even so, locks are absolutely necessary to ensure consistent workflow operations. This is especially relevant in large-scale distributed systems such as OpenStack, which have many concurrent workflows being processed at the same time by many varying services (nova, cinder, quantum, ...).

Requirements

Since different workflows will need different ['mutex', 'semaphore'] types, the providing solution needs built-in flexibility that allows developers to specify a set of desired requirements and get back a ['mutex', 'semaphore'] that matches (or closely matches) those requirements.

Oslo (WIP)

  • Intra-process (thread) lock using eventlet
  • Across-process lock using the local filesystem (a usage sketch follows this list)
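
For reference, a minimal sketch of how the across-process oslo lock is typically used, assuming the oslo-incubator lockutils module has been synced into a project as openstack.common.lockutils (the lock name and prefix here are illustrative):

from openstack.common import lockutils

# external=True serializes callers across processes on the same host via a
# lock file; without it only threads inside this one process are serialized.
@lockutils.synchronized('resize-disk', 'mylib-', external=True)
def resize_disk(disk):
    pass  # only one process on this host runs this at a time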

Ironic (WIP)

  • Exclusive when distributed across hosts (only one host can get it)
  • Shared and exclusive between threads (only one gets exclusive, other threads may take shared lock; a sketch follows this list)
  • Reference from the lock to all holders (to allow for manual lock destruction)
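
To make the shared/exclusive requirement concrete, here is a minimal intra-process sketch (a simple reader-writer lock; the class name and fairness policy are illustrative assumptions, and it does not cover the holder-reference requirement):

import threading

class SharedExclusiveLock(object):
    """Many threads may hold this shared; only one may hold it exclusive."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if not self._readers:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()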

Solution (WIP)

In order to accommodate the varying requirements for different ['mutex', 'semaphore'] types, this wiki proposes an API that would take in a set of ['mutex', 'semaphore'] requirements and provide back objects that attempt to satisfy those requirements (or raise an exception if the requirements are not satisfiable). The providing API would be backed by varying & configurable implementations. Each backing implementation could be queried about which requirements it can satisfy, and the providing API would retrieve ['mutex', 'semaphore'] objects from the most compatible backend provider. This allows deployers of this API to configure backends they feel comfortable with (memcache, redis, zookeeper, ... for example), while allowing developers using the API to be concerned only with whether some backend matches the requirements they desire (and if a requirement is not satisfiable, the application should fail, or should fall back to using less strict requirements).

API

# Lock types that can be requested.
DISTRIBUTED = 1
INTER_PROCESS = 2
INTRA_PROCESS = 4

# Lock properties that can be requested.
MULTI_READER_SINGLE_WRITER = 1
HIGHLY_AVAILABLE = 2
REFERENCEABLE = 4
AUTOMATIC_RELEASE = 8

# These two can not both be requested at the same time.
ALWAYS_CONSISTENT = 32
USUALLY_CONSISTENT = 64

class InvalidLockSpecification(Exception):
    pass

def provide(type, requirements, resource_identifier):
    """Provides a lock object on the given resource identifier that attempts
    to meet the given requirement or combination of requirements. The
    requirements should be a bitwise 'or' of the desired requirement
    flags."""
    if (type & DISTRIBUTED) and (type & (INTER_PROCESS | INTRA_PROCESS)):
        raise InvalidLockSpecification("A lock can not be distributed and "
                                       "inter-process or intra-process at "
                                       "the same time.")
    if (requirements & ALWAYS_CONSISTENT) and (requirements & USUALLY_CONSISTENT):
        raise InvalidLockSpecification("A lock can not be always consistent "
                                       "and usually consistent at the same "
                                       "time.")

Providers

Filesystem

Benefits:

  • Simple
  • Consistent for local system processes.

Drawbacks:

  • Does not release automatically on lock holder failure (but timeouts on locks are possible).
    • Locks are released automatically though if the process dies (but not, say, if a green thread dies inside that process).
  • Not possible on all filesystems (NFS, for example).
  • Not partition tolerant (when the machine hosting the filesystem crashes, locks are gone).
  • Not highly available.
  • Not distributed.

Relevant python libraries:

  • https://pypi.python.org/pypi/lockfile
  • https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py

Built-in language: N/A
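
As an illustration, a minimal inter-process lock sketch using the standard fcntl module (POSIX-only; the lock file path handling and class name are assumptions):

import fcntl

class FileLock(object):
    """Inter-process lock backed by an exclusive flock on a local file."""

    def __init__(self, path):
        self.path = path
        self.handle = None

    def acquire(self):
        self.handle = open(self.path, 'w')
        # Blocks until no other process holds an exclusive lock on the file;
        # the OS releases the lock automatically if this process dies.
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)
        self.handle.close()
        self.handle = None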

Zookeeper

Benefits:

  • Distributed.
  • Tolerant to backend failure (availability).
  • Automatic lock release on lock holder failure (liveness).
  • Replicated via quorums.
  • Prefers consistency over partition tolerance.
  • Can easily scale capacity up and down.
  • Strong durability guarantees using an AOF (append-only file).
  • Mature and battle-hardened.

Drawbacks:

  • Somewhat complex to deploy (it's Java).

Relevant python libraries:

  • https://pypi.python.org/pypi/kazoo

Built-in language: Java
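
As an illustration, a minimal sketch using kazoo's lock recipe (the connection string, lock path, and holder identifier are illustrative):

from kazoo.client import KazooClient

client = KazooClient(hosts='127.0.0.1:2181')
client.start()

# The lock is backed by an ephemeral znode, so it is released automatically
# if this process dies or its ZooKeeper session is lost.
lock = client.Lock('/locks/resource-xyz', 'holder-identifier')
with lock:  # blocks until acquired
    pass  # do protected work
client.stop()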

Doozer

Benefits:

  • Distributed.
  • Tolerant to backend failure (availability).
  • Prefers consistency over partition tolerance.

Drawbacks:

  • Unknown maturity (??).
  • Lacks built-in durability.
  • Unknown deployment/upgrade/additional capacity strategy.

Relevant python libraries:

Built-in language: Go

Memcached

Benefits:

  • Distributed via key hashing to different servers (availability).
  • Simple to deploy.

Drawbacks:

  • Does not release automatically on lock holder failure (but timeouts on locks are possible).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Prefers partition tolerance over consistency.
  • Inconsistencies possible due to the server flip-flopping problem.
    • Ex: server A goes up and a lock key is stored on it, then A dies; server B takes over A's key range and hands the lock out again; when A comes back it serves its original key range with now-inconsistent values.
  • Can not easily scale capacity up and down (consistent hashing helps).
  • Not durable.

Relevant python libraries:

  • https://pypi.python.org/pypi/python-memcached

Built-in language: C
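
As an illustration, a minimal lock sketch on top of memcached's atomic add() operation, using the python-memcached client (the key, ttl, and polling interval are illustrative):

import time

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def acquire(key, ttl=30, wait=0.1):
    # add() only succeeds if the key does not already exist, so exactly one
    # caller wins; the ttl acts as the lock timeout mentioned above.
    while not mc.add(key, 'locked', time=ttl):
        time.sleep(wait)

def release(key):
    mc.delete(key)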

Redis

Benefits:

  • Distributed when partitioning via hashing keys to different servers (availability).
  • Can be set up to persist stored information using an AOF (durability).
  • Can be set up to replicate stored information.
  • Consistent (when not using partitioning via hashing keys to different servers).
  • Simple to deploy.

Drawbacks:

  • Locks may be inconsistent due to backend failure (even when set up with replication).
  • Does not release automatically on lock holder failure (but timeouts on locks are possible).
  • Does not handle network partitions (keys will be rehashed to a different server on failure).
  • Built-in replication is non-blocking and can not be depended upon to be consistent.
  • Prefers partition tolerance over consistency.
  • Can not easily scale capacity up and down (consistent hashing helps).
  • Inconsistencies possible due to the server flip-flopping problem.
    • Ex: server A goes up and a lock key is stored on it, then A dies; server B takes over A's key range and hands the lock out again; when A comes back it serves its original key range with now-inconsistent values.

Relevant python libraries:

  • https://pypi.python.org/pypi/redis

Built-in language: C
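
As an illustration, a minimal lock sketch using the redis client's SET with the NX (create only if absent) and PX (expiry) options; the key, token scheme, and ttl are illustrative:

import uuid

import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379)

def acquire(name, ttl_ms=30000):
    # The token records who holds the lock; the expiry acts as the timeout.
    token = str(uuid.uuid4())
    if r.set(name, token, nx=True, px=ttl_ms):
        return token
    return None

def release(name, token):
    # Check-then-delete is not atomic; a server-side Lua script would be
    # required to make release race-free.
    if r.get(name) == token:
        r.delete(name)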

Databases

Database-backed locks could be provided with limited semantics and limited capabilities if absolutely required.

See http://www.percona.com/doc/percona-xtradb-cluster/wsrep-system-index.html#wsrep_causal_reads for some interesting potential for making this possible with Percona.

Might require Galera.
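
As an illustration of the limited-semantics approach, a minimal sketch using MySQL's built-in named locks (the connection details are illustrative; note that GET_LOCK() locks live on a single server connection and are not cluster-wide, even under Galera):

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='locker', passwd='secret')
cur = conn.cursor()

# Blocks for up to 10 seconds; returns 1 on success and 0 on timeout. The
# lock is released by RELEASE_LOCK() or when the connection dies.
cur.execute("SELECT GET_LOCK('resource-xyz', 10)")
if cur.fetchone()[0] == 1:
    try:
        pass  # do protected work
    finally:
        cur.execute("SELECT RELEASE_LOCK('resource-xyz')")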

Relevant Links