TaskFlow/Engines

Engine

Engines are what actually run your tasks and flows.

An engine takes a flow structure (described by patterns) and uses it to decide which tasks to run and when.


There may be different implementations of engines. Some may be easier to use (for example, requiring no setup) and understand; others might require a more complicated setup but provide better scalability. The idea (and ideal) is that deployers of a service that uses taskflow can select the engine that suits their setup best without modifying the code of that service. This allows a service to start off with a simpler implementation and then scale out as it grows. In concept, all engines should implement the same interface, to make it easy to replace one engine with another, and provide the same guarantees on how patterns are interpreted -- for example, if an engine runs a linear flow, the tasks should be run one after another, in order.
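
To make the "same interface" idea concrete, here is a minimal sketch of a linear flow handed to an engine. The helper names used below (task.Task, linear_flow.Flow, taskflow.engines.run and the 'serial' engine name) come from the taskflow library itself rather than from this page, so treat them as illustrative assumptions.

  from taskflow import engines
  from taskflow import task
  from taskflow.patterns import linear_flow

  class Echo(task.Task):
      # Trivial task that just prints its own name when executed.
      def execute(self):
          print("running %s" % self.name)

  # A linear flow: whatever engine runs it must execute these tasks
  # one after another, in order.
  flow = linear_flow.Flow("example").add(
      Echo("first"),
      Echo("second"),
      Echo("third"),
  )

  # Swapping engines should not require changing the flow or the tasks;
  # only the engine selection (and its configuration) changes.
  engines.run(flow, engine="serial")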

Note: Engines might have different capabilities and different configuration options, but the overall interface should remain the same.

Types

Distributed

When you want your application's tasks and flows to be performed by a system that is highly available and resilient to individual failures.

See: DistributedTaskManagement and Celery for more details.

Demo: http://www.youtube.com/watch?v=SJLc3U-KYxQ

Traditional

When you want your tasks and flows to run inside your application's existing framework while still taking advantage of the functionality taskflow offers.

Supports the following (see the sketch after this list):

  • A threaded engine using thread pools.
  • A single-threaded engine using no threads.
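
As a rough sketch of choosing between the two (reusing the flow built in the earlier example), the snippet below loads the same flow once with the single-threaded engine and once with a thread-pool backed one. The 'serial'/'parallel' engine names and the executor keyword argument are assumptions based on later taskflow releases, not something this page defines.

  from concurrent import futures

  from taskflow import engines

  # Single-threaded: tasks run in the calling thread, no extra threads used.
  engines.load(flow, engine="serial").run()

  # Threaded: independent tasks may run concurrently on a thread pool.
  with futures.ThreadPoolExecutor(max_workers=4) as executor:
      engines.load(flow, engine="parallel", executor=executor).run()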

How

Blueprint: https://blueprints.launchpad.net/taskflow/+spec/patterns-and-engines

Blueprint: https://blueprints.launchpad.net/taskflow/+spec/distributed-celery

Blueprint: https://blueprints.launchpad.net/taskflow/+spec/eventlet-engine

Storage

Storage is out of the scope of these blueprints, but it is still worth pointing out its role here.

We already have storage in taskflow -- the logbook. But it should be emphasized that the logbook should become the authoritative, and preferably the only, source of runtime state information. When a task returns a result, it should be written directly to the logbook. When a task or flow state changes in any way, the logbook is the first to know. A flow should not store task results -- the logbook is there for that.

The logbook and a backend are responsible for storing the actual data -- together they specify the persistence mechanism (how and where data is saved -- memory, database, whatever) and the persistence policy (when data is saved -- every time it changes, at particular moments, or simply never).
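
As a hedged sketch of how an engine, a logbook, and a backend fit together (again reusing the flow from the first example): the module paths and helpers below (taskflow.persistence.backends.fetch, taskflow.utils.persistence_utils.temporary_log_book, and the backend/book arguments to engines.load) are taken from later taskflow releases and are assumptions here, not part of the blueprints themselves.

  import contextlib

  from taskflow import engines
  from taskflow.persistence import backends
  from taskflow.utils import persistence_utils

  # Persistence mechanism: where and how data is saved is chosen purely by
  # the backend connection string (memory://, sqlite, mysql, ...).
  backend = backends.fetch({"connection": "sqlite:////tmp/taskflow.db"})
  with contextlib.closing(backend.get_connection()) as conn:
      conn.upgrade()  # Make sure the backend's schema/tables exist.

  # The logbook is the single place the engine writes task results and
  # state transitions to while it runs the flow.
  book = persistence_utils.temporary_log_book(backend)

  engine = engines.load(flow, backend=backend, book=book)
  engine.run()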