
TaskFlow/Task Arguments and Results

Step 0: rename __call__ to execute

Just do that; __call__ looks too magical.
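For illustration, the rename is purely mechanical (MyTask here is a hypothetical example):

   # Before:
   class MyTask(task.Task):
       def __call__(self, context):
           ...

   # After: same behavior, less magical name.
   class MyTask(task.Task):
       def execute(self, context):
           ...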

Required task arguments

Required arguments MUST be passed to the task when it is executed, as arguments of its execute method.

The list of required arguments should be inferred from the execute method signature, as we already do for the @task decorator.

We also need to provide a way to override it, to make adaptors like FunctorTask possible.
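As a sketch of how such inference might work (an illustration, not the actual TaskFlow implementation), the required argument names can be read from the execute signature with the standard inspect module:

   import inspect

   def infer_required_args(task):
       # Sketch: required arguments are the parameters of execute
       # that have no default value ('self' is already bound away
       # on the bound method).
       sig = inspect.signature(task.execute)
       return [name for name, param in sig.parameters.items()
               if param.default is inspect.Parameter.empty
               and param.kind is param.POSITIONAL_OR_KEYWORD]

An explicit override for adaptors like FunctorTask would then simply replace the inferred list.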

Task results

A task result is what the task returns from its execute method. It is saved in the persistence layer along with the task details, and is passed to the revert method when/if the task is reverted.

To map the results of already-run tasks to task arguments, we must name task results. These names are used as keys for storage access ('memory' in TaskMachine terms).

But we can't infer result names from the task signature, so result naming should be explicit. To make the solution more symmetric with inferring argument names from the signature, we propose using a decorator on the execute method for that.

For example:

   class MyTask(task.Task):
       @decorators.returns('foo', 'bar', 'baz')
       def execute(self, context, spam, eggs):
           return 4, 8, 15  # 23 42

This task's required arguments are context, spam and eggs; it returns three values: foo, bar and baz. From its implementation it is easy to see that foo will be equal to 4, bar to 8 and baz to 15.

Again, we need to provide a way to override this, to make adaptors like FunctorTask possible.
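As a sketch of what such an override could look like (the constructor parameter names here are assumptions, not a settled API), a FunctorTask-like adaptor could take explicit argument and result names:

   def spawn_vm(context, vm_name, vm_params):
       ...

   # Hypothetical adaptor: argument and result names are given
   # explicitly instead of being inferred or decorated.
   spawn_task = FunctorTask(spawn_vm,
                            requires=('context', 'vm_name', 'vm_params'),
                            returns=('vm_id',))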

Note

Further examples specify flows using blocks/patterns from TaskFlow/Patterns and Engines.

Saving result with different name

It is sometimes convenient to save a task result under a different name when defining a flow. For that purpose, we propose adding a non-required save_as parameter to the task block.

For example, to spawn a VM, you might implement a simple task:

   class SpawnVMTask(task.Task):
       @decorators.returns('vm_id')
       def execute(self, context, vm_name, vm_params):
           servers = get_server_manager(context)
           return servers.create(name=vm_name, **vm_params).id

Then, to create two servers with the same parameters in parallel, you can use a parallel flow; to save the VM ids under different names, e.g. 'vm_id_one' and 'vm_id_two', add save_as parameters like this:


   blocks.ParallelFlow().add(
       blocks.Task(SpawnVMTask, save_as='vm_id_one'),
       blocks.Task(SpawnVMTask, save_as='vm_id_two')
   )

Rebinding arguments

There are cases when the value you want to pass to a task is stored under a name other than the corresponding task argument. For that, it is proposed to add a rebind_args task block parameter.

There are two possible ways of using it. The first is to pass a dictionary that maps task argument names to the names of saved values. For example, if you saved the VM name under the 'name' key, you can spawn a VM with that name like this:

   blocks.Task(SpawnVMTask, rebind_args={'vm_name': 'name'})

Second, you can pass a tuple or list of argument names, and the values saved under those names are passed to the task. The length of the tuple or list should not be less than the number of required task parameters. For example, you can achieve the same effect as in the previous example with

   blocks.Task(SpawnVMTask, rebind_args=('context', 'name', 'vm_params'))

which is equivalent to the more elaborate

   blocks.Task(SpawnVMTask,
               rebind_args=dict(context='context',
                                vm_name='name',
                                vm_params='vm_params')) 

Optional task arguments

A task gets optional arguments as keyword arguments to execute. Task authors may use the **kwargs syntax or add optional parameters as positional parameters with default values.

There is no need to specify which optional arguments a task accepts, though it is definitely worth documenting them. Which optional arguments are passed to the task is specified in the flow definition, in the same way as argument rebinding: add extra names to the list or dict, and the corresponding values will be fetched from storage and passed to execute as keyword arguments.

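As a sketch (the optional availability_zone argument is hypothetical), a task accepting an optional argument might look like this:

   class SpawnVMTask(task.Task):
       @decorators.returns('vm_id')
       def execute(self, context, vm_name, vm_params,
                   availability_zone=None):
           servers = get_server_manager(context)
           if availability_zone is not None:
               vm_params = dict(vm_params,
                                availability_zone=availability_zone)
           return servers.create(name=vm_name, **vm_params).id

To have the optional value passed to the task, mention its name in the flow definition alongside the required ones:

   blocks.Task(SpawnVMTask,
               rebind_args=('context', 'name', 'vm_params',
                            'availability_zone'))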