Task Parallelism (Concurrency Runtime)

This document describes the role of tasks and task groups in the Concurrency Runtime. Use task groups when you have two or more independent work items that you want to run concurrently. For example, suppose you have a recursive algorithm that divides the remaining work into two partitions. You can use task groups to run these partitions concurrently. Conversely, use parallel algorithms, such as Concurrency::parallel_for, when you want to apply the same routine to each element of a collection in parallel. For more information about parallel algorithms, see Parallel Algorithms.

A task is a unit of work that performs a specific job. A task can typically run in parallel with other tasks and can be decomposed into additional, more fine-grained tasks. A task group organizes a collection of tasks. Task groups push tasks onto a work-stealing queue. The scheduler removes tasks from this queue and executes them on available computing resources. After you add tasks to a task group, you can wait for all tasks to finish or cancel tasks that have not yet started.

The PPL uses the Concurrency::task_group and Concurrency::structured_task_group classes to represent task groups, and the Concurrency::task_handle class to represent tasks. The task_handle class encapsulates the code that performs work. This code comes in the form of a lambda function, function pointer, or function object, and is often referred to as a work function. You typically do not need to work with task_handle objects directly. Instead, you pass work functions to a task group, and the task group creates and manages the task_handle objects.

The PPL divides task groups into these two categories: unstructured task groups and structured task groups. The PPL uses the task_group class to represent unstructured task groups and the structured_task_group class to represent structured task groups.

Important

The PPL also defines the Concurrency::parallel_invoke algorithm, which uses the structured_task_group class to execute a set of tasks in parallel. Because the parallel_invoke algorithm has a more succinct syntax, we recommend that you use it instead of the structured_task_group class when you can. The topic Parallel Algorithms describes parallel_invoke in greater detail.

Use parallel_invoke when you have several independent tasks that you want to execute at the same time, and you must wait for all tasks to finish before you continue. Use task_group when you have several independent tasks that you want to execute at the same time, but you want to wait for the tasks to finish at a later time. For example, you can add tasks to a task_group object and wait for the tasks to finish in another function or from another thread.

Task groups support the concept of cancellation. Cancellation enables you to signal to all active tasks that you want to cancel the overall operation. Cancellation also prevents tasks that have not yet started from starting. For more information about cancellation, see Cancellation in the PPL.

The runtime also provides an exception-handling model that enables you to throw an exception from a task and handle that exception when you wait for the associated task group to finish. For more information about this exception-handling model, see Exception Handling in the Concurrency Runtime.

Although we recommend that you use task_group or parallel_invoke instead of the structured_task_group class, there are cases where you may want to use structured_task_group, for example, when you write a parallel algorithm that performs a variable number of tasks or requires support for cancellation. This section explains the differences between the task_group and structured_task_group classes.

The task_group class is thread-safe. Therefore, you can add tasks to a task_group object from multiple threads and wait on or cancel a task_group object from multiple threads. The construction and destruction of a structured_task_group object must occur in the same lexical scope. In addition, all operations on a structured_task_group object must occur on the same thread. The exceptions to this rule are the Concurrency::structured_task_group::cancel and Concurrency::structured_task_group::is_canceling methods. A child task can call these methods to cancel the parent task group or check for cancellation at any time.

You can run additional tasks on a task_group object after you call the Concurrency::task_group::wait or Concurrency::task_group::run_and_wait method. Conversely, you cannot run additional tasks on a structured_task_group object after you call the Concurrency::structured_task_group::wait or Concurrency::structured_task_group::run_and_wait method.

Because the structured_task_group class does not synchronize across threads, it has less execution overhead than the task_group class. Therefore, if your problem does not require that you schedule work from multiple threads and you cannot use the parallel_invoke algorithm, the structured_task_group class can help you write better performing code.

If you use one structured_task_group object inside another structured_task_group object, the inner object must finish and be destroyed before the outer object finishes. The task_group class does not require nested task groups to finish before the outer group finishes.

Unstructured task groups and structured task groups work with task handles in different ways. You can pass work functions directly to a task_group object; the task_group object will create and manage the task handle for you. The structured_task_group class requires you to manage a task_handle object for each task. Every task_handle object must remain valid throughout the lifetime of its associated structured_task_group object. Use the Concurrency::make_task function to create a task_handle object, as shown in the following basic example:


// make-task-structure.cpp
// compile with: /EHsc
#include <ppl.h>

using namespace Concurrency;

int wmain()
{
   // Use the make_task function to define several tasks.
   auto task1 = make_task([] { /*TODO: Define the task body.*/ });
   auto task2 = make_task([] { /*TODO: Define the task body.*/ });
   auto task3 = make_task([] { /*TODO: Define the task body.*/ });

   // Create a structured task group and run the tasks concurrently.

   structured_task_group tasks;

   tasks.run(task1);
   tasks.run(task2);
   tasks.run_and_wait(task3);
}


To manage task handles when you have a variable number of tasks, use a stack-allocation routine, such as _malloca, or a container class, such as std::vector.

Both task_group and structured_task_group support cancellation. For more information about cancellation, see Cancellation in the PPL.

The following basic example shows how to work with task groups. This example uses the parallel_invoke algorithm to perform two tasks concurrently. Each task adds sub-tasks to a task_group object. Note that the task_group class enables multiple tasks to add tasks to it concurrently.


// using-task-groups.cpp
// compile with: /EHsc
#include <ppl.h>
#include <sstream>
#include <iostream>

using namespace Concurrency;
using namespace std;

// Prints a message to the console.
template<typename T>
void print_message(T t)
{
   wstringstream ss;
   ss << L"Message from task: " << t << endl;
   wcout << ss.str(); 
}

int wmain()
{  
   // A task_group object that can be used from multiple threads.
   task_group tasks;

   // Concurrently add several tasks to the task_group object.
   parallel_invoke(
      [&] {
         // Add a few tasks to the task_group object.
         tasks.run([] { print_message(L"Hello"); });
         tasks.run([] { print_message(42); });
      },
      [&] {
         // Add one additional task to the task_group object.
         tasks.run([] { print_message(3.14); });
      }
   );

   // Wait for all tasks to finish.
   tasks.wait();
}


The following is sample output for this example:

Message from task: Hello
Message from task: 3.14
Message from task: 42

Because the parallel_invoke algorithm runs tasks concurrently, the order of the output messages could vary.

For complete examples that show how to use the parallel_invoke algorithm, see How to: Use parallel_invoke to Write a Parallel Sort Routine and How to: Use parallel_invoke to Execute Parallel Operations. For a complete example that uses the task_group class to implement asynchronous futures, see Walkthrough: Implementing Futures.

Make sure that you understand the role of cancellation and exception handling when you use task groups and parallel algorithms. For example, in a tree of parallel work, a task that is canceled prevents child tasks from running. This can cause problems if one of the child tasks performs an operation that is important to your application, such as freeing a resource. In addition, if a child task throws an exception, that exception could propagate through an object destructor and cause undefined behavior in your application. For an example that illustrates these points, see the Understand how Cancellation and Exception Handling Affect Object Destruction section in the Best Practices in the Parallel Patterns Library document. For more information about the cancellation and exception-handling models in the PPL, see Cancellation in the PPL and Exception Handling in the Concurrency Runtime.

How to: Use parallel_invoke to Write a Parallel Sort Routine

Shows how to use the parallel_invoke algorithm to improve the performance of the bitonic sort algorithm.

How to: Use parallel_invoke to Execute Parallel Operations

Shows how to use the parallel_invoke algorithm to improve the performance of a program that performs multiple operations on a shared data source.

Walkthrough: Implementing Futures

Shows how to combine existing functionality in the Concurrency Runtime into something that does more.

Parallel Patterns Library (PPL)

Describes the PPL, which provides an imperative programming model for developing concurrent applications.
