How to: Perform Map and Reduce Operations in Parallel

This example shows how to use the concurrency::parallel_transform and concurrency::parallel_reduce algorithms and the concurrency::concurrent_unordered_map class to count the occurrences of words in files.

A map operation applies a function to each value in a sequence. A reduce operation combines the elements of a sequence into one value. You can use the C++ Standard Library std::transform and std::accumulate functions to perform map and reduce operations. However, to improve performance for many problems, you can use the parallel_transform algorithm to perform the map operation in parallel and the parallel_reduce algorithm to perform the reduce operation in parallel. In some cases, you can use concurrent_unordered_map to perform the map and the reduce in one operation.
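
For comparison, the following fragment is a minimal serial sketch of the same pattern, using hypothetical sample data and illustrative names (files, counts, total): std::transform performs the map, producing one count map per file, and std::accumulate performs the reduce, combining the per-file maps into one.

// Serial map: use std::transform to count the words in each file.
vector<vector<wstring>> files { { L"a", L"b", L"a" }, { L"b", L"c" } };
vector<unordered_map<wstring, size_t>> counts(files.size());
transform(begin(files), end(files), begin(counts),
    [](const vector<wstring>& words) {
        unordered_map<wstring, size_t> m;
        for (const auto& w : words) { ++m[w]; }
        return m;
    });

// Serial reduce: use std::accumulate to combine the per-file counts.
auto total = accumulate(begin(counts), end(counts),
    unordered_map<wstring, size_t>(),
    [](unordered_map<wstring, size_t> acc, const unordered_map<wstring, size_t>& m) {
        for (const auto& p : m) { acc[p.first] += p.second; }
        return acc;
    });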

Example

The following example counts the occurrences of words in files. It uses std::vector to represent the contents of two files. The map operation computes the occurrences of each word in each vector. The reduce operation accumulates the word counts across both vectors.

// parallel-map-reduce.cpp
// compile with: /EHsc
#include <ppl.h>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
#include <numeric>
#include <unordered_map>
#include <windows.h>

using namespace concurrency;
using namespace std;

class MapFunc 
{ 
public:
    unordered_map<wstring, size_t> operator()(vector<wstring>& elements) const 
    { 
        unordered_map<wstring, size_t> m;
        for_each(begin(elements), end(elements), [&m](const wstring& elem)
        { 
            m[elem]++;
        });
        return m; 
    }
}; 

struct ReduceFunc
{
    unordered_map<wstring, size_t> operator() (
        const unordered_map<wstring, size_t>& x, 
        const unordered_map<wstring, size_t>& y) const
    {
        unordered_map<wstring, size_t> ret(x);
        for_each(begin(y), end(y), [&ret](const auto& pr) {
            auto key = pr.first;
            auto val = pr.second;
            ret[key] += val;
        });
        return ret; 
    }
}; 

int wmain()
{ 
    // File 1 
    vector<wstring> v1 {
      L"word1", // 1
      L"word1", // 1
      L"word2",
      L"word3",
      L"word4"
    };

    // File 2 
    vector<wstring> v2 {
      L"word5",
      L"word6",
      L"word7",
      L"word8",
      L"word1" // 3
    };

    vector<vector<wstring>> v { v1, v2 };

    vector<unordered_map<wstring, size_t>> map(v.size()); 

    // The Map operation
    parallel_transform(begin(v), end(v), begin(map), MapFunc()); 

    // The Reduce operation 
    unordered_map<wstring, size_t> result = parallel_reduce(
        begin(map), end(map), unordered_map<wstring, size_t>(), ReduceFunc());

    wcout << L"\"word1\" occurs " << result.at(L"word1") << L" times. " << endl;
} 
/* Output:
   "word1" occurs 3 times.
*/

Compiling the Code

To compile the code, copy it and then paste it in a Visual Studio project, or paste it in a file that is named parallel-map-reduce.cpp and then run the following command in a Visual Studio Command Prompt window.

cl.exe /EHsc parallel-map-reduce.cpp

Robust Programming

In this example, you can use the concurrent_unordered_map class, which is defined in concurrent_unordered_map.h, to perform the map and the reduce in one operation.

// File 1 
vector<wstring> v1 {
  L"word1", // 1
  L"word1", // 2
  L"word2",
  L"word3",
  L"word4",
};

// File 2 
vector<wstring> v2 {
  L"word5",
  L"word6",
  L"word7",
  L"word8",
  L"word1", // 3
}; 

vector<vector<wstring>> v { v1, v2 };

// concurrent_unordered_map supports concurrent insertion, and the interlocked
// increment makes each count update atomic across threads.
concurrent_unordered_map<wstring, size_t> result;
for_each(begin(v), end(v), [&result](const vector<wstring>& words) {
    parallel_for_each(begin(words), end(words), [&result](const wstring& word) {
        InterlockedIncrement(&result[word]);
    });
});
            
wcout << L"\"word1\" occurs " << result.at(L"word1") << L" times. " << endl;

/* Output:
   "word1" occurs 3 times.
*/

Typically, you parallelize only the outer or the inner loop. Parallelize the inner loop if you have relatively few files and each file contains many words. Parallelize the outer loop if you have relatively many files and each file contains few words.
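
For example, the following sketch swaps the loops of the previous fragment so that the outer loop over the files runs in parallel and the words in each file are counted serially. Here, v is the same vector of files as in the previous fragment.

// Outer loop in parallel: process the files concurrently and count the
// words in each file serially.
concurrent_unordered_map<wstring, size_t> result;
parallel_for_each(begin(v), end(v), [&result](const vector<wstring>& words) {
    for_each(begin(words), end(words), [&result](const wstring& word) {
        // The interlocked increment is still required because two files
        // can contain the same word.
        InterlockedIncrement(&result[word]);
    });
});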

See also

Parallel Algorithms
parallel_transform Function
parallel_reduce Function
concurrent_unordered_map Class