MPI_Ireduce function

Performs a global reduce operation (such as sum, maximum, or logical AND) across all members of a group in a non-blocking way.

Syntax

int MPIAPI MPI_Ireduce(
  _In_      void         *sendbuf,
  _Out_opt_ void         *recvbuf,
  _In_      int          count,
  _In_      MPI_Datatype datatype,
  _In_      MPI_Op       op,
  _In_      int          root,
  _In_      MPI_Comm     comm,
  _Out_     MPI_Request  *request
);

Parameters

  • sendbuf [in]
    The pointer to a buffer containing the data from this rank to be used in the reduction. The buffer consists of count successive elements of the MPI_Datatype indicated by the datatype handle. The message length is specified in terms of number of elements, not number of bytes.

  • recvbuf [out, optional]
    The pointer to a buffer to receive the result of the reduction operation. This parameter is significant only at the root process.

  • count [in]
    The number of elements to send from this process.

  • datatype [in]
    The MPI_Datatype handle representing the data type of each element in sendbuf.

  • op [in]
    The MPI_Op handle indicating the global reduction operation to perform. The handle can indicate a built-in or application-defined operation. For a list of predefined operations, see the MPI_Op topic. A sketch that uses an application-defined operation follows this parameter list.

  • root [in]
    The rank of the receiving process within the MPI_Comm comm.

  • comm [in]
    The MPI_Comm communicator handle.

  • request [out]
    The MPI_Request handle representing the communication operation.
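
The sketch below shows one way to call MPI_Ireduce with an application-defined operation created through MPI_Op_create. The AbsMax reduction function, the buffer contents, and the printed output are illustrative assumptions; only the MPI calls (MPI_Op_create, MPI_Ireduce, MPI_Wait, MPI_Op_free) come from the API itself.

    #include <mpi.h>
    #include <stdio.h>

    // Hypothetical user-defined reduction: element-wise maximum of absolute values.
    static void AbsMax(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype)
    {
        double *in = (double *)invec;
        double *inout = (double *)inoutvec;
        for (int i = 0; i < *len; ++i) {
            double a = (in[i] < 0.0) ? -in[i] : in[i];
            double b = (inout[i] < 0.0) ? -inout[i] : inout[i];
            inout[i] = (a > b) ? a : b;
        }
        (void)datatype; // this sketch handles MPI_DOUBLE only
    }

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Op absmax;
        MPI_Op_create(&AbsMax, 1 /* commutative */, &absmax);

        double local[2] = { (double)-rank, (double)rank };
        double result[2] = { 0.0, 0.0 };

        MPI_Request request;
        MPI_Ireduce(local, result, 2, MPI_DOUBLE, absmax, 0, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, MPI_STATUS_IGNORE);

        if (rank == 0) {
            printf("absmax = { %f, %f }\n", result[0], result[1]);
        }

        MPI_Op_free(&absmax);
        MPI_Finalize();
        return 0;
    }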

Return value

Returns MPI_SUCCESS on success. Otherwise, the return value is an error code.

In Fortran, the return value is stored in the IERROR parameter.
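
A minimal sketch of checking the return value in C follows. It assumes the communicator's error handler has been set to MPI_ERRORS_RETURN; with the default MPI_ERRORS_ARE_FATAL handler, an error aborts the job instead of returning an error code.

    #include <mpi.h>
    #include <stdio.h>

    // Minimal sketch: start a non-blocking integer sum and report a failure,
    // converting the error code to text with MPI_Error_string.
    static int StartSum(int *local, int *global, int count,
                        MPI_Comm comm, MPI_Request *request)
    {
        int err = MPI_Ireduce(local, global, count, MPI_INT, MPI_SUM,
                              0, comm, request);
        if (err != MPI_SUCCESS) {
            char message[MPI_MAX_ERROR_STRING];
            int length = 0;
            MPI_Error_string(err, message, &length);
            fprintf(stderr, "MPI_Ireduce failed: %s\n", message);
        }
        return err;
    }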

Fortran

    MPI_IREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, REQUEST, IERROR) 
        <type> SENDBUF(*), RECVBUF(*) 
        INTEGER COUNT, DATATYPE, OP, ROOT, COMM, REQUEST, IERROR

Remarks

A non-blocking call initiates a collective reduction operation which must be completed in a separate completion call. Once initiated, the operation may progress independently of any computation or other communication at participating processes. In this manner, non-blocking reduction operations can mitigate possible synchronizing effects of reduction operations by running them in the “background.”
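
The sketch below illustrates this pattern: the reduction is started with MPI_Ireduce, independent local work can proceed while it progresses, and MPI_Wait completes it. The buffer sizes and the placeholder computation are illustrative assumptions.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Each rank contributes a local partial result.
        double local[4] = { rank + 0.0, rank + 1.0, rank + 2.0, rank + 3.0 };
        double total[4] = { 0.0, 0.0, 0.0, 0.0 };

        MPI_Request request;
        MPI_Ireduce(local, total, 4, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD, &request);

        // Independent computation can overlap with the reduction here,
        // as long as it does not touch the buffers passed to MPI_Ireduce.

        MPI_Wait(&request, MPI_STATUS_IGNORE);

        if (rank == 0) {
            printf("total[0] = %f\n", total[0]);
        }

        MPI_Finalize();
        return 0;
    }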

All completion calls (e.g., MPI_Wait) are supported for non-blocking reduction operations.
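
For example, MPI_Test can poll for completion instead of blocking in MPI_Wait. In the sketch below, DoLocalWork is a hypothetical placeholder for computation that does not touch the reduction buffers.

    #include <mpi.h>

    void DoLocalWork(void); // hypothetical placeholder for independent computation

    // Minimal sketch: poll a request returned by MPI_Ireduce with MPI_Test,
    // doing local work between polls until the reduction completes.
    void PollUntilDone(MPI_Request *request)
    {
        int done = 0;
        while (!done) {
            MPI_Test(request, &done, MPI_STATUS_IGNORE);
            if (!done) {
                DoLocalWork();
            }
        }
    }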

Requirements

  • Product: Microsoft MPI v6
  • Header: Mpi.h; Mpif.h
  • Library: Msmpi.lib
  • DLL: Msmpi.dll

See also

MPI Collective Functions

MPI_Datatype

MPI_Op

MPI_Reduce

MPI_Test

MPI_Testall

MPI_Testany

MPI_Testsome

MPI_Wait

MPI_Waitall

MPI_Waitany

MPI_Waitsome