1.2 Cont'd: MPI_Reduce

This page explores the MPI_Reduce function, which can save us a great deal of time

MPI_Reduce

In MPI parlance, communication functions that involve all the processes in a communicator are called collective communications. To distinguish them from collective communications, functions such as MPI_Send and MPI_Recv are often called point-to-point communications.

figure 1.1

MPI_Reduce performs a reduction operation (figure 1.1), such as sum or max, across all processes within a communicator. Its arguments are:
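For reference, the C prototype can be sketched as follows; the parameter names match the list in these notes rather than the exact names used in the MPI standard:

```c
int MPI_Reduce(
    void*        input_data_p,   /* in  */
    void*        output_data_p,  /* out */
    int          count,          /* in  */
    MPI_Datatype datatype,       /* in  */
    MPI_Op       operator,       /* in  */
    int          dest_process,   /* in  */
    MPI_Comm     comm            /* in  */);
```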

  • input_data_p: Pointer to the input data in each process.

  • output_data_p: Pointer to the output data in the destination process.

  • count: Number of elements in the input data.

  • datatype: MPI datatype of each element in the input data.

  • operator: The reduction operation (e.g., MPI_SUM, MPI_MAX).

  • dest_process: Rank of the process that will receive the result of the reduction.

  • comm: MPI communicator used for the operation.

This function applies the specified operation across all processes' data and sends the result to the dest_process.

By using MPI_SUM, we can calculate the sum of the values computed in every process with a single call
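As a minimal sketch (assuming an MPI installation with the usual mpicc/mpirun toolchain), each process contributes one int and process 0 receives the sum:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int local_value = my_rank;   /* input_data_p on each process */
    int total = 0;               /* output_data_p, meaningful only on dest_process */

    /* count = 1, datatype = MPI_INT, operator = MPI_SUM, dest_process = 0 */
    MPI_Reduce(&local_value, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (my_rank == 0)
        printf("Sum of ranks = %d\n", total);

    MPI_Finalize();
    return 0;
}
```

Note that every process calls MPI_Reduce and passes an output buffer, even though only process 0 receives the result.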

Collective vs. Point-to-Point Communications

  • All the processes in the communicator must call the same collective function. For example, a program that tries to match a call to MPI_Reduce on one process with a call to MPI_Recv on another is erroneous

  • The arguments passed by each process to an MPI collective communication must be "compatible." For example, if one process passes in 0 as the dest_process and another passes in 1, then the outcome of a call to MPI_Reduce is erroneous

  • The output_data_p argument is only used on dest_process. However, all of the processes still need to pass an actual argument corresponding to output_data_p, even if it's just NULL

  • All collective communication calls are blocking

  • Point-to-point communications are matched on the basis of tags and communicators

  • Collective communications don’t use tags

  • They’re matched solely on the basis of the communicator and the order in which they’re called

MPI_Allreduce

We can also use MPI_Allreduce, which leaves the result of the reduction on every process, for a less tedious programming job when all processes need the sum
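A sketch of the same sum with MPI_Allreduce (again assuming an MPI toolchain); the argument list matches MPI_Reduce except that there is no dest_process, since every process receives the result:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int local_value = my_rank;
    int total = 0;

    /* No dest_process argument: the result is stored on every process */
    MPI_Allreduce(&local_value, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("Process %d sees total = %d\n", my_rank, total);

    MPI_Finalize();
    return 0;
}
```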

Broadcast

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a broadcast
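The broadcast function is MPI_Bcast; its C prototype can be sketched as follows, with parameter names matching the list in these notes:

```c
int MPI_Bcast(
    void*        data_p,       /* in/out */
    int          count,        /* in     */
    MPI_Datatype datatype,     /* in     */
    int          source_proc,  /* in     */
    MPI_Comm     comm          /* in     */);
```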

  • data_p: Pointer to the data to be broadcasted. On the source process, this is the data to be sent; on all other processes, this is where the broadcasted data will be stored.

  • count: Number of elements in the data.

  • datatype: MPI datatype of each element in the data.

  • source_proc: Rank of the source process that will broadcast the data.

  • comm: MPI communicator used for the broadcast.
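A minimal sketch of a broadcast (assuming an MPI toolchain): process 0 reads or computes a value, and MPI_Bcast copies it into every other process's buffer:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int value = 0;
    if (my_rank == 0)
        value = 42;   /* only source_proc holds meaningful data before the call */

    /* data_p = &value, count = 1, datatype = MPI_INT, source_proc = 0 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* After the call, value == 42 on every process */
    printf("Process %d has value = %d\n", my_rank, value);

    MPI_Finalize();
    return 0;
}
```

Note that data_p is an input argument on source_proc and an output argument everywhere else, which is why every process passes the same buffer expression.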

figure 1.2
