MPI - Message Passing Interface

MPI uses a collection of objects called communicators and groups to define which processes may communicate with each other.

A communicator can be thought of as a pool of processes, and a program can contain several such communicators.
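
A minimal sketch of the idea (illustrative only, not from the tutorial): MPI_COMM_WORLD is the communicator containing every process, and MPI_Comm_split carves it into smaller communicators, each with its own pool of processes and its own rank numbering.

    /* Sketch: split MPI_COMM_WORLD into two sub-communicators by rank parity. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int world_rank, sub_rank;
        MPI_Comm sub;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* The "color" argument groups ranks: even ranks form one
           communicator, odd ranks another. */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub);
        MPI_Comm_rank(sub, &sub_rank);

        printf("world rank %d -> rank %d in its sub-communicator\n",
               world_rank, sub_rank);

        MPI_Comm_free(&sub);
        MPI_Finalize();
        return 0;
    }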

Two costs are involved in distributed processing:

  1. Bandwidth - limits how fast large amounts of data can be moved through the network, so it dominates the cost of large messages.
  2. Latency - places a lower limit on the transmission time of any message, however small, so it dominates the cost of many small messages.
https://computing.llnl.gov/tutorials/mpi/#What
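
A common back-of-the-envelope model (the usual first-order estimate, not a formula from the tutorial) for the time to send a message of n bytes:

    T(n) ≈ latency + n / bandwidth

So latency dominates when sending many small messages, and bandwidth dominates when sending a few large ones.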

http://stackoverflow.com/questions/18992701/mpi-c-derived-types-struct-of-vectors

http://stackoverflow.com/questions/9269399/sending-blocks-of-2d-array-in-c-using-mpi

A blocking call does not return until the function arguments (e.g. the send buffer) are safe for re-use in the program. A blocking synchronous send goes further: it returns only after a matching receive has been posted and the data transfer is under way, so the calling process sits idle waiting to be paired up with a matching receive.

Examples:
- MPI_Ssend(buffer,...) : blocking synchronous send. If two processes Ssend to each other before either posts a receive, neither call can complete - deadlock! (See the sketch after this list.)

- MPI_Send : standard blocking send. If the implementation copies the message into system buffers, the call can return before a matching receive exists (effectively asynchronous); otherwise it behaves just like MPI_Ssend.

- MPI_Bsend : buffered send. The message is copied into a user-supplied buffer and the call returns without waiting for a receive (asynchronous); the user must attach and detach that buffer with MPI_Buffer_attach / MPI_Buffer_detach.
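
Below is a minimal sketch (assuming exactly two ranks) of the MPI_Ssend deadlock mentioned above: both ranks block in a synchronous send, so neither ever reaches its receive. The safe alternative shown, MPI_Sendrecv, pairs the send and receive in one call.

    /* Sketch: head-to-head MPI_Ssend deadlocks; MPI_Sendrecv avoids it. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, other, sendval, recvval;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;              /* assumes exactly 2 ranks */
        sendval = rank;

        /* DEADLOCK: both ranks would block here, each waiting for a matching
           receive that the other cannot post until its own send completes.
        MPI_Ssend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
        MPI_Recv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        */

        /* Safe alternative: the send and receive are paired inside one call. */
        MPI_Sendrecv(&sendval, 1, MPI_INT, other, 0,
                     &recvval, 1, MPI_INT, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %d\n", rank, recvval);
        MPI_Finalize();
        return 0;
    }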

Collective Communications
Broadcast, Scatter, Gather - Broadcast sends the same data from one root process to all processes; Scatter splits an array (e.g. the different iterations of a loop) into chunks and gives one chunk to each process; Gather collects the chunks back at the root.
Reduction - a mathematical operation applied across the individual copies of a variable maintained by all the nodes. E.g.: finally summing together all the copies [maintained by the different nodes] of a particular variable (see the sketch below).
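
A rough sketch of the scatter-then-reduce pattern described above (array size and chunk length are made up for illustration): the root scatters equal chunks of an array, every rank sums its own chunk, and MPI_Reduce adds the per-rank partial sums back together on the root.

    /* Sketch: scatter chunks of an array, sum locally, reduce the partial sums. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 4                      /* elements per rank (illustrative) */

    int main(int argc, char **argv) {
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int *full = NULL;
        if (rank == 0) {                 /* only the root owns the whole array */
            full = malloc(CHUNK * nprocs * sizeof(int));
            for (int i = 0; i < CHUNK * nprocs; i++) full[i] = i;
        }

        int local[CHUNK];
        MPI_Scatter(full, CHUNK, MPI_INT, local, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

        int partial = 0, total = 0;
        for (int i = 0; i < CHUNK; i++) partial += local[i];   /* local work */

        /* Reduction: every rank's partial sum is added into 'total' on the root. */
        MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("sum = %d\n", total);
            free(full);
        }
        MPI_Finalize();
        return 0;
    }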


Derived Data Types
https://www.rc.colorado.edu/sites/default/files/Datatypes.pdf
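
An illustrative sketch of the idea behind the links above (matrix dimensions chosen arbitrarily): MPI_Type_vector describes strided data, for example one column of a row-major 2D array, so it can be sent or received in a single call without manual packing.

    /* Sketch: send one column of a row-major 2D array using MPI_Type_vector. */
    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 5

    int main(int argc, char **argv) {
        int rank;
        double a[ROWS][COLS];
        MPI_Datatype column;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ROWS blocks of 1 double each, stride of COLS doubles = one column. */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i][j] = 10 * i + j;
            /* Send column 2: a[0][2], a[1][2], ... in a single call. */
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&a[0][2], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int i = 0; i < ROWS; i++)
                printf("rank 1: a[%d][2] = %g\n", i, a[i][2]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }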
