ACENET Summer School - MPI: Glossary

Key Points

Introduction
  • mpirun starts multiple copies of any program, not only MPI programs

  • srun does the same as mpirun inside a Slurm job

  • mpirun and srun get the task count from the Slurm context

MPI Hello World
  • C requires #include <mpi.h> and is compiled with mpicc

  • Fortran requires use mpi and is compiled with mpif90

  • commands mpicc, mpif90, and others are wrappers around ordinary compilers

Hello World in detail
  • MPI processes have a rank from 0 to size-1

  • MPI_COMM_WORLD is the communicator comprising the ordered set of all processes

  • Functions MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank are fundamental to any MPI program

Send and receive
  • Functions MPI_Ssend, MPI_Recv

  • Types MPI_CHAR, MPI_Status

More than two processes
  • A typical MPI process calculates which other processes it will communicate with.

  • If there is a closed loop of processes sending to one another, there is a risk of deadlock.

  • All sends and receives must be paired at the time of sending.

  • Types MPI_DOUBLE (C), MPI_DOUBLE_PRECISION (Ftn)

  • Constants MPI_PROC_NULL, MPI_ANY_SOURCE

Blocking and buffering
  • There are advanced MPI routines that solve most common problems. Don’t reinvent the wheel.

  • Function MPI_Sendrecv

Collective communication
  • Collective operations involve all processes in the communicator.

  • Functions MPI_Reduce, MPI_Allreduce

  • Types MPI_FLOAT (C), MPI_REAL (Ftn)

  • Constants MPI_MIN, MPI_MAX, MPI_SUM

Diffusion simulation
  • Finite differences, stencils, guard cells

Where to go next

Glossary

FIXME