ex00 f_ex00.f c_ex00.c
This is a simple hello world program. Each processor prints out its rank and the size of the current MPI run (the total number of processors). A minimal C sketch follows the routine list below.
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Finalize
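A minimal C sketch of this kind of program (an illustration, not the c_ex00.c source itself):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this processor's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processors */
        printf("Hello from processor %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
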
ex01 f_ex01.f c_ex01.c
A simple send/receive program in MPI; a C sketch follows the routine list.
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Send
- MPI_Recv
- MPI_Finalize
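A hedged C sketch of the pattern, assuming rank 0 sends a single integer to rank 1 (the value 42 and the tag are made up for illustration):

    #include <stdio.h>
    #include <mpi.h>

    /* run with at least 2 processes */
    int main(int argc, char *argv[]) {
        int rank, value, tag = 0;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }
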
ex02 f_ex02.f c_ex02.c
Shows how to use MPI_Probe and MPI_Get_count to find the size of an incoming message before receiving it (C sketch below).
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Send
- MPI_Probe
- MPI_Get_count
- MPI_Recv
- MPI_Finalize
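A hedged C sketch of the probe-then-receive pattern; the message length (5 ints) and the two-process layout are made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, tag = 0;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            int data[5] = {1, 2, 3, 4, 5};
            MPI_Send(data, 5, MPI_INT, 1, tag, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int count;
            MPI_Probe(0, tag, MPI_COMM_WORLD, &status);  /* wait for a pending message */
            MPI_Get_count(&status, MPI_INT, &count);     /* learn how many ints it holds */
            int *buf = malloc(count * sizeof(int));      /* size the buffer to fit */
            MPI_Recv(buf, count, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            printf("received %d ints\n", count);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }
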
ex03 f_ex03.f c_ex03.c
This is a simple Isend/Irecv (nonblocking send/receive) program in MPI; a C sketch follows the list.
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Isend
- MPI_Irecv
- MPI_Wait
- MPI_Finalize
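A C sketch of the nonblocking exchange, assuming exactly two processes that swap one integer each:

    #include <stdio.h>
    #include <mpi.h>

    /* run with exactly 2 processes */
    int main(int argc, char *argv[]) {
        int rank, other, sendval, recvval;
        MPI_Request sreq, rreq;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;        /* partner rank, assuming 2 processes */
        sendval = rank;
        MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &rreq);
        MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &sreq);
        /* ...computation could overlap with the communication here... */
        MPI_Wait(&sreq, MPI_STATUS_IGNORE);  /* block until the send completes */
        MPI_Wait(&rreq, MPI_STATUS_IGNORE);  /* block until the receive completes */
        printf("rank %d got %d\n", rank, recvval);
        MPI_Finalize();
        return 0;
    }
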
ex04 f_ex04.f c_ex04.c
This is a simple broadcast program in MPI (C sketch below).
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Bcast
- MPI_Finalize
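A minimal C sketch, with the broadcast value 123 made up for illustration:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) value = 123;                        /* only the root has the data */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* now every rank does */
        printf("rank %d has value %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }
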
ex05 f_ex05.f c_ex05.c
This program shows how to use MPI_Scatter and MPI_Gather. Each processor gets different data from the root processor by way of MPI_Scatter. The data is summed and the partial sums are sent back to the root processor using MPI_Gather. The root processor then prints the global sum. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Scatter
- MPI_Gather
- MPI_Finalize
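A C sketch of the scatter/sum/gather pattern; the choice of two integers per processor is an assumption, not necessarily what c_ex05.c does:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i, mine[2], sum;
        int *data = NULL, *sums = NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {                           /* root prepares 2 ints per rank */
            data = malloc(2 * size * sizeof(int));
            sums = malloc(size * sizeof(int));
            for (i = 0; i < 2 * size; i++) data[i] = i;
        }
        MPI_Scatter(data, 2, MPI_INT, mine, 2, MPI_INT, 0, MPI_COMM_WORLD);
        sum = mine[0] + mine[1];                   /* each rank sums its piece */
        MPI_Gather(&sum, 1, MPI_INT, sums, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            int total = 0;
            for (i = 0; i < size; i++) total += sums[i];
            printf("global sum = %d\n", total);
            free(data); free(sums);
        }
        MPI_Finalize();
        return 0;
    }
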
ex06 f_ex06.f c_ex06.c
This program shows how to use MPI_Scatter and MPI_Reduce. Each processor gets different data from the root processor by way of MPI_Scatter. The data is summed and the partial sums are combined on the root processor using MPI_Reduce. The root processor then prints the global sum. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Scatter
- MPI_Reduce
- MPI_Finalize
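A C sketch; it differs from the previous example only in that MPI_Reduce replaces MPI_Gather plus the root-side summing loop:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i, mine[2], sum, total;
        int *data = NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            data = malloc(2 * size * sizeof(int));
            for (i = 0; i < 2 * size; i++) data[i] = i;
        }
        MPI_Scatter(data, 2, MPI_INT, mine, 2, MPI_INT, 0, MPI_COMM_WORLD);
        sum = mine[0] + mine[1];
        /* combine the partial sums on the root in a single call */
        MPI_Reduce(&sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) { printf("global sum = %d\n", total); free(data); }
        MPI_Finalize();
        return 0;
    }
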
ex07 f_ex07.f c_ex07.c
This program shows how to use MPI_Alltoall. Each processor sends a different random number to each of the other processors and receives one from each of them (C sketch below).
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Alltoall
- MPI_Finalize
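A C sketch, seeding rand() differently on each rank so every processor contributes different numbers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        srand(rank + 1);                     /* per-rank seed */
        for (i = 0; i < size; i++)
            sendbuf[i] = rand() % 100;       /* element i goes to rank i */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
        for (i = 0; i < size; i++)
            printf("rank %d got %d from rank %d\n", rank, recvbuf[i], i);
        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }
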
ex08 f_ex08.f c_ex08.c
This program shows how to use MPI_Gatherv. Each processor sends a different amount of data to the root processor. We use MPI_Gather first to tell the root how much data each processor is going to send. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Gather
- MPI_Gatherv
- MPI_Finalize
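A C sketch of the two-step pattern; having rank r contribute r+1 integers is an assumption for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i, mycount, total = 0;
        int *counts = NULL, *displs = NULL, *recvbuf = NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        mycount = rank + 1;                     /* rank r contributes r+1 ints */
        int *mydata = malloc(mycount * sizeof(int));
        for (i = 0; i < mycount; i++) mydata[i] = rank;
        /* step 1: plain gather of the counts so the root can size its buffer */
        if (rank == 0) counts = malloc(size * sizeof(int));
        MPI_Gather(&mycount, 1, MPI_INT, counts, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            displs = malloc(size * sizeof(int));
            for (i = 0; i < size; i++) { displs[i] = total; total += counts[i]; }
            recvbuf = malloc(total * sizeof(int));
        }
        /* step 2: gatherv with per-rank counts and displacements */
        MPI_Gatherv(mydata, mycount, MPI_INT,
                    recvbuf, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("root gathered %d ints in rank order\n", total);
        MPI_Finalize();
        return 0;
    }
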
ex09 f_ex09.f c_ex09.c
This program shows how to use MPI_Alltoallv. Each processor sends and receives a different, random amount of data to and from the other processors. We first use MPI_Alltoall to tell each processor how much data it will receive. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Alltoall
- MPI_Alltoallv
- MPI_Finalize
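A C sketch; the random per-destination counts (1 to 4 ints) are made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i, stotal = 0, rtotal = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int *scounts = malloc(size * sizeof(int));
        int *rcounts = malloc(size * sizeof(int));
        int *sdispls = malloc(size * sizeof(int));
        int *rdispls = malloc(size * sizeof(int));
        srand(rank + 1);
        for (i = 0; i < size; i++) scounts[i] = rand() % 4 + 1;
        /* step 1: exchange the counts so every rank knows what it will receive */
        MPI_Alltoall(scounts, 1, MPI_INT, rcounts, 1, MPI_INT, MPI_COMM_WORLD);
        for (i = 0; i < size; i++) {
            sdispls[i] = stotal; stotal += scounts[i];
            rdispls[i] = rtotal; rtotal += rcounts[i];
        }
        int *sendbuf = malloc(stotal * sizeof(int));
        int *recvbuf = malloc(rtotal * sizeof(int));
        for (i = 0; i < stotal; i++) sendbuf[i] = rank;
        /* step 2: the variable-size exchange itself */
        MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_INT,
                      recvbuf, rcounts, rdispls, MPI_INT, MPI_COMM_WORLD);
        printf("rank %d received %d ints in total\n", rank, rtotal);
        MPI_Finalize();
        return 0;
    }
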
ex10 f_ex10.f c_ex10.c
This program is designed to show how to set up a new communicator. We create a communicator, TIMS_COMM_WORLD, that includes all but the last of the processors. We use the routine MPI_Group_rank to find each processor's rank within the new communicator; for the last processor the rank is MPI_UNDEFINED because it is not part of TIMS_COMM_WORLD, so that processor calls get_input instead. The processors in TIMS_COMM_WORLD pass a token among themselves in the subroutine pass_token, while the excluded processor reads an input i from the terminal and passes it to processor 1 of MPI_COMM_WORLD. If i > 100 the program stops. A C sketch of the group and communicator setup follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Comm_group
- MPI_Group_incl
- MPI_Comm_create
- MPI_Group_rank
- MPI_Comm_dup
- MPI_Barrier
- MPI_Iprobe
- MPI_Recv
- MPI_Send
- MPI_Finalize
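The full example drives subroutines (pass_token, get_input) not shown here; this hedged C sketch covers only the group and communicator setup, with sub_comm standing in for TIMS_COMM_WORLD:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    /* run with at least 2 processes */
    int main(int argc, char *argv[]) {
        int rank, size, i, new_rank;
        MPI_Group world_group, sub_group;
        MPI_Comm sub_comm;                 /* plays the role of TIMS_COMM_WORLD */
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int *ranks = malloc((size - 1) * sizeof(int));
        for (i = 0; i < size - 1; i++) ranks[i] = i;   /* all but the last */
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_incl(world_group, size - 1, ranks, &sub_group);
        MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);
        MPI_Group_rank(sub_group, &new_rank);  /* MPI_UNDEFINED if excluded */
        if (new_rank == MPI_UNDEFINED)
            printf("world rank %d is outside the new communicator\n", rank);
        else
            printf("world rank %d is rank %d in the new communicator\n",
                   rank, new_rank);
        free(ranks);
        MPI_Finalize();
        return 0;
    }
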
ex11 f_ex11.f c_ex11.c
Shows how to use MPI_Type_vector to send noncontiguous blocks of data, and MPI_Get_count and MPI_Get_elements to see the number of items sent (C sketch below).
- MPI_Init
- MPI_Comm_rank
- MPI_Comm_size
- MPI_Type_vector
- MPI_Type_commit
- MPI_Send
- MPI_Recv
- MPI_Get_count
- MPI_Get_elements
- MPI_Finalize
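A C sketch, assuming a pattern of 3 blocks of 2 doubles with a stride of 4 (the actual dimensions in c_ex11.c may differ):

    #include <stdio.h>
    #include <mpi.h>

    /* run with at least 2 processes */
    int main(int argc, char *argv[]) {
        int rank, i, count, elements;
        double a[12];
        MPI_Datatype stride_type;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* 3 blocks of 2 doubles, one block every 4 doubles: noncontiguous */
        MPI_Type_vector(3, 2, 4, MPI_DOUBLE, &stride_type);
        MPI_Type_commit(&stride_type);
        if (rank == 0) {
            for (i = 0; i < 12; i++) a[i] = i;
            MPI_Send(a, 1, stride_type, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(a, 1, stride_type, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, stride_type, &count);       /* whole types: 1 */
            MPI_Get_elements(&status, stride_type, &elements); /* basic doubles: 6 */
            printf("got %d datatype(s) = %d doubles\n", count, elements);
        }
        MPI_Finalize();
        return 0;
    }
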
ex12 f_ex12.f c_ex12.c
Shows a shortcut method to create a collection of communicators. All processors with the same color will be in the same communicator; in this case the color is either 0 or 1 for even or odd processors. The index (key) argument gives each processor's rank in the new communicator. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Comm_split
- MPI_Bcast
- MPI_Finalize
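A C sketch of the split; the value broadcast within each sub-communicator is made up:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, new_rank, value = 0;
        MPI_Comm new_comm;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int color = rank % 2;                /* evens in one comm, odds in the other */
        /* key = rank keeps the original ordering within each new communicator */
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &new_comm);
        MPI_Comm_rank(new_comm, &new_rank);
        if (new_rank == 0) value = color + 100;      /* each sub-root sets a value */
        MPI_Bcast(&value, 1, MPI_INT, 0, new_comm);  /* broadcast within the sub-comm */
        printf("world rank %d: color %d, new rank %d, value %d\n",
               rank, color, new_rank, value);
        MPI_Finalize();
        return 0;
    }
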
ex13 f_ex13.f c_ex13.c
This program shows how to use MPI_Scatterv. Each processor gets a different amount of data from the root processor. We use MPI_Gather first to tell the root how much data is going to be sent to each processor. A C sketch follows the list.
- MPI_Init
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Gather
- MPI_Scatterv
- MPI_Finalize
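A C sketch of the two-step pattern; having rank r request r+1 integers is an assumption:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, i, mycount;
        int *counts = NULL, *displs = NULL, *sendbuf = NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        mycount = rank + 1;                       /* rank r wants r+1 ints */
        int *mydata = malloc(mycount * sizeof(int));
        /* step 1: gather the counts so the root can lay out its send buffer */
        if (rank == 0) counts = malloc(size * sizeof(int));
        MPI_Gather(&mycount, 1, MPI_INT, counts, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            displs = malloc(size * sizeof(int));
            int total = 0;
            for (i = 0; i < size; i++) { displs[i] = total; total += counts[i]; }
            sendbuf = malloc(total * sizeof(int));
            for (i = 0; i < total; i++) sendbuf[i] = i;
        }
        /* step 2: scatterv sends counts[r] ints from offset displs[r] to rank r */
        MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                     mydata, mycount, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d received %d ints\n", rank, mycount);
        MPI_Finalize();
        return 0;
    }
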