
Scientific Computing Department

Faculty of Computer & Information Sciences


Ain Shams University

Distributed Computing
Lab 2: Point-to-Point Communication

Amal Said Khalifa
Lab 2: Point-to-Point Communications

Point-to-point communication is the most basic form of communication in MPI, allowing a program to send a message from one process to another over a given communicator. Each message has a source and a target process (identified by their ranks within the communicator), an integer tag that identifies the kind of message, and a payload containing arbitrary data.

There are two kinds of communication for sending and receiving messages via MPI's point-to-point facilities: blocking and non-blocking. A blocking point-to-point operation waits until the communication has completed on the local process before returning. For example, a blocking Send will not return until the message has been copied into MPI's internal buffers for transmission, while a blocking Receive will wait until a message has been received and completely decoded before returning. MPI's non-blocking point-to-point operations, on the other hand, initiate a communication without waiting for it to complete. Instead, they return a Request object, which can be used to query, complete, or cancel the communication. For our initial examples, we will use blocking communication.
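
As a small illustration of the non-blocking style (not used in the graded programs below), the following sketch exchanges one integer between processes 0 and 1 using Isend/Irecv and the returned Request objects. It is only a minimal sketch: the variable names, the message contents, and the process-count guard are our own choices for the example.

#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    int MyId, P;
    int sendVal, recvVal;

    MPI::Init(argc, argv);
    P = MPI::COMM_WORLD.Get_size();
    MyId = MPI::COMM_WORLD.Get_rank();

    if (P >= 2 && MyId < 2)
    {
        int partner = 1 - MyId;   // process 0 talks to 1 and vice versa
        sendVal = MyId;

        // Both calls return immediately and hand back Request objects.
        MPI::Request sendReq = MPI::COMM_WORLD.Isend(&sendVal, 1, MPI::INT, partner, 0);
        MPI::Request recvReq = MPI::COMM_WORLD.Irecv(&recvVal, 1, MPI::INT, partner, 0);

        // Useful work could be done here while the messages are in flight.

        // Wait() blocks until the corresponding communication has completed.
        sendReq.Wait();
        recvReq.Wait();
        cout << "Process " << MyId << " received " << recvVal << endl;
    }

    MPI::Finalize();
    return 0;
}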


Ping Pong:

For our first example of point-to-point communication, we will write a program in which task 0 pings task 1 and awaits a return pong. This program must be run on exactly two processes.

#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc,char **argv)


{
/** Ping Pong **/

int MyId, P;
int source, dest;
char msg = 'x';

MPI::Init(argc,argv);

P = MPI::COMM_WORLD.Get_size();

MyId = MPI::COMM_WORLD.Get_rank();

if ( MyId == 0)
{
source = 1;
dest = 1;

MPI::COMM_WORLD.Send(&msg, 1, MPI::CHAR, dest, 0);


MPI::COMM_WORLD.Recv(&msg, 1, MPI::CHAR, source, 0);
cout<<"Pong ";
}
else
{
source = 0;
dest = 0;

MPI::COMM_WORLD.Recv(&msg, 1, MPI::CHAR, source, 0);


cout<<"\nPing"<<endl;
MPI::COMM_WORLD.Send(&msg, 1, MPI::CHAR, dest, 0);
}

MPI::Finalize();
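
The ping-pong program expects exactly two processes. Assuming a standard MPI installation such as MPICH or Open MPI, it can typically be compiled with the implementation's C++ wrapper compiler and launched on two processes, for example mpic++ pingpong.cpp -o pingpong followed by mpirun -np 2 ./pingpong (the exact wrapper and launcher names depend on your installation, and pingpong.cpp is just a placeholder file name).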


Ring Around the Network

The following program sends a message around a ring. The message will start at one of the processes (we will pick the rank 0 process) and then proceed from one process to another, eventually ending up back at the process that originally sent the data. The figure below illustrates the communication pattern: each process is drawn as a circle, and the arrows indicate the transmission of a message.

#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc,char **argv)


{
int MyId, P;
int x;

MPI::Init(argc,argv);

P = MPI::COMM_WORLD.Get_size();
MyId = MPI::COMM_WORLD.Get_rank();

if ( MyId == 0)
{
cout << "enter a number >> ";
cin>> x;
MPI::COMM_WORLD.Send(&x, 1, MPI::INT, MyId+1, 0);
MPI::COMM_WORLD.Recv(&x, 1, MPI::INT, P-1, 0);
cout<<"Message Recieved, value = "<< x;
cout<<"\n All done!!";
}
else
{
MPI::COMM_WORLD.Recv(&x, 1, MPI::INT, MyId-1, 0);
cout<<"This is node no. "<<MyId<< ". The message was
recieved"<<endl;
MPI::COMM_WORLD.Send(&x, 1, MPI::INT, (MyId+1)%P, 0);

MPI::Finalize();

return 0;
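
Note the ordering of the blocking calls: process 0 sends first and then waits for the message to return, while every other process receives first and only then forwards the value, so at each step exactly one send is matched by a process that is ready to receive and the blocking calls cannot deadlock. The program also assumes at least two processes; with a single process, rank 0 would try to send to rank 1, which does not exist.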


Parallel Computations

The following program must be run on at least 4 processes. The idea is to have process 1 generate some integers and send them to process 3, which will use some of those values to generate real numbers that are then sent back to process 1. The instructions are as follows:

• Process 1 computes the squares of the first 200 integers.
• It sends this data to process 3.
• Process 3 should divide the integers that lie between 20 and 119 by 53, getting real results, and pass this data back to process 1.

#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc,char **argv)


{
int count;
int count2;
int dest;
int i;
int i_buffer[200];
int num_procs;
float r_buffer[200];
int rank;
int source;
MPI::Status status;
int tag;
//
MPI::Init ( argc, argv );

// Determine this process's rank.


num_procs = MPI::COMM_WORLD.Get_size ( );
rank = MPI::COMM_WORLD.Get_rank ( );

// Have Process 0 say hello.


if ( rank == 0 )
{
cout << " An MPI example program.\n";
cout << " The number of processes available is " << num_procs << "\n";
}
// If we don't have at least 4 processes, then bail out now.
if ( num_procs < 4 )
{
cout << " Not enough processes for this task!\n";
cout << " Bailing out now!\n";
MPI::Finalize ( );
return 1;
}

    // Process 1 knows that it will generate 200 integers, and may receive no
    // more than 200 reals.
    if (rank == 1)
    {
        count = 200;
        for (i = 0; i < count; i++)
            i_buffer[i] = i * i;

        // Send the 200 squares to process 3 with tag 1.
        dest = 3;
        tag = 1;
        MPI::COMM_WORLD.Send(i_buffer, count, MPI::INT, dest, tag);
        cout << "P:" << rank << " sent " << count
             << " integers to process " << dest << ".\n";

        // Receive the real values back from process 3 with tag 2.
        source = 3;
        tag = 2;
        MPI::COMM_WORLD.Recv(r_buffer, 200, MPI::FLOAT, source, tag, status);
        cout << "\nP:" << rank << " received real values from process 3.\n";

        // The status object reports how many values actually arrived.
        count = status.Get_count(MPI::FLOAT);
        cout << "P:" << rank << " Number of real values received is "
             << count << ".\n";
        cout << "P:" << rank << " First 3 values = "
             << r_buffer[0] << " "
             << r_buffer[1] << " "
             << r_buffer[2] << "\n";
    }
    // Process 3 receives the integer data from process 1, selects some of the
    // data, does a real computation on it, and sends that part back to process 1.
    else if (rank == 3)
    {
        source = 1;
        tag = 1;
        MPI::COMM_WORLD.Recv(i_buffer, 200, MPI::INT, source, tag, status);

        cout << "\n P:" << rank << " received integer values from process 1.\n";
        count = status.Get_count(MPI::INT);
        cout << "P:" << rank << " - Number of integers received is "
             << count << ".\n";
        cout << "P:" << rank << " First 3 values = "
             << i_buffer[0] << " "
             << i_buffer[1] << " "
             << i_buffer[2] << "\n";

        // Divide every received value between 20 and 119 by 53, producing a
        // real (floating-point) result rather than a truncated integer one.
        count2 = 0;
        for (i = 0; i < count; i++)
            if (20 <= i_buffer[i] && i_buffer[i] <= 119)
            {
                r_buffer[count2] = (float) i_buffer[i] / 53;
                count2 = count2 + 1;
            }

        // Send the computed reals back to process 1 with tag 2.
        dest = 1;
        tag = 2;
        MPI::COMM_WORLD.Send(r_buffer, count2, MPI::FLOAT, dest, tag);
        cout << "P:" << rank << " sent " << count2 << " reals to process "
             << dest << ".\n";
    }
    else
    {
        cout << "\n";
        cout << "P:" << rank << " - MPI has no work for me!\n";
    }

    MPI::Finalize();

    return 0;
}
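
The receive in process 1 above posts a buffer sized for the maximum of 200 reals and only afterwards asks the Status object how many values actually arrived. When no such upper bound is known in advance, a blocking Probe can be used to inspect the pending message before receiving it. The following is a minimal sketch of that idea using the same C++ bindings; the ranks, tag value, and sample data are our own choices for the illustration.

#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    MPI::Init(argc, argv);

    int rank = MPI::COMM_WORLD.Get_rank();
    int num_procs = MPI::COMM_WORLD.Get_size();

    if (num_procs >= 2)
    {
        if (rank == 0)
        {
            float data[5] = { 1.5f, 2.5f, 3.5f, 4.5f, 5.5f };
            MPI::COMM_WORLD.Send(data, 5, MPI::FLOAT, 1, 2);
        }
        else if (rank == 1)
        {
            MPI::Status status;

            // Block until a message with tag 2 from rank 0 is pending,
            // without actually receiving it yet.
            MPI::COMM_WORLD.Probe(0, 2, status);

            // Ask the status how many floats the pending message holds.
            int incoming = status.Get_count(MPI::FLOAT);

            // Now the buffer can be sized exactly before the real receive.
            float *buffer = new float[incoming];
            MPI::COMM_WORLD.Recv(buffer, incoming, MPI::FLOAT, 0, 2, status);

            cout << "Received " << incoming << " floats; first = " << buffer[0] << endl;
            delete [] buffer;
        }
    }

    MPI::Finalize();
    return 0;
}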

Assignment:

Q1: Chinese Whisper


Modify the ring program so that it accepts a string from the user and sends it around to all of the other processes. That is, process i should receive the data and send it to process i+1, until the last process is reached. Whenever a process hears the word (i.e., receives the message), it modifies one of its letters and sends the result on to the next node. The last node will eventually display the word as it was heard from its predecessor!
