
Shared memory multiprogramming

Our study is carried out on SMP, symmetric multiprocessors. It is not necessary to have a physical multiprocessor system: in a Unix environment we can spawn several processes, and these processes can be used to simulate multiprocessor programming.

Parallelism in Unix environment


Unix systems provide primitives such as semaphores and shared memory. In multitasking systems such as Unix, the number of processors and the number of processes need not be equal. Most Unix systems allow one to fork processes irrespective of the number of processors available; the processes are scheduled on all available processors.


Two important constraints


Any process may wait for an arbitrary amount of time between two instructions. Another constraint is that instructions cannot be expected to execute atomically at the programming-language level: a simple statement like a = a + 1 is a sequence of four instructions at the machine-language level, as sketched below.
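As an illustration (a sketch only; the exact instruction sequence and count depend on the compiler and machine, and the register is shown here as a C variable), a = a + 1 expands into separate load, add and store steps, and a process switch may occur between any two of them:

/* Sketch of the machine-level expansion of a = a + 1.
   "reg" stands in for a CPU register; the step count is illustrative. */
int a = 0;                 /* shared variable */

void increment(void)
{
    int reg;
    reg = a;               /* 1. load a into a register          */
    reg = reg + 1;         /* 2. add 1 to the register           */
    a = reg;               /* 3. store the register back into a  */
    /* another process running between any two of these steps may read or
       overwrite a, so one of two concurrent increments can be lost */
}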

Process creation and process destruction


To achieve parallelism, we would like to create processes as per the problem requirements. Similarly, we would like to destroy these processes when the parallel processing is complete, so that system resources are not wasted and there is no interference with the sequential processing.

Process creation
The primitive for creating processes is: id = create_process(n); After this call n processes are present in the system, i.e. n-1 fresh processes are created. id is different for different processes and takes integer values from 0 to n-1; the parent gets id = 0.

int create_process(int n)
{
    int j;
    for (j = 1; j <= n - 1; j++) {   /* fork n-1 child processes   */
        if (fork() == 0)
            return j;                /* child j returns its own id */
    }
    return 0;                        /* the parent keeps id 0      */
}

Process destruction
We use the following primitive to merge processes: join_process(n, id); n is the number of processes in the system and id is the return value from the create_process() call. It merges all the created processes; only the parent process (the one with id = 0) survives this call.

void join_process(int n, int id)
{
    int i;
    if (id == 0) {                   /* parent waits for all children */
        for (i = 1; i <= n - 1; i++)
            wait(0);
    } else {
        exit(0);                     /* each child terminates here    */
    }
}
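A minimal usage sketch of these two primitives, assuming the definitions above (the headers and the main() body are illustrative additions, not part of the original routines):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* create_process() and join_process() defined as above */

int main(void)
{
    int n = 4;
    int id = create_process(n);     /* after this, n processes are running */
    printf("hello from process %d\n", id);
    join_process(n, id);            /* children exit; only id 0 survives   */
    printf("only the parent (id %d) reaches this point\n", id);
    return 0;
}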

Shared memory allocation


Allocation of shared memory is done using: addr = shared(size, &id); size is the amount of shared memory required in bytes, and id is the unique identifier for the shared memory segment allocated. This uses the Unix shared memory (shm) system calls.

char *shared(int size, int *shmid)   /* needs <sys/ipc.h> and <sys/shm.h> */
{
    *shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | SHM_R | SHM_W);
    if (*shmid < 0) {
        printf("memory couldn't be allocated\n");
        exit(1);
    }
    return (char *) shmat(*shmid, 0, 0);   /* attach the segment and return its address */
}

Typically shared memory is allocated from a system pool and it survives even after the process that created it terminates. This is unlike memory obtained with malloc, which is released when the process terminates. Therefore it is the responsibility of the programmer to release such shared memory when it is no longer needed.

Remove the allocated shared memory

void free_shm(int id)
{
    if (shmctl(id, IPC_RMID, 0) != 0) {    /* mark the segment for removal */
        printf("could not remove the shared memory\n");
        exit(1);
    }
}
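A short usage sketch of shared() and free_shm(), assuming the definitions above (the headers and the example variable are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* shared() and free_shm() defined as above */

int main(void)
{
    int shmid;
    int *sum = (int *) shared(sizeof(int), &shmid);  /* allocate a shared int   */
    *sum = 0;                                        /* visible to all children */
    /* ... create processes that read and update *sum ... */
    free_shm(shmid);                                 /* release the segment     */
    return 0;
}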

Basic concepts of concurrency


An OS consists of two kinds of processes: those that execute system code and those that execute user code. Concurrency refers to the parallel execution of programs. Concurrent processing is the basis of an OS that supports multiprogramming.

Basic concepts of synchronisation


In order to execute concurrent processes, the processes must communicate and synchronise. Interprocess communication is based on the use of shared variables or message passing. To synchronise, a set of constraints is laid on the execution order of the processes.

Mutual exclusion
Processes frequently need to communicate with other processes. Processes that work together often share some common storage that each can read and write. The shared storage may be in main memory or it may be a shared file. Each process has a segment of code, called its critical section, which accesses the shared memory or file. The key issue is to prevent more than one process from accessing the shared data at the same time. Mutual exclusion is therefore some way of ensuring that while one process is using the shared data, the other processes are kept out of their critical sections.

Mutual exclusion algorithm


Step 1:  parent process
Step 2:  boolean p1busy = false, p2busy = false; initiate p1, p2

Process p1:
Step 3:  while p2busy = true
Step 4:      do testing of critical-section
Step 5:  if (critical-section = free) then
Step 6:      allocate critical-section to p1
Step 7:      p2busy = false, p1busy = true
Step 8:  else do_other_p1_processing
Step 9:  end if
Step 10: end while
Step 11: allocate critical-section to p1

Process p2:
Step 12: while p1busy = true
Step 13:     do testing of critical-section
Step 14: if (critical-section = free) then
Step 15:     allocate critical-section to p2
Step 16:     p1busy = false, p2busy = true
Step 17: else do_other_p2_processing
Step 18: end if
Step 19: end while
Step 20: allocate critical-section to p2

Steps for mutual exclusion


P1 tests p2busy to determine what to do next. When it finds p2busy to be false, process p1 may safely proceed to the critical section and sets p1busy = true. Process p2 then finds p1busy to be true and stays away from the critical section.
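A minimal C sketch of this two-flag scheme for process p1 (process p2 is symmetric, with the roles of the flags exchanged). The flags are assumed to live in shared memory, e.g. allocated with shared(); the sketch simply transcribes the steps above and inherits their limitations, which is why the semaphore is introduced next:

volatile int *p1busy, *p2busy;          /* shared flags, both initialised to 0 */

void do_other_p1_processing(void) { /* placeholder for non-critical work */ }

void p1_use_critical_section(void)
{
    while (*p2busy)                     /* p2 is busy: keep testing      */
        do_other_p1_processing();
    *p1busy = 1;                        /* claim the critical section    */
    /* ... critical section of p1 ... */
    *p1busy = 0;                        /* release it for p2             */
}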

Semaphores
The mutual exclusion algorithm above works only for two processes. To overcome this problem, a synchronisation tool called the semaphore gained wide acceptance. A semaphore is a variable which takes non-negative integer values and may be accessed and manipulated only through two primitive operations: wait and signal.

The two primitive operations


Wait(s): while s <= 0 do {keep testing}; s = s - 1
The wait operation decrements the value of the semaphore variable as soon as it becomes positive.

Signal(s): s = s + 1
The signal operation increments the value of the semaphore. A busy-waiting sketch of both operations is given below.
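The following is a minimal busy-waiting sketch of the two operations in C, using C11 atomics so that the test and the decrement happen atomically. It is illustrative only; Unix semaphores are actually obtained through system calls (the semget/semop family), and a waiting process sleeps instead of spinning:

#include <stdatomic.h>

typedef struct { atomic_int value; } sem_sketch;

void wait_op(sem_sketch *s)
{
    for (;;) {
        int v = atomic_load(&s->value);
        /* keep testing while s <= 0; decrement as soon as it is positive */
        if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;
    }
}

void signal_op(sem_sketch *s)
{
    atomic_fetch_add(&s->value, 1);      /* increment the semaphore */
}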

Process
Only one process can modify the semaphore variable at a time. The parent process in the program first initialises the binary semaphore variable bsem to 1, indicating that the resource is available. Once p1 is given permission, it prevents the other processes from accessing the critical section: its wait operation decrements the value of the semaphore variable to 0. When p1 completes execution it releases the critical section and executes the signal operation to make bsem = 1, i.e. available again. The table below traces this for three processes p1, p2 and p3.

Time | P1      | P2      | P3      | bsem (1=free, 0=busy) | Process sharing resource; processes wanting to enter
T1   | _       | _       | _       | 1                     | _ ; _
T2   | Wait    | Wait    | Wait    | 0                     | _ ; p1, p2, p3
T3   | CS      | Waiting | Waiting | 0                     | P1 ; p2, p3
T4   | Signal  | Waiting | Waiting | 1                     | _ ; p2, p3
T5   | Rest_p1 | Waiting | CS      | 0                     | P3 ; p2
T6   | Wait    | Waiting | Signal  | 1                     | _ ; p2, p1
T7   | Wait    | CS      | Rest_p3 | 0                     | P2 ; p1
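In code, each of the three processes follows the same pattern (a sketch: bsem is assumed to be a binary semaphore placed in shared memory and initialised to 1 by the parent, and wait_op/signal_op are the operations sketched earlier):

void process_body(sem_sketch *bsem)
{
    wait_op(bsem);      /* keep testing while bsem = 0, then set it to 0 */
    /* ... critical section: use the shared resource ... */
    signal_op(bsem);    /* set bsem back to 1: resource is free again    */
    /* ... rest: non-critical work (Rest_p1, Rest_p3, ...) ...           */
}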

Mutual Exclusion

Suppose we want to add up the n elements of an array using m processors. The elements to be added are divided among the m processors, so each processor operates on n/m elements. Each processor computes its partial sum in a local variable; after it has finished, it updates the global shared variable sum.

case 0:                                   /* parent process  */
    for (i = 0; i < 10; i = i + 5) {
        sum1 = sum1 + a[i];
    }
    printf("Parent Sum : %d\n", sum1);
    *sum = *sum + sum1;                   /* update global shared sum */
    break;

case 1:                                   /* child process 1 */
    for (i = 1; i < 10; i = i + 5) {
        sum2 = sum2 + a[i];
    }
    printf("child Process1 sum : %d\n", sum2);
    *sum = *sum + sum2;                   /* update global shared sum */
    break;

Problems in this method

This method cannot be expected to work correctly always: the updates may not always be correctly reflected in the variable sum. Suppose p1 reads the value of sum, adds sum1 to that current value and is about to store the result. But before p1 can write it back, p2 may have read the old value and added sum2 to that old value. In this case only one processor's local sum will be reflected in the global sum; the other update is lost.

Create and initialise lock

lock_init(lockid): lockid is a variable of type pointer to an integer, and the routine assigns a value to it. The variable lockid must be declared as shared, and adequate memory must be allocated for it. The lock is initialised to the open state.

Lock the lock

This routine attempts to acquire the created lock, which may be open or busy. If the lock is already locked, the calling process is made to wait till the lock is unlocked. If there is more than one process waiting for a lock, there is no particular order in which they will be granted it.

unlock

This routine unlocks the lock. It also wakes up any process waiting for the lock to open.

Delete semaphore

This deletes the allotted semaphore and should be called before the program terminates. Semaphores, like shared memory segments, remain active even when the user logs out of the system, so they must be released explicitly. A sketch of how these lock routines protect the shared sum update follows.
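The notes name lock_init(lockid) explicitly but do not show the signatures of the lock and unlock routines, so the sketch below assumes they take the same lockid; adapt the calls to the actual library. It shows how the shared sum update from the earlier example can be protected:

/* Assumed prototypes for the lock library described above */
void lock_init(int *lockid);
void lock(int *lockid);
void unlock(int *lockid);

int *lockid;                       /* must itself be allocated in shared memory */

void add_partial_sum(int *sum, int local_sum)
{
    lock(lockid);                  /* at most one process gets past this point  */
    *sum = *sum + local_sum;       /* the read-modify-write is now safe         */
    unlock(lockid);                /* wakes up any process waiting for the lock */
}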
