
ADAPTIVE DYNAMIC PREEMPTIVE SCHEDULING MECHANISM FOR P2P COMPUTING SYSTEM


Ashwathy M.G 1, Arul Xavier V. M 2

1 Post-Graduate Student, Department of Computer Science and Engineering, Karunya University, India.
2 Assistant Professor, Department of Computer Science and Engineering, Karunya University, India.
__________________________________________________________________
Abstract

P2P computing is a distributed computing paradigm that uses the Internet to connect thousands of users into a single large virtual computer based on the sharing of computational resources. P2P aims to maximize the utilization of computing resources by making them shareable across applications. In P2P computing, job scheduling is an important and challenging task due to the heterogeneous nature of the system. In a dynamic and heterogeneous time-sharing system, it becomes important to minimize the average response time and maximize the throughput for the users. In such systems, a cooperative scheduling mechanism was previously implemented that works in a multiple-processor environment. It employs mono-task execution, which results in large response and execution times, may cause processes with short processor bursts to wait for a long time, and is not suitable for time-sharing systems. This work proposes a dynamic preemptive scheduling algorithm that performs multitasking on a multiple-processor system and thereby minimizes the average response time and execution time. Consequently, throughput and CPU utilization improve. Experimental results obtained via simulation show that our scheduling scheme outperforms the existing approach.

Keywords: Grid computing, P2P computing, cooperative scheduling, resource utilization
_____________________________________________________________________________________________

1. Introduction
Grid computing consists of geographically distributed heterogeneous resources. A Grid is a dynamic system in which resources may join and leave at any time. This dynamic and heterogeneous nature of the Grid makes it difficult to schedule resources, which results in poor resource utilization and management. Scheduling in Grid computing is the process of allocating jobs to available resources. Several techniques have been developed to perform proper scheduling of jobs to available resources, but proper resource utilization is still hard to achieve. When resources are allocated to jobs, the resources with the best performance may be assigned to jobs repeatedly, overloading those resources and reducing the overall utilization of the system's resources. This can lead to reduced overall system performance. P2P computing systems harvest idle CPU cycles from a large number of computers connected through the Internet and utilize their computational resources to execute large parallel distributed applications. Such systems require an efficient scheduling mechanism by which tasks can be assigned to the heterogeneous computing resources. However, since a scheduling mechanism with many resources in a decentralized environment requires long processing times, non-preemptive scheduling methods seem inefficient. The development of incentive techniques to encourage cooperation and resource sharing among participants is considered important for improving system performance. Cooperation is highly valued, and the incentive mechanism discourages peers from cheating by offering values lower than their respective costs.
In a dynamic, interactive and time-sharing system, it becomes important to minimize the average response time and maximize the throughput for the users. Typically, grids employ a first-come, first-served method of executing jobs, which results in large response and execution times. It may cause processes with short processor bursts to wait for a long time, and it is not suitable for time-sharing systems. In this paper, we propose a preemptive scheduling algorithm with a dynamic time quantum. Our approach is based on calculating the time quantum twice in a single round-robin cycle. It minimizes the average response time and execution time, thereby maximizing throughput and CPU utilization. This paper is organized as follows. Section 2 discusses related work; Section 3 explains the system model; Section 4 explains the scheduling mechanism; Section 5 explains the dynamic preemptive scheduling strategy; and Section 6 gives the experimental analysis. Finally, the conclusion is presented in Section 7, followed by the references.
2. Related Work
Andrade et al. [11] present an incentive mechanism called the Network of Favors to assemble a large grid, which makes it in each participating peer's interest to contribute its spare resources. In [13], the authors present an architecture for sharing computing resources in peer-to-peer networks, called CompuP2P. It creates dynamic markets of network-accessible mutable resources in a completely distributed, scalable, and fault-tolerant manner. The authors of [12] propose a technique that uses the maximum and minimum burst times of the processes in the ready queue to calculate a modified time quantum. The authors of [14] calculate the mean of the burst times of all processes, find the difference between this mean and the burst time of each particular process, and allocate the CPU to the process with the maximum difference. Mixed Scheduling (A New Scheduling Policy) [15] uses a job-mix order for the non-preemptive scheduling policies FCFS and SJF: from a list of N processes, the process that needs the minimum CPU time is executed first, then the one that needs the most, and so on until the nth process. In the previous work [8], a cooperative scheduling mechanism was implemented that employs mono-task execution in a multiple-processor environment.
3. System Model
In the P2P computing architecture, like the one shown in Fig. 1, peers are grouped into areas controlled by a super-peer, also called the manager peer. The manager peers are interconnected by means of an overlay network. Peers are the smallest autonomous units with computational resources, mainly CPU, memory and secondary memory. The low-level scheduling [9] is performed by each manager at the area level, normally a local area network. We assume that the information required for local assignments is not very large and that the manager has accurate information about its area's peers. Furthermore, as an area is usually small in size, the low-level scheduler can know the state of all peer resources and perform a near-optimal task assignment.
At the upper level, managers deal with more scheduling information, although it is less accurate. At this point, each manager has a distributed scheduler for assigning computing tasks among areas. Thus, global scheduling involving all the computing resources is achieved by collaboration among the distributed schedulers in the individual areas.
Fig.1 System Model
Peers can take on three different roles: manager, worker and master [10]. The function of a manager (M) is to govern and control the state (computational power and workload) of the peers in the same logical area and to schedule tasks among the workers. A worker (W) is responsible for executing the tasks scheduled by its manager. Finally, a master (MS) is any peer that submits parallel jobs to the system.
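Purely as an illustration (the paper does not prescribe any concrete data structures), these roles and the area grouping could be modelled roughly as follows in Python; the class and field names are hypothetical:

from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Role(Enum):
    MANAGER = "M"   # governs an area and schedules tasks among its workers
    WORKER = "W"    # executes tasks assigned by its manager
    MASTER = "MS"   # submits parallel jobs to the system


@dataclass
class Peer:
    peer_id: int
    role: Role
    cpu_power: float       # computational power (arbitrary units)
    workload: float = 0.0  # current workload, tracked by the area's manager


@dataclass
class Area:
    manager: Peer                       # the super-peer controlling this area
    peers: List[Peer] = field(default_factory=list)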
4. Scheduling Mechanism
The scheduling mechanism [8][9] has a two-level topology. At the low level (LS), we deal with the problem of mapping groups of tasks onto a set of computational resources; we are mainly concerned with distributing tasks globally and balancing them across all the areas. The low-level scheduler has a non-negative credit incentive scheme. The allocation of job tasks, performed by a super-peer, is based on the reverse Vickrey auction strategy, thereby achieving an optimal allocation of resources. This method consists of always choosing the worker (worker1) with the lowest cost (value1) for each job task. The selected peer is rewarded with the second-lowest offered cost (value2), so its profit is this second-lowest cost minus its own cost. Using this strategy, the incentive mechanism discourages peers from cheating by offering values lower than their respective costs: if a peer tries to get selected by lowering its declared value below its real cost, the second-lowest cost it is paid may still be below that real cost, causing a negative profit. Thus cheating is discouraged. Due to scalability issues, we integrate this mechanism with the higher-level scheduler.
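As a minimal sketch of this selection rule, assuming each candidate worker submits a single cost bid per task (the function and variable names below are illustrative, not taken from the paper):

def reverse_vickrey_assign(bids):
    """Pick the worker with the lowest offered cost and reward it with the
    second-lowest cost, as in a reverse Vickrey auction.

    bids: dict mapping worker id -> offered cost for the task.
    Returns (winner, reward) or None if fewer than two bids were received.
    """
    if len(bids) < 2:
        return None
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    _, second_lowest = ranked[1]
    # The winner's profit is (second_lowest - its true cost); bidding below
    # the true cost risks a negative profit, which discourages cheating.
    return winner, second_lowest


# Example: worker "w2" wins and is paid the second-lowest bid.
print(reverse_vickrey_assign({"w1": 5, "w2": 3, "w3": 9}))  # ('w2', 5)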
At the high level (HS), the scheduler located in each manager peer coordinates its own area's peers and the neighboring ones. As the scope of the LSs is limited to their own areas, the HS takes responsibility for distributing the portion of tasks that cannot be assigned by the LS to the outside in a balanced and efficient way. In doing so, the managers of each area use three scheduling criteria: Computing Capacity with Neighbors (CCN), Distance (D) and Reputation (R). These criteria are used to assign the tasks to the best resource in the system. Once a manager has obtained all the scheduling criteria (V) from its neighbors, it starts assigning the jobs.
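The paper does not specify how CCN, D and R are combined into the criterion V. Purely as a hypothetical illustration, a manager might rank neighbouring areas with a weighted score such as the following, where the weights and field names are assumptions:

def rank_neighbours(neighbours, w_ccn=0.5, w_dist=0.3, w_rep=0.2):
    """Hypothetical ranking of neighbouring areas by a weighted score V.

    neighbours: list of dicts with keys 'ccn' (computing capacity with
    neighbours, higher is better), 'distance' (lower is better) and
    'reputation' (higher is better). The weighted sum is an assumption,
    not the paper's formula.
    """
    def score(nb):
        return w_ccn * nb["ccn"] - w_dist * nb["distance"] + w_rep * nb["reputation"]
    return sorted(neighbours, key=score, reverse=True)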
5. Dynamic Preemptive Scheduling Strategy
The distributed and cooperative two-level scheduling mechanism includes the dynamic preemptive scheduling algorithm for assigning tasks to the corresponding resources. In a highly heterogeneous and dynamic time-sharing environment, each user is under the illusion that the system is working only for them, which demands a fairly quick response to the user. This algorithm addresses the problem of high average response time and average execution time, thereby improving throughput and overall system performance. A dynamic preemptive strategy is developed to perform multitasking on a multiple-processor system and to minimize the average response time and total job execution time, thereby optimizing system performance with better throughput.
The earlier the shorter processes are removed from the ready queue [7], the better the execution time and response time. So, in our algorithm, the shorter processes are given a larger time quantum so that they can finish their execution earlier. Here, shorter processes are defined as processes having a smaller assumed CPU burst time than the previous process. We slightly extend the time quantum for processes that require only a fraction more time than the allotted time-quantum cycle(s) to complete their execution. In this algorithm, a dynamic time quantum is calculated and repeatedly adjusted according to the shortness component. The calculated time quantum is based on the burst times. The median method [4] is used to calculate the time quantum, as shown in (1).
    Median = y_((n+1)/2)                        if n is odd
           = (y_(n/2) + y_((n/2)+1)) / 2        if n is even                (1)

where y_k denotes the kth burst time when the burst times are arranged in ascending order (so y_((n+1)/2) is the value located in the middle of the group), and
n = number of processes.
This time quantum is then assigned to the processes, and it is recalculated twice in a single round-robin cycle.
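For example, with five processes whose burst times in ascending order are 4, 7, 9, 12 and 20 (illustrative values), n is odd and the middle value 9 is taken as the time quantum; with an even number of processes, the two middle burst times would be averaged.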
Algorithm 1. Dynamic Preemptive Scheduling Algorithm
1. Read the tasks.
2. While (ready queue != null)
   Sort the tasks in ascending order based on burst time (lowest first).
3. Calculate the smart time quantum qt.
4. Assign qt to each ith task/process:
   For each task i = 1 to n
       P[i] ← qt
5. If (burst time < qt)
   Assign the resource to that process.
6. Else if (remaining burst time < qt/2)
   Assign the resource again to that same process until it terminates.
7. Else
   The process occupies the CPU/resource for the time quantum and is added to the ready queue for the next round of execution.
8. End of While
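A minimal Python sketch of Algorithm 1 is given below, assuming independent tasks described only by their burst times and zero context-switch cost. For brevity it simulates a single resource and recomputes the quantum once per round (rather than twice per round-robin cycle as in the paper), so the helper names and loop structure are illustrative simplifications:

def median_quantum(burst_times):
    """Time quantum from the median of the sorted burst times, as in Eq. (1)."""
    y = sorted(burst_times)
    n = len(y)
    if n % 2 == 1:
        return y[n // 2]
    return (y[n // 2 - 1] + y[n // 2]) / 2


def dynamic_preemptive_schedule(bursts):
    """Run the tasks to completion and return their completion order.

    bursts: dict mapping task id -> CPU burst time.
    """
    remaining = dict(bursts)
    finished = []
    while remaining:
        # Steps 2-3: sort by burst time and compute the (smart) time quantum.
        ready = sorted(remaining, key=remaining.get)
        qt = median_quantum(list(remaining.values()))
        for task in ready:
            burst = remaining[task]
            if burst <= qt:
                # Step 5: the task fits within one quantum and terminates.
                del remaining[task]
                finished.append(task)
            elif burst - qt < qt / 2:
                # Step 6: only a fraction would remain after one quantum, so
                # keep the resource until the task terminates.
                del remaining[task]
                finished.append(task)
            else:
                # Step 7: run for one quantum and requeue for the next round.
                remaining[task] = burst - qt
    return finished


print(dynamic_preemptive_schedule({"t1": 4, "t2": 7, "t3": 9, "t4": 12, "t5": 20}))
# -> ['t1', 't2', 't3', 't4', 't5']

In this sketch the shorter tasks leave the queue first, which is the behaviour the algorithm relies on to reduce response time.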

The cooperative scheduling mechanism is composed of the low-level scheduler, which operates within an area, and the high-level scheduler, which manages inter-area information when scheduling tasks. At both levels, the tasks are assigned to the corresponding resources by applying Algorithm 1. Our proposed architecture can always achieve appropriate scheduling irrespective of the number of peers.
6. Experimental Analysis
6.1. Assumptions
All the experiments are performed in a multiprocessor environment in which all tasks are independent. As the system is highly dynamic, attributes such as burst time, number of tasks, number of users and number of resources are not known before submission to the system. Since the cases are assumed to be close to ideal, the context-switching time is taken to be zero, i.e., there is no context-switch overhead incurred in transferring from one job to another.
6.2. Performance Parameters
The criteria include the following:
Response Time: the time from submission until the first response is produced; minimize the response time for interactive users.
Fairness: make sure each process gets a fair share of the CPU.
Execution Time: the total application execution time, measured from the time the first job is sent to the grid until the last job comes out of the grid.
Resource Utilization: keep the CPU busy 100% of the time with useful work.
Throughput: maximize the number of jobs processed per hour.
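For illustration only, these metrics could be computed from per-job timestamps roughly as follows; the record fields (submit, start, finish) and the function name are assumptions:

def compute_metrics(jobs):
    """Compute average response time, makespan and throughput.

    jobs: non-empty list of dicts with 'submit', 'start' and 'finish'
    times, all expressed in the same time unit.
    """
    n = len(jobs)
    # Response time: submission until the first response (here, the start time).
    avg_response = sum(j["start"] - j["submit"] for j in jobs) / n
    # Execution time (makespan): first submission until the last job finishes.
    makespan = max(j["finish"] for j in jobs) - min(j["submit"] for j in jobs)
    # Throughput: jobs completed per unit of time.
    throughput = n / makespan if makespan else float("inf")
    return avg_response, makespan, throughput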
6.3. Experiments Performed
In this section, experiments were conducted to demonstrate the feasibility and good performance of the proposed mechanism. The experiments were performed through simulation using GriSim. Each simulation was carried out assuming 100 to 600 workers randomly distributed among 50 managers (peers are randomly distributed per area). The algorithm works effectively even when used with a very large number of processes. The resource utilization is 100% in all cases, since the context-switching time is zero, as assumed for the ideal case. The graphs in Figs. 2 and 3 show the performance of the proposed scheduling algorithm from a scheduling perspective.


Fig.2 Average Response Time vs. number of GIS for the cooperative mechanism with and without multitasking

Fig.3 Makespan vs. number of GIS for the cooperative mechanism with and without multitasking
7. Conclusion
The scheduling mechanism can be used in many different kinds of shared computing networks, provided these can be sub-grouped into areas controlled by a manager. Area scheduling is performed independently in each area by its manager; task assignment is near-optimal at the low level, and inter-area task scheduling is managed at the high level. The existing system, however, considers mono-task execution (mono-programming): only a single task is allowed to execute at a time. This results in larger response times and degrades the performance of a dynamic, time-sharing system. Multiple-task execution is therefore needed to maximize resource utilization and minimize the response time. This work thus focuses on a preemptive scheduling mechanism developed to perform multiple-task execution, to
minimize the total job execution time and response time, and to optimize the system performance with better
throughput.
References
[1] Al-Azzoni I., Down D.G. (2010), "Dynamic scheduling for heterogeneous desktop grids", Journal of Parallel and Distributed Computing, 70(12), 1231–1240.
[2] Balasubramanian A., Sussman A. (2010), "Decentralized Dynamic Scheduling across Heterogeneous Multi-core Desktop Grids", Journal of Parallel and Distributed Computing, 26(10), 753–768.
[3] Balasubramanian A., Sussman A., Sadeh N. (2010), "Decentralized Preemptive Scheduling across Heterogeneous Multi-core Grid Resources", Journal of Parallel and Distributed Computing, 26(10), 641–670.
[4] Behera S.H., Mohanty R., Sahu S., Bhoi K.S. (2011), "Design and Performance Evaluation of Multi Cyclic Round Robin (MCRR) Algorithm Using Dynamic Time Quantum", Journal of Global Research in Computer Science, ISSN 2229-371X.
[5] Behera S.H., Patel S., Panda B. (2011), "A New Dynamic Round Robin and SRTN Algorithm with Variable Original Time Slice and Intelligent Time Slice for Soft Real Time Systems", International Journal of Computer Applications (0975-8887).
[6] Dhamdhere D.M., Operating Systems: A Concept-Based Approach, Second edition, Tata McGraw-Hill, 2006.
[7] Goel N., Garg R.B. (2013), "An Optimum Multilevel Dynamic Round Robin Scheduling Algorithm", National Conference on Information Communication & Networks.
[8] Rius J., Cores F., Solsona F. (2013), "Cooperative scheduling mechanism for large-scale peer-to-peer computing systems", Journal of Network and Computer Applications, 36, 1620–1631.
[9] Rius J., Estrada S., Cores F., Solsona F. (2012), "Incentive mechanism for scheduling jobs in a peer-to-peer computing system", Simulation Modelling Practice and Theory, 25, 36–55.
[10] Goux J.-P., Kulkarni S., Linderoth J., Yoder M. (2005), "An Enabling Framework for Master-Worker Applications on the Computational Grid", Future Generation Computer Systems, 12, 53–65.
[11] Andrade N., Brasileiro F., Cirne W., Mowbray M., "Automatic grid assembly by promoting collaboration in peer-to-peer grids", Journal of Parallel and Distributed Computing.
[12] Surendra Varma P. (2012), "A Best Possible Time Quantum for Improving Shortest Remaining Burst Round Robin (SRBRR) Algorithm", International Journal of Advanced Research in Computer Science and Software Engineering, 2(11), ISSN 2277-128X, November 2012.
[13] Gupta R., Sekhri V., Somani A., "CompuP2P: An Architecture for Internet Computing Using Peer-to-Peer Networks", IEEE Transactions on Parallel and Distributed Systems.
[14] Kiran R.N.D.S.S., Vinod Babu P., Murali Krishna B.B., "Optimizing CPU Scheduling for Real Time Application Using Mean-Difference Round Robin (MDRR) Algorithm".
[15] Mohan S. (2009), "Mixed Scheduling (A New Scheduling Policy)", Proceedings of Insight'09, 25–26 November 2009.
