
Amdahl's Law

Validity of the single processor approach to achieving large scale computing capabilities

Presented By: Mohinderpartap Salooja

History

Gene Amdahl, chief architect of IBM's first mainframe series and founder of Amdahl Corporation and other companies, observed that there are fairly stringent restrictions on how much speed-up one can get for a given parallelized task. These observations were formalized as Amdahl's Law.

INTRODUCTION

If F is the fraction of a calculation that is sequential, and (1-F) is the fraction that can be parallelized, then the maximum speed-up that can be achieved by using P processors is 1/(F+(1-F)/P).
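As a minimal sketch, the formula translates directly into a small function; the name amdahl_speedup is ours, and F and P follow the definitions above:

```python
def amdahl_speedup(F, P):
    """Maximum theoretical speed-up when a fraction F of the work is
    sequential and the remaining (1 - F) is spread over P processors."""
    return 1.0 / (F + (1.0 - F) / P)
```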

Examples

If 90% of a calculation can be parallelized (i.e. 10% is sequential), then the maximum speed-up that can be achieved on 5 processors is 1/(0.1+(1-0.1)/5), or roughly 3.6 (i.e. the program can theoretically run 3.6 times faster on five processors than on one).


If 90% of a calculation can be parallelized, then the maximum speed-up on 10 processors is 1/(0.1+(1-0.1)/10), or about 5.3 (i.e. investing twice as much hardware speeds the calculation up by only about 50%). If 90% of a calculation can be parallelized, then the maximum speed-up on 20 processors is 1/(0.1+(1-0.1)/20), or about 6.9 (i.e. doubling the hardware again speeds up the calculation by only about 30%).


If 90% of a calculation can be parallelized, then the maximum speed-up on 1000 processors is 1/(0.1+(1-0.1)/1000), or about 9.9. In other words, throwing an absurd amount of hardware at the calculation yields a maximum theoretical speed-up of only 9.9 over a single processor, and actual results will be worse.
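All three examples can be reproduced directly from the formula; a small illustrative script (the printed values match the figures quoted above):

```python
F = 0.1  # sequential fraction used in all of the examples above
for P in (5, 10, 20, 1000):
    speedup = 1.0 / (F + (1.0 - F) / P)  # Amdahl's Law
    print(f"P = {P:4d}: maximum speed-up = {speedup:.1f}")
# P =    5: maximum speed-up = 3.6
# P =   10: maximum speed-up = 5.3
# P =   20: maximum speed-up = 6.9
# P = 1000: maximum speed-up = 9.9
```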

Essence
The point Amdahl was trying to make was that using lots of parallel processors is not a viable way of achieving the sort of speed-ups people were looking for; it was essentially an argument in support of investing effort in making single-processor systems run faster.

Generalizations

The performance of any system is constrained by the speed or capacity of its slowest point. The impact of an effort to improve the performance of a program is primarily constrained by the amount of time the program spends in the parts not targeted by the effort.

MAXIMUM THEORETICAL SPEED-UP

Amdahl's Law is a statement of the maximum theoretical speed-up you can ever hope to achieve. The actual speed-ups are always less than the speed-up predicted by Amdahl's Law.

Why actual speed ups are always less ?

Distributing work to the parallel processors and collecting the results back together is extra work required in the parallel version that isn't needed in the serial version. There is also the straggler problem: while the program is executing the parallel parts of the code, it is unlikely that all of the processors will be computing all of the time, as some of them will run out of work before others have finished their share of the parallel work.
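As a rough sketch of these two effects, one can extend the formula with a fixed coordination overhead and a straggler factor; both parameters here are invented for illustration, not measured values:

```python
def effective_speedup(F, P, overhead=0.02, straggler=1.1):
    # overhead:  fraction of the serial run time spent distributing work
    #            and collecting results (an assumed, illustrative value)
    # straggler: the slowest processor takes this many times its ideal
    #            share of the parallel work (also an assumption)
    slowest_parallel_time = (1.0 - F) / P * straggler
    return 1.0 / (F + slowest_parallel_time + overhead)

for P in (5, 10, 20, 1000):
    bound = 1.0 / (0.1 + 0.9 / P)       # Amdahl's theoretical maximum
    actual = effective_speedup(0.1, P)  # modelled "real" speed-up
    print(f"P = {P:4d}: bound {bound:.1f}, modelled actual {actual:.1f}")
```

However the parameters are chosen, the modelled speed-up always sits below Amdahl's bound, which is the point of this slide.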

Bottom Line
Anyone intending to write parallel software simply MUST have a deep understanding of Amdahl's Law if they are to avoid unrealistic expectations of what parallelizing a program or algorithm can achieve, and to avoid underestimating the effort required to meet their performance expectations.

Questions?
