Who am I?
Syed Hasibur Rahman (Ananno)
M.Sc. 2008 Department of Physics
Thesis: Setup a prototype PC Cluster for High Performance Computing
And now?
Sr. Software Engineer, Research and Development, Brac IT Services Ltd.
Good-quality computers, usually servers, interconnected via a high-speed network, working together on one particular job in parallel and delivering a very high number of calculations per unit time
Software:
A computer program that allows a particular task to be split into parts in such a way that each part can be assigned to one of the several different processing units available in a distributed system, i.e. an HPC facility
Much bigger memory
E.g.: 1024 × 1024 × 1024 × 4 bytes (32-bit integer) = 4,294,967,296 bytes = 4 GB
E.g.: 1024 × 1024 × 1024 × 8 bytes (64-bit double) = 8,589,934,592 bytes = 8 GB
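The arithmetic above can be checked in a couple of lines (assuming a 1024³ grid with 4-byte integers and 8-byte doubles):

```python
# Memory needed to hold a 1024^3 grid of values in RAM.
n = 1024 ** 3                 # number of grid points (2**30)
int_bytes = n * 4             # 32-bit integers: 4 bytes each
dbl_bytes = n * 8             # 64-bit doubles: 8 bytes each
print(int_bytes, int_bytes / 2 ** 30)   # 4294967296 -> 4.0 GiB
print(dbl_bytes, dbl_bytes / 2 ** 30)   # 8589934592 -> 8.0 GiB
```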
Very big in size: thousands of sq. ft. of floor space
Power hungry: hundreds of kilowatts / several megawatts of electricity
Each PC: ~400 W, a few gigaflops, a few GB of RAM; ×1000 = teraflop scale at ~1 MW of power
Each server: ~600 W, 7–12 gigaflops, 8–128 GB of RAM; ×100 = teraflop scale at ~30 kW
Types of HPC
Supercomputers: one very large machine
Clusters: hundreds/thousands of relatively small machines grouped together with high-speed interconnectivity; hundreds/thousands of nodes, each node with several CPUs, each CPU with multiple processing cores and a few GB of RAM per core
Grid: several clusters/supercomputers, geographically distributed, grouped together to share resources
Cloud (IaaS, Infrastructure as a Service): virtual clusters consisting of several machines
Generating large quantities of random numbers; systems involving large numbers of particles; genome decoding; nano-magnetism
Parametric calculations; analyzing very large data; analyzing huge numbers of relatively small data sets
LHC
SETI
Stock market analysis, cyber forensics
Astrophysics
Bioinformatics, aerodynamics
Why simulation is preferred before actual experiments:
No high-tech, expensive lab setup is needed (such setups are pretty expensive)
Better visualization
But: an HPC facility might be required, computation time depends on the number of independent variables, and programming skills are needed
Supercomputers are very expensive and strictly task-oriented machines
Clusters can be built even from desktop PCs, which may not run continuously for very long periods
No special hardware: commodity hardware
Can be scaled very easily
Relatively less power hungry
Can contribute to existing Grid / Cloud technology
Can be connected via fibre-optics within the country, using govt. infrastructure, to establish a grid network
No very sophisticated support systems (cooling, power, etc.) are required
To ask the questions WHAT, WHY, WHEN, WHERE and, most importantly, HOW
Linux, of course: open source, very powerful and, most importantly, free
Common sense
Patience for bug-fixing
Parallel computation:
SIMD: Single Instruction, Multiple Data (data-parallel calculations)
MISD: Multiple Instruction, Single Data (recursive calculations)
MIMD: Multiple Instruction, Multiple Data (both task parallel and data parallel)
Data parallel: the same instruction applied to multiple data, e.g. the tires at a pit stop
Instruction parallel: multiple instructions, each applied to the same data, e.g. the crews at a pit stop
Multicore: embarrassingly parallel / SIMD
Today's fastest supercomputer (Top500, 6/2012): 16.32 petaflops
Thank You
Q &amp; A