
Implementation of Parallelization and Nano Simulation using Multi-Scale Modeling on various HPC setups

Rohit Pathak
Satyadhar Joshi
Satyadhar_joshi@yahoo.com
xrohit@hotmail.com

(Research paper available at IEEE Xplore; kindly cite the PPT as per the IEEE copyright.)
Introduction & The Importance of Work
• The importance of HPC and multi-scale modeling is evident in the current era.
• Various models for HPC have been proposed, but little has been said about the different models available for an HPC-based setup for multi-scale modeling. This is the problem we have addressed.
• Multi-scale modeling poses challenges that can be met by choosing the appropriate environment for HPC. The cost of HPC setups also varies, so it is important to select components according to our needs.
Distribution of the various computations that need to be performed in an HPC setup
PART OF THE MPI.NET IMPLEMENTATION ON WCCS

using (new MPI.Environment(ref args))
{
    Communicator comm = Communicator.world;
    Console.WriteLine("Check from process " + comm.Rank);
    switch (comm.Rank)
    {
        case 0: /* Continuum Theory - Navier-Stokes */
            initContinuumTheory();
            break;
        case 1: /* Kinetic Theory - Boltzmann */
            initKineticTheory();
            break;
        case 2: /* Molecular Dynamics - Newton */
            initMolecularDynamics();
            break;
        case 3: /* Quantum Mechanics - Schrodinger */
            initQuantumMechanics();
            break;
        default:
            Console.WriteLine("Process " + comm.Rank + " status: Idle");
            break;
    }
    /* All processes join here */
    comm.Barrier();
    /* All processes completed */
    if (comm.Rank == 0) { Console.WriteLine("All processes finished"); }
} /* End of MPI.Environment block */
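The rank switch above assigns each MPI process one modeling scale, from continuum down to quantum. As an illustrative sketch only (not part of the paper's implementation), the same rank-to-scale dispatch can be written in plain Python with a lookup table; the init_* functions here are hypothetical stand-ins for the real solvers:

```python
# Sketch of the rank-to-scale dispatch used in the MPI.NET snippet above.
# The init_* functions are hypothetical placeholders, not real solvers.

def init_continuum_theory():   return "Continuum Theory - Navier-Stokes"
def init_kinetic_theory():     return "Kinetic Theory - Boltzmann"
def init_molecular_dynamics(): return "Molecular Dynamics - Newton"
def init_quantum_mechanics():  return "Quantum Mechanics - Schrodinger"

# Each rank runs exactly one scale; any extra ranks stay idle,
# mirroring the default branch of the C# switch.
DISPATCH = {
    0: init_continuum_theory,
    1: init_kinetic_theory,
    2: init_molecular_dynamics,
    3: init_quantum_mechanics,
}

def run_scale(rank):
    """Return the model started by the process with this rank."""
    handler = DISPATCH.get(rank)
    return handler() if handler else "Process %d status: Idle" % rank
```

In the real MPI program each process calls this once with its own rank, then all processes meet at the barrier.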


Compilation in PVM
• Spawning the master process through PVM
• Executing the masterSimulation
• Configuring the virtual machine
• Compiling the files
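In practice these steps run from the shell and the PVM console. A rough sketch of the sequence in its logical order (compile, configure, spawn, execute); the command spellings such as `aimk` and the console `add`/`spawn` follow common PVM usage and are assumptions here, not commands taken from the paper:

```python
# Ordered sketch of the PVM workflow from the slide above.
# Command strings follow common PVM usage (aimk, pvm console add/spawn)
# and are assumptions, not the paper's exact commands.

def pvm_workflow(binary="masterSimulation", hosts=("node1", "node2")):
    """Return the shell/console command sequence for the PVM steps."""
    return [
        ("shell",   "aimk %s" % binary),            # compile the files
        ("console", "add " + " ".join(hosts)),      # configure the virtual machine
        ("console", "spawn -> %s" % binary),        # spawn the master process
        ("note",    "%s now runs the simulation" % binary),  # execute masterSimulation
    ]
```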
MATLAB IMPLEMENTATION

function [] = pvmpi_matlab_distcomp1()
    jm = findResource('scheduler', 'type', 'jobmanager', ...
        'Name', 'jm', 'LookupURL', 'localhost');
    job = createJob(jm);
    set(job, 'FileDependencies', ...
        {'quantum.m', 'molecular.m', 'kinetic.m', 'continuum.m'});
    createTask(job, @continuum, 1, {});
    createTask(job, @kinetic, 1, {});
    createTask(job, @quantum, 1, {});
    createTask(job, @molecular, 1, {});
    submit(job);
    waitForState(job, 'finished', 60);
    out = getAllOutputArguments(job);
end
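The MATLAB script follows a create-tasks / submit / wait / collect pattern. Outside MATLAB, the same pattern can be sketched with Python's standard concurrent.futures; the four model functions below are trivial stand-ins for quantum.m, molecular.m, kinetic.m, and continuum.m, not real solvers:

```python
# Sketch of the MATLAB create/submit/collect pattern using only the
# Python standard library. The model functions are placeholders.
from concurrent.futures import ThreadPoolExecutor

def continuum(): return "continuum done"
def kinetic():   return "kinetic done"
def quantum():   return "quantum done"
def molecular(): return "molecular done"

def run_job(tasks, timeout=60):
    """Submit each task, wait for completion, and gather all outputs."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(t) for t in tasks]      # createTask(...)
        # waitForState + getAllOutputArguments, rolled into one:
        return [f.result(timeout=timeout) for f in futures]

outputs = run_job([continuum, kinetic, quantum, molecular])
```

As in the MATLAB version, results come back in task-submission order once every task has finished.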


Overall System Costing vs. System Resource Requirement
• Overall System Costing vs. System Resource Requirement
• Value for Money vs. Reliability
• Reliability vs. Overall System Costing
• Complexity vs. Value
Various Factors
• Setup Complexities
• System Requirements
• Efficiency and Resource Requirement
• Performance Analysis
• Access to Low-Level Customizations
• Reliability and Node Failure Mechanism
• Ease of Use
• Arithmetic Library and Tools
• Price vs. Performance
CONCLUSION

• A comprehensive approach covering all available HPC models for multi-scale simulation was studied and implemented.
• The category-wise performers have been demonstrated, with ease of handling, speed, and overall balance and support as the criteria.
• The best ease of handling was given by MATLAB, with its inbuilt mathematical library and distributed computing toolbox, but it had high memory consumption and a greater processing-power requirement.
• The best results in speed were given by PVM on Linux and MPI.NET. Ease of handling was found to be the lowest in PVM, but, being open source, it gave us the freedom to change the source code to meet our case-specific requirements.
• Finally, overall balance and support were found to be optimum in MPI.NET on WCCS. PVM was found to be
