
Parallel Implementation of Dissipative Particle Dynamics: Application to Energetic Materials

1 Introduction

The simulation of systems of particles provides an alternative to conducting costly or risky experiments in the field. However, these simulations often require a trade-off between accuracy or level of detail and the size of the simulation. For example, a simulation that strives to maintain electronic detail, such as Quantum Monte Carlo, can only be used to simulate systems of 30 or 40 atoms for a picosecond on current computer systems. Molecular Dynamics (MD) can be used to simulate molecules on an atomistic scale. Atomistic MD follows the classical laws of physics, with atomic motion governed by Newton's second law [1]. In atomistic MD simulations, only atoms are modeled, as opposed to quantum mechanics simulations, which model atoms and electrons. Millions of atoms can be modeled on nanosecond timescales in atomistic MD. Another method to run larger simulations with an acceptable trade-off in detail is coarse graining. Methods of coarse graining include representing groups of atoms as beads or considering the system in terms of fields instead of individual forces. Coarse-grained (CG) MD is calculated in much the same way as atomistic MD, by considering the conservative forces on each bead via Newton's second law. Dissipative Particle Dynamics (DPD) extends CG MD by adding dissipative and random forces [2]. DPD has been prevalent in the simulation of polymers, surfactants, and colloids since its inception, but has recently been used in other applications, including energetic materials [3, 4, 5]. DPD, as originally formulated, samples the canonical ensemble (i.e., constant temperature), but variants of DPD have also been developed that sample the isothermal-isobaric ensemble (constant-pressure DPD) and the microcanonical ensemble (constant-energy DPD), as well as isoenthalpic conditions (constant-enthalpy DPD).

The purpose of this research project was to develop parallel versions of these DPD variants for a previously written parallel MD code called CoreXMD. Implementation of these parallel DPD variants will allow larger simulations of millions or billions of particles on longer length (up to microns) and time (up to microseconds) scales. With this capability, researchers can model microstructural voids in energetic materials, which are known to be important for the detonation of explosives. Current experiments cannot provide detailed information about these materials and their mechanisms, so this project will assist in that respect.

2 Dissipative Particle Dynamics


In DPD, particles are defined by mass (m), position (r), and momentum (p). They interact through a pairwise force, $\mathbf{F}_{ij}$:

$$\mathbf{F}_{ij} = \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij} \qquad (1)$$

In Eq. (1), $\mathbf{F}^{C}_{ij}$ is the conservative force, $\mathbf{F}^{D}_{ij}$ is the dissipative force, and $\mathbf{F}^{R}_{ij}$ is the random force, which are given by Eqs. (2)-(4):

$$\mathbf{F}^{C}_{ij} = -\frac{d u^{CG}_{ij}(r_{ij})}{d r_{ij}}\,\frac{\mathbf{r}_{ij}}{r_{ij}} \qquad (2)$$

$$\mathbf{F}^{D}_{ij} = -\gamma_{ij}\,\omega^{D}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)\frac{\mathbf{r}_{ij}}{r_{ij}} \qquad (3)$$

$$\mathbf{F}^{R}_{ij} = \sigma_{ij}\,\omega^{R}(r_{ij})\,W_{ij}\,\frac{\mathbf{r}_{ij}}{r_{ij}} \qquad (4)$$

In Eqs. (2)-(4), $\mathbf{r}_{ij}$ is the separation vector between particles i and j, $r_{ij} = |\mathbf{r}_{ij}|$, $\gamma_{ij}$ is the friction coefficient, $\sigma_{ij}$ is the noise amplitude, $\mathbf{v}_{ij} = \mathbf{p}_i/m_i - \mathbf{p}_j/m_j$, $W_{ij}$ is an independent Wiener process such that $W_{ij} = W_{ji}$, and $\omega^{D}$ and $\omega^{R}$ are weighting functions that vanish for $r \geq r_c$, where $r_c$ is the cut-off radius [6].

DPD variants exist that conserve temperature (DPD-T), energy (DPD-E), pressure (DPD-P), and enthalpy (DPD-H). The equations of motion for each variant are briefly discussed below.

2.1 Constant Temperature DPD, DPD-T

In constant-temperature DPD, temperature and momentum are conserved. The following are the equations of motion that describe DPD-T:

$$d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\,dt \qquad (5)$$

$$d\mathbf{p}_i = \sum_{j \neq i}\left(\mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij}\right)dt \qquad (6)$$

In order for the system to sample the canonical ensemble (i.e., constant number of particles, volume, and temperature), DPD-T must obey the following fluctuation-dissipation theorem [7]:

$$\sigma_{ij}^{2} = 2\,\gamma_{ij}\,k_B T \qquad (7)$$

and

$$\omega^{D}(r) = \left[\omega^{R}(r)\right]^{2} \qquad (8)$$

Typically, the weighting functions $\omega^{D}(r)$ and $\omega^{R}(r)$ are chosen to be:

$$\omega^{D}(r) = \left[\omega^{R}(r)\right]^{2} = \left(1 - \frac{r}{r_c}\right)^{2} \qquad (9)$$

2.2 Constant Energy DPD, DPD-E

In constant-energy DPD (DPD-E), the total energy of the system is conserved. To accomplish this, the equations of motion for DPD-T are coupled with an internal mesoparticle energy, $u_i$, governed by an equation of state and taken to be a sum of two terms that account for internal energy transfer via mechanical and conductive means. This requires that $du_i = du_i^{mech} + du_i^{cond}$ [8]. The equations of motion for DPD-E then become [2]:

$$d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\,dt \qquad (10)$$

$$d\mathbf{p}_i = \sum_{j \neq i}\mathbf{F}^{C}_{ij}\,dt - \sum_{j \neq i}\gamma_{ij}\,\omega^{D}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)\frac{\mathbf{r}_{ij}}{r_{ij}}\,dt + \sum_{j \neq i}\sigma_{ij}\,\omega^{R}(r_{ij})\,\frac{\mathbf{r}_{ij}}{r_{ij}}\,dW_{ij} \qquad (11)$$

$$du_i^{mech} = \frac{1}{2}\sum_{j \neq i}\left[\gamma_{ij}\,\omega^{D}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)^{2}dt - \frac{1}{2}\left(\frac{1}{m_i}+\frac{1}{m_j}\right)\left(\sigma_{ij}\,\omega^{R}(r_{ij})\right)^{2}dt - \sigma_{ij}\,\omega^{R}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)dW_{ij}\right] \qquad (12)$$

$$du_i^{cond} = \sum_{j \neq i}\left[\kappa_{ij}\left(\frac{1}{\theta_i}-\frac{1}{\theta_j}\right)\omega^{Dq}(r_{ij})\,dt + \alpha_{ij}\,\omega^{Rq}(r_{ij})\,dW^{q}_{ij}\right] \qquad (13)$$

In Eqs. (10)-(13), $\kappa_{ij}$ and $\alpha_{ij}$ are the mesoscopic thermal conductivity and noise amplitude, respectively, $\theta_i$ is the internal temperature, and $W^{q}_{ij}$ is an independent Wiener process, such that $W^{q}_{ij} = -W^{q}_{ji}$. The fluctuation-dissipation theorem becomes:

$$\sigma_{ij}^{2} = 2\,\gamma_{ij}\,k_B\,\theta_{ij} \qquad (14)$$

where $\theta_{ij} = \left[\frac{1}{2}\left(\frac{1}{\theta_i}+\frac{1}{\theta_j}\right)\right]^{-1}$ and with

$$\omega^{D}(r) = \left[\omega^{R}(r)\right]^{2} \qquad (15)$$

The mesoscopic thermal conductivity and noise amplitude are also related through a fluctuation-dissipation theorem [9]:

$$\alpha_{ij}^{2} = 2\,k_B\,\kappa_{ij} \qquad (16)$$

$$\omega^{Dq}(r) = \left[\omega^{Rq}(r)\right]^{2} \qquad (17)$$

Similar to DPD-T, the weighting functions are generally assumed to be [2, 8]:

$$\omega^{Dq}(r) = \omega^{D}(r) = \left[\omega^{R}(r)\right]^{2} = \left[\omega^{Rq}(r)\right]^{2} = \left(1-\frac{r}{r_c}\right)^{2} \qquad (18)$$
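To make the conduction term concrete, the sketch below applies a naive Euler-Maruyama discretization of Eq. (13) to a single pair, with the noise amplitude taken from Eq. (16). The module, routine, and variable names are assumptions for this illustration and are not CoreXMD code; the actual implementation integrates these terms with the Shardlow-type splitting described later. Because the update is antisymmetric in i and j, conduction redistributes internal energy without changing its total.

```fortran
! Minimal sketch (assumed names, not CoreXMD source): pairwise conductive
! internal-energy exchange of Eq. (13), discretized with a naive
! Euler-Maruyama step.
module dpd_conduction
  implicit none
  integer, parameter :: dp = kind(1.0d0)
contains
  subroutine conductive_exchange(i, j, theta, u, kappa_ij, kB, wDq, wRq, dt, zeta)
    integer,  intent(in)    :: i, j
    real(dp), intent(in)    :: theta(:)        ! internal temperatures theta_i
    real(dp), intent(inout) :: u(:)            ! mesoparticle internal energies u_i
    real(dp), intent(in)    :: kappa_ij, kB, wDq, wRq, dt, zeta  ! zeta ~ N(0,1)
    real(dp) :: alpha_ij, dq

    alpha_ij = sqrt(2.0_dp*kB*kappa_ij)        ! fluctuation-dissipation, Eq. (16)
    ! heat transferred to particle i over one step, from Eq. (13)
    dq = kappa_ij*(1.0_dp/theta(i) - 1.0_dp/theta(j))*wDq*dt &
       + alpha_ij*wRq*zeta*sqrt(dt)
    u(i) = u(i) + dq                           ! what i gains, j loses, so the
    u(j) = u(j) - dq                           ! total internal energy is unchanged
  end subroutine conductive_exchange
end module dpd_conduction
```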

2.3 Constant Pressure DPD, DPD-P

A constant-pressure variant of DPD (DPD-P) can be formulated by coupling the equations of motion for DPD-T with a barostat. This barostat fixes the pressure at some imposed value and allows the volume to fluctuate. Thus, DPD-P conserves pressure, temperature, and momentum. For uniform dilation using a Langevin barostat, the equations of motion for DPD-P [10] are:

$$d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\,dt + \frac{p_\epsilon}{W}\,\mathbf{r}_i\,dt \qquad (19)$$

$$d\mathbf{p}_i = \sum_{j \neq i}\left(\mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij}\right)dt - \left(1+\frac{d}{N_f}\right)\frac{p_\epsilon}{W}\,\mathbf{p}_i\,dt \qquad (20)$$

$$d\ln V = \frac{d\,p_\epsilon}{W}\,dt \qquad (21)$$

$$dp_\epsilon = F_\epsilon\,dt \qquad (22)$$

In Eqs. (19)-(22), $\epsilon = \ln\frac{V}{V_0}$, V is the volume, W is a mass parameter, $p_\epsilon$ is its conjugate momentum, and $F_\epsilon = dV(P - P_0) + \frac{d}{N_f}\sum_i \frac{\mathbf{p}_i\cdot\mathbf{p}_i}{m_i} - \gamma_p\,p_\epsilon + \sigma_p\,\dot{W}_p$. The pressure, P, is calculated from the virial formula [11],

$$P = \frac{1}{dV}\left(\sum_i \frac{\mathbf{p}_i\cdot\mathbf{p}_i}{m_i} + \sum_i\sum_{j>i}\mathbf{F}^{C}_{ij}\cdot\mathbf{r}_{ij}\right) \qquad (23)$$

Here, $P_0$ is the imposed pressure, d is the dimensionality, $N_f = dN - d$, $\gamma_p$ and $\sigma_p$ are Langevin barostat parameters, and $W_p$ is the Wiener process associated with the piston. The associated fluctuation-dissipation theorem relationships are those from DPD-T, Eqs. (7)-(8), along with

$$\sigma_p^{2} = 2\,\gamma_p\,W\,k_B T \qquad (24)$$

As $\gamma_{ij}$ and $\gamma_p$ go to zero, the expression $H' = K + U + P_0 V + \frac{p_\epsilon^{2}}{2W}$ should be conserved, where K is the kinetic energy and U is the potential energy.
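As an illustration of the barostat bookkeeping, the sketch below performs one naive first-order (Euler-Maruyama) update of Eqs. (21)-(24), followed by the uniform box dilation and momentum drag implied by Eqs. (19)-(20). The module and variable names are assumptions for this sketch, the pairwise forces of Eq. (20) are assumed to be handled elsewhere in the time step, and the actual CoreXMD integration uses the Shardlow-splitting/velocity Verlet machinery described in Section 4 rather than this simple discretization.

```fortran
! Minimal sketch (assumed names, not CoreXMD source): one naive
! Euler-Maruyama update of the Langevin piston, Eqs. (21)-(24).
module langevin_piston
  implicit none
  integer, parameter :: dp = kind(1.0d0)
contains
  subroutine barostat_step(r, p, m, virialP, P0, V, p_eps, W, gamma_p, kT, dt, zeta)
    real(dp), intent(inout) :: r(:,:), p(:,:), V, p_eps   ! r, p are (3,N)
    real(dp), intent(in)    :: m(:), virialP, P0, W, gamma_p, kT, dt, zeta
    integer  :: N, d, i
    real(dp) :: Nf, sigma_p, F_eps, sum_kin, scale_r, scale_p

    N  = size(m);  d = 3;  Nf = real(d*N - d, dp)
    sigma_p = sqrt(2.0_dp*gamma_p*W*kT)                 ! Eq. (24)

    sum_kin = 0.0_dp
    do i = 1, N
      sum_kin = sum_kin + dot_product(p(:,i), p(:,i))/m(i)
    end do

    ! piston force F_eps with Langevin friction and noise (see Eq. 22)
    F_eps = real(d,dp)*V*(virialP - P0) + (real(d,dp)/Nf)*sum_kin &
          - gamma_p*p_eps + sigma_p*zeta/sqrt(dt)       ! zeta ~ N(0,1)
    p_eps = p_eps + F_eps*dt

    ! volume update, Eq. (21): d lnV = d*(p_eps/W) dt
    scale_r = exp(p_eps/W*dt)                           ! per-dimension dilation
    V       = V*scale_r**d
    ! positions dilate with the box (Eq. 19); momenta pick up the
    ! -(1 + d/Nf)*(p_eps/W)*p*dt drag of Eq. (20)
    scale_p = exp(-(1.0_dp + real(d,dp)/Nf)*p_eps/W*dt)
    r = r*scale_r
    p = p*scale_p
  end subroutine barostat_step
end module langevin_piston
```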

2.4 Constant Enthalpy DPD, DPD-H

Constant-enthalpy DPD is a new DPD variant proposed by M. Lísal et al. [2]. It combines the equations of motion for DPD-E with the barostat of DPD-P. As it conserves both energy and pressure, DPD-H conserves enthalpy (i.e., a system at constant energy and pressure is at constant enthalpy). The resulting equations of motion become:

$$d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\,dt + \frac{p_\epsilon}{W}\,\mathbf{r}_i\,dt \qquad (25)$$

$$d\mathbf{p}_i = \sum_{j \neq i}\left(\mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij}\right)dt - \left(1+\frac{d}{N_f}\right)\frac{p_\epsilon}{W}\,\mathbf{p}_i\,dt \qquad (26)$$

$$d\ln V = \frac{d\,p_\epsilon}{W}\,dt \qquad (27)$$

$$dp_\epsilon = F_\epsilon\,dt \qquad (28)$$

$$du_i^{mech} = \frac{1}{2}\sum_{j \neq i}\left[\gamma_{ij}\,\omega^{D}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)^{2}dt - \frac{1}{2}\left(\frac{1}{m_i}+\frac{1}{m_j}\right)\left(\sigma_{ij}\,\omega^{R}(r_{ij})\right)^{2}dt - \sigma_{ij}\,\omega^{R}(r_{ij})\left(\frac{\mathbf{r}_{ij}}{r_{ij}}\cdot\mathbf{v}_{ij}\right)dW_{ij}\right] \qquad (29)$$

$$du_i^{cond} = \sum_{j \neq i}\left[\kappa_{ij}\left(\frac{1}{\theta_i}-\frac{1}{\theta_j}\right)\omega^{Dq}(r_{ij})\,dt + \alpha_{ij}\,\omega^{Rq}(r_{ij})\,dW^{q}_{ij}\right] \qquad (30)$$

The fluctuation-dissipation theorem relationships from DPD-P, Eq. (24), and DPD-E, Eqs. (14)-(17), define the necessary relations for DPD-H.

In DPD-H, the total enthalpy is conserved [12]. Thus, $H' = K + U + \sum_i u_i + P_0 V + \frac{p_\epsilon^{2}}{2W}$ is conserved.

2.5 DPD Model

The DPD variants were tested with two types of potential models. The first was the DPD fluid model, which is commonly used as a potential in DPD. The model defines the conservative potential as [13]:

$$u^{CG}_{ij} = a_{ij}\,r_c\,\omega^{D}(r_{ij}) \qquad (31)$$

where $a_{ij}$ defines the magnitude of the repulsion between two particles. DPD does not require that Eq. (31) be used as the potential. Thus, tabulated potentials have also been implemented into the code, which allow any conservative potential and force to be used within the DPD framework. As an example, simulations have been performed using a density-dependent tabulated potential for RDX, which was fitted through a technique called force matching by Izvekov et al. [14].
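To illustrate how a tabulated conservative force can be evaluated in place of Eq. (31), the sketch below interpolates linearly in a uniformly spaced force table. The derived type and routine names are assumptions for this sketch and do not correspond to CoreXMD's actual tabulated-potential interface; it simply shows one common way a fitted potential, such as the force-matched RDX model, can be queried inside the DPD framework.

```fortran
! Minimal sketch (assumed names, not CoreXMD source): a tabulated
! conservative force evaluated by linear interpolation on a uniform grid.
module tabulated_force
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  type :: force_table
    real(dp) :: dr                       ! uniform spacing of the table
    real(dp), allocatable :: f(:)        ! -du/dr sampled at r = (k-1)*dr
  end type force_table
contains
  function table_force(tab, r) result(fc)
    type(force_table), intent(in) :: tab
    real(dp), intent(in)          :: r
    real(dp) :: fc, x
    integer  :: k

    x = r/tab%dr
    k = int(x) + 1                       ! lower bracketing table index
    if (k >= size(tab%f)) then
      fc = 0.0_dp                        ! beyond the tabulated range: no force
    else
      ! linear interpolation between table entries k and k+1
      fc = tab%f(k) + (x - real(k-1,dp))*(tab%f(k+1) - tab%f(k))
    end if
  end function table_force
end module tabulated_force
```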

3 FORTRAN90 and CoreXMD


FORTRAN90 is a general-purpose, procedural programming language that is especially suited to numeric computation and scientific computing. Because of these capabilities, it is used to program the DPD variants into CoreXMD. CoreXMD is a software package designed for performing particle simulations over multiple processors [15]. In CoreXMD, simulations follow a typical procedure within the code involving initialization, iteration, and destruction. During the iteration step, CoreXMD splits pairs of particles onto different domains on different processors so that the forces and energies of those pairs can be calculated simultaneously, producing parallel speed-up. The goal of this project was to implement the variants of DPD such that the parallel capabilities of CoreXMD were used efficiently.
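As a toy illustration of the parallel idea described above, the program below divides a pair loop among MPI ranks, lets each rank accumulate a partial energy with a placeholder pair term, and combines the partial sums with a reduction. This is only a sketch under simplifying assumptions: CoreXMD itself uses spatial domain decomposition rather than the round-robin split shown here, and the placeholder pair_energy function stands in for the evaluation of $u^{CG}_{ij}$ in Eq. (2).

```fortran
! Minimal sketch (assumed setup, not CoreXMD source): dividing a pair loop
! among MPI ranks and reducing the partial energy sums.
program pair_energy_mpi
  use mpi
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer :: ierr, rank, nprocs, k, npair
  real(dp) :: e_local, e_total

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)

  npair   = 1000000                     ! illustrative pair count
  e_local = 0.0_dp
  do k = rank + 1, npair, nprocs        ! round-robin split of the pair loop
    e_local = e_local + pair_energy(k)  ! each rank handles its own pairs
  end do
  call mpi_allreduce(e_local, e_total, 1, mpi_double_precision, mpi_sum, &
                     mpi_comm_world, ierr)
  if (rank == 0) print *, 'total conservative energy =', e_total
  call mpi_finalize(ierr)
contains
  function pair_energy(k) result(u)     ! placeholder potential for the sketch
    integer, intent(in) :: k
    real(dp) :: u
    u = 1.0_dp/real(k, dp)              ! stands in for u^CG_ij of Eq. (2)
  end function pair_energy
end program pair_energy_mpi
```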

4 Specifics of Implementation of DPD Variants into CoreXMD


The fundamental steps required when performing simulations in CoreXMD involve the creation and initialization of the system variables, the iteration over each time step in which the forces are calculated, and finally the destruction of the simulation variables. The creation and initialization of the simulation variables define the simulation. Various parameters are read in from input, which define the unit cell, the periodic boundaries, the initial positions and velocities, as well as other parameters that define the variant of DPD to be used (i.e., DPD-T, DPD-E, DPD-P, DPD-H). The iterative step is the most expensive step of the simulation, as it requires calculating the forces for each pair of particles as well as updating the particle velocities and positions through iteration over the pair lists. The iterative step of DPD as implemented requires a Shardlow Splitting Algorithm (SSA) [16] and a two-step velocity Verlet algorithm to update the positions and velocities [11]. Between the two velocity Verlet steps, the conservative forces for that time step are calculated. The Shardlow Splitting Algorithm updates the velocities based on the dissipative and random forces via integration of stochastic differential equations, while the velocity Verlet algorithm updates the velocities via integration of ordinary differential equations. In DPD-E and DPD-H, the Shardlow Splitting Algorithm also calculates the mechanical and conductive energies and the internal temperature. The Shardlow Splitting Algorithm was implemented in the serial code because it allows much larger time steps to be taken compared to velocity Verlet alone [16]. However, in the following results section it is shown that it may be incompatible with domain decomposition, which is fundamental to the parallel processing capabilities of CoreXMD. The final step, the destruction of the simulation variables, releases the memory of the various variables and arrays. A simplified sketch of the stochastic pair sweep at the heart of the SSA is shown below.
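The sketch below shows the structure of such a stochastic pair sweep in a simplified form. Only an explicit kick is included (the full Shardlow algorithm [16] adds an implicit half-step per pair and, for DPD-E and DPD-H, the internal-energy updates), and all routine and variable names are assumptions for this illustration rather than CoreXMD code. The point to note is that the velocities are updated in place, so each pair sees the result of all previously processed pairs; it is exactly this sequential ordering that clashes with domain decomposition.

```fortran
! Minimal sketch (assumed names, not CoreXMD source) of a sequential
! stochastic pair sweep with a simplified explicit dissipative/random kick.
module stochastic_sweep_mod
  implicit none
  integer, parameter :: dp = kind(1.0d0)
contains
  subroutine stochastic_sweep(r, v, m, pairs, gamma, kT, rc, dt)
    real(dp), intent(in)    :: r(:,:), m(:)      ! positions (3,N), masses
    real(dp), intent(inout) :: v(:,:)            ! velocities (3,N), updated in place
    integer,  intent(in)    :: pairs(:,:)        ! pair list (2,Npair)
    real(dp), intent(in)    :: gamma, kT, rc, dt
    integer  :: k, i, j
    real(dp) :: rij(3), e(3), dist, wR, wD, sigma, dot, zeta, dp_pair(3)

    sigma = sqrt(2.0_dp*gamma*kT)                ! fluctuation-dissipation, Eq. (7)
    do k = 1, size(pairs, 2)                     ! sequential sweep over pairs
      i = pairs(1,k);  j = pairs(2,k)
      rij  = r(:,i) - r(:,j)
      dist = sqrt(sum(rij*rij))
      if (dist >= rc) cycle                      ! weights vanish beyond the cut-off
      e  = rij/dist
      wR = 1.0_dp - dist/rc                      ! omega^R from Eq. (9)
      wD = wR*wR                                 ! omega^D = (omega^R)^2, Eq. (8)
      call random_number(zeta)                   ! crude placeholder; a real code
      zeta = sqrt(12.0_dp)*(zeta - 0.5_dp)       ! draws a proper Gaussian here
      dot  = dot_product(e, v(:,i) - v(:,j))
      ! explicit dissipative + random momentum kick on the pair (simplified)
      dp_pair = (-gamma*wD*dot*dt + sigma*wR*zeta*sqrt(dt))*e
      v(:,i) = v(:,i) + dp_pair/m(i)
      v(:,j) = v(:,j) - dp_pair/m(j)             ! momentum is conserved pairwise
    end do
  end subroutine stochastic_sweep
end module stochastic_sweep_mod
```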

DPD-P was implemented into CoreXMD by adding a stochastic Langevin barostat to the DPD-T code. DPD-E was implemented by adding the internal particle energy, comprising the conductive and mechanical energies, which accounts for some of the internal degrees of freedom lost due to coarse graining. DPD-H was implemented by adding the Langevin barostat from DPD-P to the DPD-E code. DPD-T was not implemented as part of this project, as it had previously been coded into CoreXMD.

5 Results

The success of the implementation of the DPD variants into the parallel code was measured by two criteria: how well the results conserved the quantity stated in the variant (i.e., constant temperature, pressure, energy, or enthalpy) and how well the parallel code scaled. This paper will first discuss the former by comparing results from the CoreXMD code to those from serial versions of the codes.

5.1 DPD-E Results

In Figure 1, the total energy calculated from 1-million-step DPD-E simulations is shown for the DPD fluid potential. For the serial code, very little energy drift occurs, as evidenced by the black, horizontal line. All of the CoreXMD runs show energy drift, with the magnitude of the drift increasing with the number of processors used. For the serial version of the code, the percentage drift over 1 million steps is nearly 0, while in the 1-processor CoreXMD runs it is > 0.01 %, increasing to > 0.05 % for 16 processors. In Appendix B1, the conductive, mechanical, configurational, and kinetic contributions to the total energy are shown. When compared to the serial code, all have similar standard deviations in their fluctuations except the total energy in CoreXMD. This suggests that there is an incompatibility between the splitting algorithm employed and domain decomposition. Since the percentage drift increases with the number of processors, the error could be due to the use of the Shardlow Splitting Algorithm. The Shardlow Splitting Algorithm relies on a sequential update over the pairs of particles, which is known to be incompatible with parallelization via domain decomposition, because domain decomposition distributes the pair loops across processors so that pairs are not processed sequentially. Despite the drift, it is noted that the percentage drift is small over 1 million steps, with the total energy of the CoreXMD results in Figure 1 varying only in the fourth decimal place. Generally, for velocity Verlet and other integration algorithms for microcanonical molecular dynamics simulations, the energy should be constant to within 0.01 % over the course of the simulation [1], and the results for DPD-E nearly meet this requirement. Thus, even with the current energy drift, the results may be suitable for many applications.

Figure 1. A DPD-E simulation of 10,125 molecules over 1,000,000 timesteps using the DPD fluid potential from Eq. (31). Note that the serial run displays constant energy, but the CoreXMD parallel runs show drift that grows as the number of processors increases. While the drift looks significant, it is on the order of $10^{-4}$, which is quite small.

5.2 DPD-P Results

Figure 2 displays the pressure from a DPD-P simulation of the DPD fluid potential. The fluctuations of the CoreXMD results on 1, 8, and 16 processors compare well with those of the serial code results. In constant-pressure DPD, the calculated pressure fluctuates about the imposed pressure. In Appendix B2, comparisons of the temperature, configurational energy, and density are shown along with the pressure.

Figure 2. DPD-P simulation of 10,125 molecules over 1,000,000 timesteps using the DPD fluid potential from Eq. (31). Note that the serial and CoreXMD parallel runs contain similar results, with fluctuations of roughly 8 bar about a mean pressure of roughly 1500 bar over 1,000,000 steps. This is expected, as the stochastic barostat induces such fluctuations.

5.3 DPD-H Results

DPD-H was implemented into CoreXMD by adding the barostat from DPD-P to DPD-E. For the same reason as was observed for DPD-E, the enthalpy should be expected to show some drift, increasing with the number of processors used. This is indeed the case and can be observed in Figure 3. Because a stochastic barostat is used in DPD-H, the standard deviation of the fluctuations in the enthalpy for CoreXMD is closer in magnitude to that of the serial code than was the case for the total energy in DPD-E. In Appendix B3, the averages and standard deviations of various properties are shown to be in agreement between the serial and CoreXMD implementations of DPD-H, with the exception of the enthalpy fluctuations, which are one to two orders of magnitude larger than those of the serial code.

Figure 3. DPD-H simulation of 10,125 molecules over 1,000,000 timesteps using the DPD fluid potential from Eq. (31). Note that because DPD-H was built on the DPD-E code, there is still a drift between the enthalpy of the CoreXMD parallel runs and the constant serial run. The magnitudes of the fluctuations are, however, much closer to those of the serial run because DPD-H employs the same stochastic barostat used in DPD-P.

5.4 Error Suppression

A technique to suppress the energy and enthalpy drift in the DPD-E and DPD-H methods, respectively, was proposed by M. Lísal et al. [2] and was implemented into the CoreXMD DPD-E and DPD-H variants. With error suppression, the mechanical energy is adjusted such that any energy or enthalpy drift is corrected, forcing the total energy or enthalpy to be exactly constant. It is advised not to use this option until the cause of the DPD-E and DPD-H drift is further investigated.


6 Scalability

To measure the success of the parallel implementation of the DPD variants, the scalability of the codes was investigated. The DPD-P simulation runs are used to demonstrate the scalability and the speedup over the serial counterpart. In Tables 1-3, the run times and parallel speedups are shown for 9,216, 124,416, and 1,152,000 RDX particles for DPD-P as implemented into CoreXMD with density-dependent tabulated potentials. Parallel speedups are shown relative to 8 processors, the number of cores on one node of the supercomputer employed. As the number of particles increases, the parallel speedup increases (Figure 4). This is the expected behavior, as particle dynamics codes generally scale better for larger systems due to the implementation of domain decomposition. For effective utilization of processors, the correct choice of the number of processors and nodes based on the number of particles is important, and a scalability graph (Figure 4) is needed to fully quantify the efficiency of one's parallel code. Generally, the aim is to distribute 1,000 to 2,000 particles per processor. Thus, for 1.1 million particles at 2,000 particles per processor, the scalability should remain efficient up to roughly 550 processors. However, the scalability in Figure 4 is shown to drop off significantly, with a parallel efficiency of 75 % on 32 processors for 1.1 million particles, decreasing to 37 % on 128 processors. Further optimization of the code should improve the scalability, allowing larger numbers of processors and nodes to be utilized. Nevertheless, the parallelization has dramatically increased the speed at which results can be obtained. For example, for 124,416 particles using 8 processors, CoreXMD was 293 % faster than the serial DPD-P code.
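For reference, the short program below shows how the reported speedups and efficiencies follow from the run times when the 8-processor run is taken as the reference, using the standard definitions S_N = 8 T_8 / T_N and E_N = S_N / N. The run times are those of Table 3 converted to seconds, and the computed values are consistent with the tabulated speedups and the roughly 75 % efficiency on 32 processors quoted above; the program itself is only an illustrative sketch, not CoreXMD output.

```fortran
! Minimal sketch: speedup and parallel efficiency relative to 8 processors,
! using the Table 3 run times (1,152,000 RDX beads) converted to seconds.
program speedup_table
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer,  parameter :: nproc(5) = [8, 16, 32, 64, 128]
  real(dp), parameter :: t(5) = [4562.0_dp, 2635.0_dp, 1532.0_dp, 1008.0_dp, 763.0_dp]
  real(dp) :: s, e
  integer  :: k

  do k = 1, 5
    s = real(nproc(1), dp)*t(1)/t(k)     ! speedup relative to the 8-processor run
    e = s/real(nproc(k), dp)             ! parallel efficiency
    print '(i5, 2f10.2)', nproc(k), s, e
  end do
end program speedup_table
```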


Table 1. This table shows the parallel speedup of the DPD-P CoreXMD run for 9,216 beads using the RDX potential. All speedup values are relative to the run time on 8 processors because that is how many processors are present on each node.

CoreXMD DPD-P Speedup of 9,216 RDX beads

# of Processors    Run Time    Parallel Speedup
1                  0:21:27     N/A
8                  0:03:29     8
16                 0:02:19     12.02

Table 2. This table shows the parallel speedup of the DPD-P CoreXMD run for 124,416 RDX beads. All speedup values are relative to the run time on 8 processors because that is how many processors are present on each node. Note that as the number of processors increases, the speedup increases, but by decreasing amounts, because the additional processors are not used to their fullest potential.

CoreXMD DPD-P Speedup of 124,416 RDX beads

# of Processors    Run Time    Parallel Speedup
1                  4:11:37     N/A
8                  0:36:53     8
16                 0:21:49     13.52
32                 0:14:13     20.75
64                 0:11:05     26.62
128                0:09:26     32.48


Table 3. This table shows the parallel speedup of the DPD-P CoreXMD run for 1,152,000 RDX beads. All speedup values are relative to the run time on 8 processors because that is how many processors are present on each node. Note that as the number of processors increases, the speedup increases, but by decreasing amounts, because the additional processors are not used to their fullest potential.

CoreXMD DPD-P Speedup of 1,152,000 RDX beads

# of Processors    Run Time    Parallel Speedup
1                  N/A         N/A
8                  1:16:02     8
16                 0:43:55     13.85
32                 0:25:32     23.82
64                 0:16:48     36.20
128                0:12:43     47.83

Figure 4. Scalability of DPD-P as implemented into CoreXMD for the Izvekov et al. [17] density-dependent model of RDX. Note that as the number of molecules increases, the speedup improves; this is due to more efficient use of each processor. The 45-degree line represents perfect scalability: for a given increase in the number of processors, the program speeds up by the same factor. This line is the objective, so the gap between the current scalability curves and the 45-degree line shows the room for improvement.


7 Summary

Constant-temperature (DPD-T), constant-pressure (DPD-P), constant-energy (DPD-E), and constant-enthalpy (DPD-H) variants of dissipative particle dynamics have been implemented into ARL's CoreXMD code. DPD-T and DPD-P conserved temperature and pressure, respectively, in CoreXMD as well as in their serial counterparts. However, the DPD-E and DPD-H implementations in CoreXMD showed larger energy and enthalpy drift, respectively, compared to their serial counterparts. This was attributed to the known problems of combining the Shardlow Splitting Algorithm [16] with domain decomposition. Even so, the energy and enthalpy drifts were relatively small, with a maximum of < 0.06 % over 1 million steps on 16 processors. An error suppression algorithm was implemented that can eliminate this error, but its use is not advised until the source of the error has been investigated further. Density-dependent tabulated potentials were implemented, which allow any potential to be used for the conservative force, not just the DPD fluid potential. This allowed 1.1 million RDX molecules to be simulated within the DPD-P code for the first time. The scalability of the DPD variant implementations shows that further improvement of the code is needed. This can be accomplished by consolidating numerous individual variables into particle data or derived-type variables; removing this overhead left from the initial coding would provide a significant improvement in scalability and speedup. For 1.1 million RDX molecules, the DPD-P code was shown to be 75 % efficient on 32 processors and was 293 % faster on 8 processors than the serial code. The effort presented here provides a suitable starting point for future improvements and optimization of the code.


B DPD Variant Results

B.1 Constant Energy DPD, DPD-E

Table B1-1. Components of the total energy from a DPD-E simulation using the DPD fluid potential from Eq. (31). Values are reported as average ± standard deviation.

Component    Serial               CoreXMD (serial)     CoreXMD (8 processors)   CoreXMD (16 processors)
Umech/eV     0.776 ± 0.000374     0.776 ± 0.000372     0.776 ± 0.000441         0.776 ± 0.000457
Ucond/eV     0.776 ± 0.0          0.775 ± 0.0          0.775 ± 0.0              0.775 ± 0.0
Uconf/eV     0.117 ± 0.000202     0.117 ± 0.000195     0.117 ± 0.000206         0.117 ± 0.000203
Ukin/eV      0.0382 ± 0.000308    0.0382 ± 0.00031     0.0382 ± 0.000316        0.0382 ± 0.00029
Utot/eV      1.707 ± 3.52E-9      1.707 ± 0.0000638    1.707 ± 0.000175         1.707 ± 0.000266
Tkin/K       295.484 ± 2.381      295.292 ± 2.399      295.226 ± 2.442          295.229 ± 2.241
Press/bar    1537.571 ± 2.302     1537.427 ± 2.284     1537.41 ± 2.241          1537.418 ± 2.27

B.2 Constant Pressure DPD, DPD-P

Table B2-1. Properties from a DPD-P simulation using the DPD fluid potential from Eq. (31). Values are reported as average ± standard deviation.

Component    Serial                CoreXMD (serial)      CoreXMD (8 processors)   CoreXMD (16 processors)
Tkin/K       299.987 ± 2.403       299.649 ± 2.323       299.487 ± 2.411          299.536 ± 2.42
Press/bar    1536.241 ± 8.108      1537.073 ± 7.972      1537.363 ± 7.745         1536.801 ± 8.37
Uconf/eV     0.117 ± 0.000486      0.117 ± 0.000493      0.117 ± 0.000473         0.117 ± 0.000514
Dens         0.00471 ± 1.19E-05    0.00471 ± 1.14E-05    0.00471 ± 1.11E-05       0.00471 ± 1.22E-05


B.3 Constant Enthalpy DPD, DPD-H

Table B3-1. Components from a DPD-H simulation using the DPD fluid potential from Eq. (31). Values are reported as average ± standard deviation.

Component      Serial                CoreXMD (serial)      CoreXMD (8 processors)   CoreXMD (16 processors)
Umech/eV       0.776 ± 0.000364      0.776 ± 0.00042       0.776 ± 0.00048          0.776 ± 0.000461
Ucond/eV       0.776 ± 0.0           0.776 ± 0.0           0.776 ± 0.0              0.777 ± 0.0
Uconf/eV       0.117 ± 0.000506      0.117 ± 0.000472      0.117 ± 0.000508         0.117 ± 0.000489
Ukin/eV        0.0382 ± 0.000325     0.0382 ± 0.000333     0.0382 ± 0.000345        0.0381 ± 0.000306
Utot/eV        1.707 ± 0.00052       1.707 ± 0.000498      1.707 ± 0.000545         1.707 ± 0.000559
Tkin/K         295.579 ± 2.325       295.289 ± 2.574       295.172 ± 2.671          295.945 ± 2.366
Press/bar      1536.3995 ± 8.2       1536.948 ± 7.935      1537.667 ± 8.318         1537.396 ± 8.205
Enthalpy/eV    1.911 ± 6.60E-6       1.91 ± 0.000217       1.911 ± 0.000158         1.911 ± 0.000221


References

[1] K.E. Gubbins and J.D. Moore. Molecular modeling of matter: Impact and prospects in engineering. Ind. Eng. Chem. Res., 49:3026-3046, 2010.

[2] M. Lísal, J.K. Brennan, and J. Bonet Avalos. Dissipative particle dynamics at isothermal, isobaric, isoenergetic and isoenthalpic conditions using Shardlow-like splitting algorithms. Submitted, 2011.

[3] G. Stoltz. A reduced model for shock and detonation waves. I. The inert case. Europhys. Lett., 76(849), 2006.

[4] A. Strachan and B.L. Holian. Energy exchange between mesoparticles and their internal degrees of freedom. Phys. Rev. Lett., 94(014301), 2005.

[5] B.L. Holian. Formulating mesodynamics for polycrystalline materials. Europhys. Lett., 64(330), 2003.

[6] P.J. Hoogerbrugge and J.M.V.A. Koelman. Simulating microscopic hydrodynamic phenomena with dissipative particle dynamics. Europhys. Lett., 19(3):155-160, 1992.

[7] P. Español and P. Warren. Statistical mechanics of dissipative particle dynamics. Europhys. Lett., 30(4):191-196, 1995.

[8] P. Español. Dissipative particle dynamics with energy conservation. Europhys. Lett., 40(6):631-636, 1997.

[9] J.B. Avalos and A.D. Mackie. Dissipative particle dynamics with energy conservation. Europhys. Lett., 40(2):141-146, 1997.

[10] A.F. Jakobsen. J. Chem. Phys., 122(124901), 2005.

[11] D. Frenkel and B. Smit. Understanding Molecular Simulation (Second ed.). Academic Press, 2002.

[12] M.P. Allen and D.J. Tildesley. Computer Simulation of Liquids. Clarendon Press, 1987.

[13] R.D. Groot and P.B. Warren. J. Chem. Phys., 107(4423), 1997.

[14] S. Izvekov, P.W. Chung, and B.M. Rice. Particle-based multiscale coarse graining with density-dependent potentials: Application to molecular crystals (hexahydro-1,3,5-trinitro-s-triazine). J. Chem. Phys., 135(044112), 2011.

[15] ARL. CoreXMD: New Component Tutorial.

[16] T. Shardlow. SIAM J. Sci. Comput., 24(1267), 2003.

[17] S. Izvekov, P.W. Chung, and B.M. Rice. The multiscale coarse-graining method: Assessing its accuracy and introducing density dependent coarse-grain potentials. J. Chem. Phys., 133(064109), 2011.
