
Computer Physics Communications 200 (2016) 76-86


Coupling between a multi-physics workflow engine and an optimization framework
L. Di Gallo a,*, C. Reux a, F. Imbeaux a, J.-F. Artaud a, M. Owsiak d, B. Saoutic a, G. Aiello c, P. Bernardi a, G. Ciraolo a, J. Bucalossi a, J.-L. Duchateau a, C. Fausser b, D. Galassi a,e, P. Hertout a, J.-C. Jaboulay b, A. Li-Puma c, L. Zani a

a CEA, IRFM, F-13108 Saint-Paul-lez-Durance, France
b CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette, France
c CEA, DEN, Saclay, DM2S, SEMT, F-91191 Gif-sur-Yvette, France
d Poznan Supercomputing and Networking Center, IChB PAS, Noskowskiego 12/14, 61-704 Poznan, Poland
e Aix Marseille Université, CNRS, Centrale Marseille, M2P2 UMR 7340, 13451 Marseille, France

Article info

Article history:
Received 6 August 2015
Received in revised form
16 October 2015
Accepted 10 November 2015
Available online 1 December 2015
Keywords:
Multi-physics workflow engine
Optimization framework
Code coupling
DEMO fusion reactor design

Abstract
A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization; consequently, GAs apply to the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown a good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization along various figures of merit and constraints.
© 2015 EURATOM. Published by Elsevier B.V. All rights reserved.

1. Introduction
General-purpose workflow engines can be used to solve physics problems involving a number of coupled modular components. These workflow engines usually lack numerical optimization tools, which are instead embedded in dedicated optimization frameworks. This work presents a generic coupling method between a multi-physics workflow engine and an optimization framework, thus enabling the optimization of complex simulations potentially integrating a large number of workflow components. This method has been successfully applied to the optimization of fusion reactor design.

* Corresponding author.
E-mail address: luc.digallo@cea.fr (L. Di Gallo).

http://dx.doi.org/10.1016/j.cpc.2015.11.002

The main objective is to preserve the integrity of both frameworks. The solution is to separate them as much as possible in order to run them independently. They are coupled by an exchange of data, as illustrated in Fig. 1: each framework waits for the data coming from the other, runs, and sends its results back. Each framework sees the other one as a black box. The data communication is performed by a socket-based communication library called KUI (KEPLER-URANIE Interface), which we have developed for this purpose. Instead of a tight integration, the two frameworks remain loosely coupled through this socket-based communication. This coupling architecture thus allows any scientific workflow or optimization algorithm to be easily replaced by another one. Moreover, optimizer frameworks or workflow engines can be changed without affecting the coupling procedure.
Software and tools for both frameworks have been chosen for their simplicity of use and their adaptability to any physics context. The workflow engine chosen for the multi-physics framework is


Fig. 2. Diagrammatic view of the SYCOMORE workflow with the series of subsystem components (CS = Central Solenoid, TF = Toroidal Field).

Fig. 1. The system code SYCOMORE for DEMO design is coupled with the URANIE platform by using the socket-based communication library KUI.

the graphical software KEPLER [1,2]. A standardized data model adapted to the physics context ensures data consistency within the complete workflow. This facilitates the use and the development of workflows and gives more modularity. Optimizations are achieved by the URANIE platform developed by CEA. This platform provides various tools for optimization, data analysis and sensitivity studies. Genetic Algorithms (GAs) have been selected to perform optimizations since they have demonstrated their efficiency. Such algorithms can achieve single- and multiple-criteria optimizations and they are robust enough to handle non-valid data which can appear during the optimization. Other tools provided by URANIE, such as those for data analysis and sensitivity studies, can be helpful to confirm optimization results or to highlight the parameters which most impact the optimum [3].
An external parallelized framework has been developed to speed up optimizations and the evaluation of large samples. This parallelized framework has been demonstrated on up to 320 CPUs on the EUROfusion Gateway cluster at IPP Garching [4], which hosts the European Integrated Modelling (EU-IM) platform. This architecture can also be generalized to grid computing.
A number of couplings between a code and an optimizer have already been achieved in the past. For instance, the Computational Fluid Dynamics (CFD) code N3S-Natur has been coupled to the MIPTO optimization application by using the PALM coupler [5]. The latter exchanges data by using MPI, which is also a socket-based communication. Another example is the optimizer DAKOTA, which has been coupled to the CFD code OpenFOAM [6]; DAKOTA and OpenFOAM exchange data by using files. The coupling proposed in this paper is similar to the one developed between the IPS and DAKOTA systems [7]. However, several main differences between the present work and the one described in [7] can be noticed. First, the multi-physics workflows are developed on a graphical software (KEPLER) and use a standardized data model for the communication between workflow components. Second, the socket-based communication is one of the solutions which allow a loose coupling between the two frameworks. The two platforms thus keep their integrity, so that any change on one side (elements inside the platform or even the platform itself) will not affect the other one. This communication method also allows running the optimization framework and the workflow engine on different computers, thus potentially enabling distributed (grid/cloud) computing.
To illustrate the present coupling method, the modular system code SYCOMORE (SYstem COde for MOdeling tokamak REactor), developed by CEA to study DEMO fusion reactor design, has been coupled to the URANIE platform. SYCOMORE aims at representing the interaction of the main DEMO power plant subsystems, from the central plasma to the power Balance of Plant [8]. Each reactor subsystem is represented by a component connected to the others in a KEPLER workflow. The inter-component exchange of data is performed by using the standardized data model developed by the former EFDA Task Force on Integrated Tokamak Modelling [9,10]. Here, the optimizations achieved by URANIE aim to determine the best DEMO reactor design and to highlight the main technological developments required to obtain efficient fusion reactors. Several other system codes based on a different approach from SYCOMORE have also been developed to study the DEMO design [11-13]. They are less modular and optimizations are more complex to carry out.
The current paper presents the tools and methods developed to achieve the coupling between a multi-physics workflow engine and an optimization framework, based on the SYCOMORE-URANIE coupling. The system code SYCOMORE, and especially the architecture used for its development, is presented in Section 2. This part also gives details on the characteristics of the data involved in SYCOMORE. Section 3 is dedicated to the KUI communication library. The URANIE tools are presented in Section 4; in particular, the URANIE genetic algorithm is described there since it is used to achieve the SYCOMORE optimizations. Section 5 is focused on the parallelization procedure of SYCOMORE and its performance is assessed. Finally, some results of SYCOMORE sampling and optimization are shown in Section 6 in order to illustrate the performance of the SYCOMORE-URANIE coupling.
2. Multi-physics workflow frameworks
2.1. Modularity and graphical interfaces
The KEPLER software has been chosen by the EU-IM framework to run workflows (e.g. SYCOMORE). It is a JAVA-based application for the analysis and modeling of scientific data [1,2], providing a graphical user interface which allows creating executable scientific workflows.
Each modular component of the scientific workflow represents an independent model and is called an actor in KEPLER. Furthermore, several actors which are part of a common sub-system can be gathered into one composite actor to make a coherent group of actors. The SYCOMORE system code is therefore a workflow which links together a series of components, each representing a model of a reactor subsystem [8]. A diagrammatic view of the chain made of the current components is presented in Fig. 2. In SYCOMORE, components are executed sequentially and each of them sends the computed data to the next one at the end of its execution. A few internal loops over several actors (included within composite actors) are introduced in order to let some parameters converge to specified values. The position of a component in the sequence is determined by its required input data (provided by the previous components). Furthermore, the reactor design is built by radially integrating the subsystems one after the other, starting from the center of the plasma up to the edge of the reactor. This impacts the position of the components in the workflow and also keeps the design consistent.
The graphical organization of components makes the code structure easy to understand (see Fig. 3). This facilitates the collective development of an integrated modeling workflow. On the developer side, this helps contributors to see the sequence of components clearly and easily, as well as where their own

Fig. 3. SYCOMORE workflow in Kepler containing composite actors.

component operates. This solution is particularly adapted to long-term projects such as the DEMO design, by simplifying maintenance and the integration of new developments. Furthermore, it simplifies the transfer of skills to newcomers when the team of developers changes.
The EU-IM framework includes tools allowing the integration, in the same workflow, of components written in various programming languages [10]. Presently, SYCOMORE contains components in FORTRAN, C++ and Python, which have been developed in different laboratories. The coupling of KEPLER with an optimizer thus makes it possible to optimize many codes, and their possible inter-connections, written in different programming languages. It also allows contributors to use their preferred development framework rather than adapting to a new one.
2.2. Data model
A data model has been developed by the EU-IM Team to set a communication standard between workflow components [9]. It aims at providing a complete description of the tokamak subsystems and associated physical quantities. Each tokamak subsystem or physical concept is described by a coarse-grained and modular data structure named Consistent Physical Object (CPO), which serves as a standard interface between workflow components. Typical CPOs used in SYCOMORE are a 0D description of the plasma, the toroidal field coils or the magnetic fields.
This standardization is convenient for implementing the interaction between components. It also facilitates the component specifications for external contributors: the inputs and outputs of a given component are clearly identified. The graphical user interface associated with the data structure makes the system code flexible. For instance, a component can easily be removed and replaced by an updated one or a new one.
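As a purely illustrative sketch (the actual CPOs are defined by the EU-IM data model [9] and are far richer; all field names below are invented), a coarse-grained, modular data object of this kind could be pictured as follows in Python:

```python
from dataclasses import dataclass, field

@dataclass
class PlasmaCPO:
    """Hypothetical, simplified stand-in for a 0D plasma description CPO.
    The real CPO schema is fixed by the standardized EU-IM data model [9]."""
    major_radius: float = 0.0      # R (m)
    minor_radius: float = 0.0      # a (m)
    toroidal_field: float = 0.0    # Bt at the plasma center (T)
    comment: str = ""              # free-text provenance

@dataclass
class WorkflowData:
    """Bundle of CPOs passed from one workflow component to the next."""
    plasma: PlasmaCPO = field(default_factory=PlasmaCPO)
    error_flag: bool = False       # raised on non-valid data (see Section 2.5)
```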
2.3. Workflows as mathematical functions
In order to make the data exchange between the two frameworks efficient, the data produced by the multi-physics workflow have to be characterized.
As for many system codes, SYCOMORE is an association of several algorithms in a workflow which produces output data from given inputs. The current workflow maps a set of 52 permissible inputs to a set of 98 outputs. A simple run of SYCOMORE, called a direct run, generates a unique set of output data. In that sense, the SYCOMORE workflow can, mathematically speaking, be considered as a function.
In this article, input data refers to the values required to start a run of SYCOMORE and output data refers to the numbers generated by a run of SYCOMORE. Both kinds of data are part of the EU-IM data structure.
In addition to the direct run option, SYCOMORE can generate many sets of SYCOMORE data (inputs plus outputs) to be exploited by the user. For instance, those sets of data can be used for optimization, sensitivity studies or sampling. Usually, the user does not need to work with all the input and output data involved in a SYCOMORE run but with a selection of them. Those selected data are considered as variables. To avoid any confusion, input variables refers to the values which may vary between two runs of SYCOMORE during the study. Input variables constitute the subset of input data selected by the user. The SYCOMORE code is considered here as a function of such variables. Output variables refers to the values extracted from SYCOMORE used for the optimization or sampling; they constitute a subset of output data.
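This functional view can be written compactly as follows (the notation is ours, not taken from the paper):

```latex
% Direct run: all permissible input data mapped to all output data
F : \mathbb{R}^{52} \to \mathbb{R}^{98}, \qquad y = F(x).
% Study view: only the n user-selected input variables v vary, the remaining
% input data x_0 stay fixed, and \Pi_{\mathrm{out}} extracts the m output variables
f : \mathbb{R}^{n} \to \mathbb{R}^{m}, \qquad
f(v) = \Pi_{\mathrm{out}}\, F\!\left(x_0 \oplus v\right).
```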
2.4. Optimization scheme
A loop over the computation part of SYCOMORE has been implemented inside the workflow for the case of multiple evaluations. This has been done to save substantial time when a study needs many evaluations. For each iteration of the loop, the KEPLER libraries and the EU-IM data structure are already loaded and the computation of the components restarts directly. Consequently, a new evaluation of SYCOMORE during the study only requires an iteration of the loop instead of restarting the whole SYCOMORE workflow with its environment. During an iteration of the loop, only the values of the input variables are changed while the other input data remain identical.
It has been decided to externalize the tools for data exploitation (optimization, data analysis) in order to separate them from the problem of data generation and thus keep the integrity of the SYCOMORE workflow. This gives more freedom to the user in the choice of tools and methods to achieve optimization and data analysis. The externalization has also been motivated by the complexity of the generated data, which requires sophisticated algorithms for their exploitation. Output data represent many different physical quantities and they are the results of the

Fig. 4. Diagram of the SYCOMORE loop working with an external tool.

convergence of several algorithms. Consequently, output data have various behaviors and special cases, such as errors, can appear. Furthermore, SYCOMORE does not provide any derivatives or gradients to the optimizer, so derivative-free optimizers, such as Genetic Algorithms, are recommended in that situation.
Currently, the URANIE platform, presented below, has been selected as the external tool for optimization and data analysis. The requirement is that the user shall be free to choose any input and output variables within a list determined by the input data required for operating SYCOMORE. For the rest, SYCOMORE is considered as a black box which communicates with an external tool. The externalization requires implementing a data transfer procedure, which is performed during the loop by the communication library KUI.
Fig. 4 shows how the coupling with URANIE works. Three different parts can be identified in this figure. The top part corresponds to the SYCOMORE workflow with the generation of output data from input data. The middle part corresponds to the data exchange with KUI and the insertion (resp. extraction) of the input (resp. output) variable values into the input (resp. from the output) data. The bottom part is the data exploitation by external tools, such as URANIE. An ending signal, sent by URANIE when its operations are completed, is checked by SYCOMORE at the end of its computation. If the ending signal is "yes", the workflow does not loop again and finishes.
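The exchange of Fig. 4, seen from the external tool's side, can be sketched as below; the message layout, the "ending_signal" token and the function names are our assumptions for illustration, not the KUI interface:

```python
def drive_sycomore(send, receive, propose_inputs, n_evaluations):
    """Hypothetical driver loop mirroring Fig. 4: send the input variables,
    wait for the output variables, and signal the end of the study.
    `send`/`receive` stand in for the KUI socket exchange."""
    results = []
    for i in range(n_evaluations):
        inputs = propose_inputs(i)                  # sample point or GA candidate
        ending = "yes" if i == n_evaluations - 1 else "no"
        send({"inputs": inputs, "ending_signal": ending})
        results.append(receive())                   # output variables of this run
    return results
```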
2.5. Input and output data
Due to the variety of data values and their special cases, more details about the data characteristics are given here in order to understand how they affect the communications between SYCOMORE and the external tools during a study.
First, the input variables are chosen by the user from a list and the remaining input data are set to initial fixed values. The user has to set the range of study for each variable. As for the input variables, the main output variables are listed and are a subset of the output data. Depending on their needs, users can add their own customized output variable, calculated for a specific study and not included in the output data list. This implies that the number of possible outputs is undetermined and can vary from one study to another. Flexibility in the choice and the implementation of extra output variables must be taken into account in the data exchange process.
Another difficulty comes from the unpredictable range of output values, especially since output variables can reach infinite values. Those values often correspond to code problems such as the non-convergence of internal solvers or variables outside validity


domains. However, in a few cases, infinite values correspond to a specific valid physical state of the system. For instance, the case of an infinite-duration plasma, which corresponds to a steady-state tokamak, and the case of an infinite fusion power gain, which corresponds to the ignition of the fusion plasma, are both cases which allow infinite values. Consequently, the output variables which allow infinite values must be clearly identified. In these cases, the "inf" value is replaced by an extremely large numerical value in order to ensure the correct treatment of the variable during the study.
Other cases associated with forbidden infinite values are considered as an issue with the computed result. The key point is to identify such non-valid output data in the workflow and to stop the computation of the code after the identification of the first error in the calculation sequence. This has been achieved by defining an error flag in the workflow which is checked at the beginning of each component and updated at the end. The error flag is raised in all situations associated with non-valid data, for instance non-valid infinite values, and all the output data then have to be rejected.
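A minimal sketch of this error-flag pattern, with the replacement of allowed infinite values, might look as follows; the variable names, the list of variables allowed to be infinite and the "extremely large" placeholder are illustrative assumptions:

```python
import math

ALLOWED_INF = {"burn_duration", "fusion_gain"}   # hypothetical names (see text)
VERY_LARGE = 1.0e30                              # stand-in for the "inf" replacement

def sanitize(name, value, data):
    """Cap allowed infinities to a large number; flag forbidden inf/NaN."""
    if isinstance(value, float) and (math.isinf(value) or math.isnan(value)):
        if name in ALLOWED_INF and math.isinf(value):
            return VERY_LARGE
        data["error_flag"] = True
    return value

def run_component(compute, data):
    """Skip the component if a previous one already raised the error flag."""
    if data.get("error_flag"):
        return data
    data = compute(data)
    for key in list(data):
        if key != "error_flag":
            data[key] = sanitize(key, data[key], data)
    return data
```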
The presence of errors in the workflow impacts the development of SYCOMORE and of scientific workflows in general. Two different solutions can be adopted to solve that data problem. The first one consists in avoiding those errors by deeply modifying the code and by carefully managing the code evolution so as to remove all potential new errors. The second solution is to allow errors in the workflow and to manage them within the optimization or the data analysis. The second solution has been chosen for SYCOMORE. This also implied keeping the system code identical and introducing a flag to identify the errors. Therefore, the external tools have to deal with such an error flag. As a consequence, deterministic algorithms, such as several classical optimizers which cannot manage non-valid data, are excluded. Moreover, the possibility of using that error flag gives more freedom in the development of algorithms: it encourages developers to favor algorithm efficiency even if it increases the number of error flags.
3. A communication library for the coupling between the
optimization platform and the workflow engine
The use of an independent and external communication library is a solution to achieve a loose coupling of the two frameworks and thus preserve their integrity. The coupling between URANIE and SYCOMORE has been achieved by using the dedicated KUI communication library, developed by CEA in C++. One specification is to provide communication methods that are easy to use and that do not work exclusively between a specific workflow and a specific optimizer. This leaves the user free to change the external tool or the workflow. This communication method also allows running the optimization framework and the workflow engine on different computers, thus potentially enabling distributed (grid/cloud) computing.
The communication library is based on sockets and the procedure for data transfer consists in exchanging messages made of a character string. Other codes have adopted another approach for the communication process, such as IPS-DAKOTA [7]. The latter exchanges data by using a file-based communication procedure, as follows. First, input data are written in a file by DAKOTA. Then a signal to start the computation is sent to IPS by using sockets. Finally, IPS writes output data in a file which is read by DAKOTA after receiving a signal. The socket-based method for the data exchange has several benefits compared to file-based communication. First, the data exchange does not require a disk access, which potentially speeds up the global process. This method is also adapted to inter-process communication across a computer network. But an important aspect here is that the socket-based communication allows a pending mode for sending or receiving data. This avoids confusion


Fig. 5. Class diagram of the KUI communication library.

with files that are already being written by the process at another moment. For instance, the optimizer will wait for the data coming from SYCOMORE instead of reading a file which may have already been filled during a previous iteration. Therefore, the reception of a message is itself the signal to start the computation, so no additional signal is required.
The user specifies the number of desired inputs/outputs and lists which variables they correspond to. The range of study for the inputs is fixed during this parametrization. For the moment, these parameters are written in a file which is read just once at the initialization of SYCOMORE, but a graphical user interface for this parametrization is under development to simplify this operation.
KUI is structured in 5 classes, as shown in Fig. 5. The top class, which is inherited by all the other classes, contains the basic declarations for sockets, such as the domain, the type and the protocol. The domain, which defines the address family, is configured by default in KUI to AF_UNIX. This socket family is used to communicate efficiently between processes on the same machine; it is a local communication. The socket type configured by default is SOCK_STREAM. This socket type provides full-duplex byte streams which ensure that data are neither lost nor duplicated. Only a single protocol exists for the AF_UNIX domain, which is specified as 0. A few preconfigured error messages are also declared in this class. The next two classes are the receiving and sending classes. Both inherit from the top class and proceed in two steps: the size of the message is sent/received first, then the message itself, written as a character array, is sent/received. The last two classes, which inherit from both the receiving and sending classes, correspond to the declaration of the socket server and socket client.
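The two-step exchange described above (message size first, then the character string) can be sketched in Python as follows; this only illustrates the protocol and is not the KUI C++ implementation, and the 4-byte length prefix is an assumption:

```python
import socket
import struct

def send_message(sock, text):
    """Send the size first, then the message itself as a character array."""
    payload = text.encode("ascii")
    sock.sendall(struct.pack("!I", len(payload)))   # assumed 4-byte length header
    sock.sendall(payload)

def recv_exact(sock, nbytes):
    data = b""
    while len(data) < nbytes:
        chunk = sock.recv(nbytes - len(data))
        if not chunk:
            raise ConnectionError("peer closed the socket")
        data += chunk
    return data

def recv_message(sock):
    size = struct.unpack("!I", recv_exact(sock, 4))[0]
    return recv_exact(sock, size).decode("ascii")

# KUI defaults described in the text: local stream sockets, AF_UNIX / SOCK_STREAM, protocol 0.
# server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
```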
KUI is implemented in two KEPLER actors (components), one at the beginning of the global SYCOMORE loop to receive the values of the input variables and one at the end of the loop to send the values of the output variables (Fig. 4). These two SYCOMORE components are configured to work with two separate channels. Instead of using the same socket for sending and receiving data, SYCOMORE uses two server sockets created by the external tool (URANIE) at each exchange. The creation of a socket at each exchange prevents blocking the global SYCOMORE loop at the next step in case of malfunction.
These two actors perform a conversion of characters into numbers and vice versa. All double-precision numbers and all integers are converted into characters and concatenated into one string before the message is sent. The opposite operation is done after the reception of a message. There is also a final control of data validity: remaining special values such as infinite values or NaN are replaced by the value 1.0 and the validity flag mentioned above is set to "error".
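The conversion performed by these two actors can be illustrated by the following sketch; the separator and the number formatting are assumptions, while the replacement of special values by 1.0 and the error flag follow the description above:

```python
import math

def encode_outputs(values):
    """Concatenate numbers into one character string; replace remaining
    special values by 1.0 and report the validity flag as 'error'."""
    flag = "valid"
    tokens = []
    for v in values:
        if isinstance(v, float) and (math.isnan(v) or math.isinf(v)):
            v, flag = 1.0, "error"
        tokens.append(repr(v))
    return " ".join(tokens), flag

def decode_inputs(message):
    """Inverse operation, applied after a message has been received."""
    return [float(tok) for tok in message.split()]
```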

4. Sampling, optimizing and data analysis using URANIE

4.1. Introduction to the URANIE platform

URANIE is based on the data analysis framework ROOT (http://root.cern.ch), an object-oriented computing system developed by CERN [14]. URANIE gathers a number of methods and algorithms to perform optimization, data analysis, construction of experiment designs, statistical modeling, sensitivity analysis, reliability analysis and uncertainty analysis. This platform provides a number of optimization algorithms, such as the NLopt library, Minuit from ROOT and Vizir from CEA [15]. The latter solves multi-criteria and multi-constraint optimization problems using evolutionary algorithms.
As mentioned above, the coupling between URANIE and SYCOMORE is done with socket communications. Since the ROOT platform includes a socket-based communication protocol, the KUI library exchanges data directly with ROOT; no other communication method has been implemented inside URANIE. KUI is used here only on the KEPLER side, even though it can be used on both sides of the communication.
4.2. Optimization with genetic algorithm
SYCOMORE optimizations are carried out with the Genetic Algorithms (GAs) provided by Vizir in URANIE. GAs work with a population of points and create a new one, closer to the optimum, at the next step [16-18]. This is a different approach from the usual deterministic algorithms, which proceed by using the characteristics of a unique point to calculate the next one. GAs are also able to find multiple optima of multi-modal functions, which correspond to different values of the input variables for the same optimal value of the function. So, instead of giving a unique solution, GAs can give a family of solutions.
GAs mimic Darwin's natural selection using randomness and follow the four rules below:
1. Evaluate the population of points with the function to be optimized. The first population is generated with a uniform random sampling over the study domain.
2. Select and rank the best points according to the objectives and the deviations from the constraints.
3. Give more chances to the better points and randomly recombine them to create new children.
4. Introduce mutations in these values to increase the chance of avoiding local extrema. The set of newly created values, gathered with the set of best points selected at step 2, constitutes the next generation of the population.
GAs converge by repeating these four rules until the whole population satisfies the termination condition. GAs are well known for their robustness with any kind of function and for their ability to find global optima. The counterpart is that they need many more evaluations of the function than classical gradient-based algorithms. In the SYCOMORE case, this issue is alleviated by computing the points of the same generation in parallel. GAs have been chosen for the DEMO design optimization because they are able to handle non-valid points of SYCOMORE during the optimization. Such non-valid points usually represent a strong obstacle to the convergence of classical deterministic algorithms. With stochastic algorithms such as GAs, if a set of input variables produces a non-valid result in SYCOMORE, the point is rejected and a new child is created by another recombination.
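For concreteness, the four rules and the rejection of non-valid points can be condensed into a schematic mono-criterion (minimization) GA; this sketch is ours, not the Vizir implementation, and the recombination and mutation operators are deliberately simplistic:

```python
import random

def genetic_minimize(evaluate, bounds, pop_size=500, tol=0.02, max_gen=200):
    """`evaluate` returns (value, valid); non-valid points are simply rejected
    and replaced by new recombinations, as described in the text."""
    def random_point():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def child(parents):
        a, b = random.sample(parents, 2)
        w = random.random()
        c = [w * xa + (1.0 - w) * xb for xa, xb in zip(a, b)]   # rule 3: recombine
        j = random.randrange(len(c))                            # rule 4: mutate
        c[j] = random.uniform(*bounds[j])
        return c

    population = [random_point() for _ in range(pop_size)]      # rule 1
    for _ in range(max_gen):
        scored = [(evaluate(x), x) for x in population]
        scored = sorted(((f, x) for (f, ok), x in scored if ok),
                        key=lambda t: t[0])                     # rule 2: rank valid points
        best = [x for _, x in scored[: pop_size // 2]]
        if len(best) < 2:                                       # degenerate case: restart
            population = [random_point() for _ in range(pop_size)]
            continue
        if scored[-1][0] - scored[0][0] < tol:                  # termination tolerance interval
            return scored[0]
        population = best + [child(best) for _ in range(pop_size - len(best))]
    return scored[0] if scored else None
```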
The Vizir genetic algorithm included in URANIE is a real-coded
GA which uses a diploid representation of elements for a fraction of
the population [19]. These features of URANIE Genetic Algorithms
are an advantage over several existing GAs (see Appendix A.1 for
further details about variable coding in Vizir).


4.3. Mono and multi-criteria optimizations


GAs allow two kinds of optimizations: mono-criterion optimizations and multi-criteria optimizations. Both possibilities are implemented in Vizir.
Mono-criterion optimization consists in optimizing the variables in order to minimize or maximize a function chosen by the user. The GA explores the domain following the four rules presented above until the population is included in a tolerance interval. This tolerance interval is the termination condition of the GA in the case of a mono-criterion optimization. The user can set this interval, which represents the convergence precision of the optimization. Once convergence is reached, the GA has explored the domain and gives a family of solutions represented by the final population.
Multi-criteria optimization consists in minimizing or maximizing simultaneously k functions fj of n variables. This requires defining an optimum for a vector F = (f1, ..., fk) made of multiple functions. The optimum of a single function (mono-criterion optimization) is clearly defined by the usual order relation between real numbers, but there is no obvious order relation for vectors in a multi-dimensional set of real numbers. Reducing the minimization or maximization problem to a one-dimensional function by using the norm of the vector F does not automatically correspond to an optimum of the multi-dimensional functions. The problem comes from mixing, in the norm, several physical quantities which have different importance. For instance, there is no way to calculate the norm of a vector whose first component is a distance and whose second component is a temperature. A solution would be to use a weighted sum of the functions, or to define an approach to combine the multiple functions into one, as is done in DAKOTA [20].
So, instead of finding a unique optimal value of the vector F (norm, weighted sum, etc.), a more satisfying solution for the multi-criteria optimization is to directly obtain a set of points F whose variable values are spread over an optimal k-dimensional surface, called a Pareto frontier. Using the dominance relation (see Appendix A.2) associated with the Pareto frontier definition, Vizir implements a strategy to determine the Pareto frontier without reducing the problem to a one-criterion optimization. This approach makes Vizir a complete and powerful multi-criteria optimizer without any reduction to a mono-criterion optimization. More details about the definition of the Pareto frontier can be found in Appendix A.2. The constraint treatment is explained in Appendix A.3.
5. Parallel computing
Sensitivity studies or optimizations using genetic algorithms require a large number of evaluations. So, a parallelization framework and tools have been developed to reduce the execution time spent during such studies. Here, the parallelization aims at computing as many independent evaluations as possible on independent processors. This part of the paper presents the framework we have developed to achieve the parallelization.
There are several levels and different approaches of KEPLER parallelization, from the actor level to the whole-workflow level [21]. Here, the level of parallelization corresponds to the workflow level. The whole workflow is executed in a distributed environment and all the corresponding actors of the workflow are executed on the same node. Each independent evaluation of SYCOMORE is thus computed in an independent KEPLER session containing the whole workflow. In addition, the loop implemented in the workflow avoids having to load a new KEPLER session for each evaluation. Therefore, each KEPLER session waits for incoming data from URANIE to start a loop of SYCOMORE.

Fig. 6. URANIE-SYCOMORE communication scheme for four parallel sessions. Communications between the URANIE slaves and SYCOMORE are handled by different sockets. Internal communications between the master and the slaves of URANIE are done with MPI.

KEPLER assumes that all internal data are stored inside three dedicated directories, so starting multiple KEPLER sessions would overwrite data and try to use the same HSQL database. A specific method thus had to be developed to bypass this internal KEPLER blocking. The method adopted here for the parallelization is based on the virtual duplication of the KEPLER environment by using a wrapper which separates the data involved in the workflows. All environment variables, the files required by KEPLER (e.g. the environment variable description) and links to the KEPLER executable are copied into directories created especially for each parallel job. Directories receiving all the files generated by each KEPLER session (e.g. log files or internal data stored in files) are also created. This allows starting several identical workflows in parallel without any internal blocking. The wrapper works in two steps: it first duplicates the KEPLER environment and then launches the SYCOMORE run on KEPLER.
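A very rough, hypothetical Python sketch of what such a wrapper could do is given below; all paths, file names and environment variable names are invented for illustration, and the actual wrapper is specific to the KEPLER installation:

```python
import os
import shutil
import subprocess

def launch_kepler_session(job_id, kepler_home, workflow, work_root="/tmp/kepler_jobs"):
    """Give each parallel session its own copy of the KEPLER environment so
    that sessions do not share working directories or the HSQL database."""
    job_dir = os.path.join(work_root, f"session_{job_id}")
    log_dir = os.path.join(job_dir, "logs")
    os.makedirs(log_dir, exist_ok=True)
    shutil.copy(os.path.join(kepler_home, "kepler.env"), job_dir)   # env description file
    exe = os.path.join(job_dir, "kepler")
    if not os.path.exists(exe):
        os.symlink(os.path.join(kepler_home, "kepler"), exe)        # link to the executable
    env = dict(os.environ,
               KEPLER_HOME=job_dir,                                 # invented variable names
               KUI_SOCKET=f"/tmp/kui_{job_id}.sock")                # per-session socket name
    log = open(os.path.join(log_dir, "run.log"), "w")
    return subprocess.Popen([exe, workflow], cwd=job_dir, env=env,
                            stdout=log, stderr=subprocess.STDOUT)
```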
URANIE has been designed to handle parallelized jobs [22]. All parallel sessions of URANIE start using MPI. The first process is allocated to the URANIE master and executes algorithms such as an optimizer. The other processes launched in parallel are allocated to the URANIE slaves and execute the external codes for each evaluation. The master handles the gathering of the data computed by the slaves.
The solution adopted with URANIE to do optimizations, samplings and sensitivity studies of SYCOMORE using parallel sessions is presented in Fig. 6. In our case, the parallel sessions of SYCOMORE are launched just once at the initialization and remain active until the end of the whole job. Thus, the URANIE slaves only have to handle the exchange of data with the external parallel sessions of SYCOMORE. This has required modifying the source code of the URANIE slaves.
Except for the URANIE master, each URANIE slave and its associated workflow session (e.g. a SYCOMORE session) are launched on the same processor. As shown in Fig. 6, this configuration requires the number of parallel URANIE sessions to be equal to the number of SYCOMORE sessions plus one (for the URANIE master process). This procedure is managed by a script which launches URANIE and the SYCOMORE wrapper on parallel processors using MPI. A qsub command starts this script, which submits batch jobs with the desired number of processors.
Different socket names, defined in the parallel job script, are created in order to obtain separate communication channels for each SYCOMORE session. The socket names are passed to SYCOMORE and URANIE through environment variables.
This level of parallelization allows achieving independent SYCOMORE evaluations. The simplicity of the URANIE slave jobs, which mainly have to exchange data with SYCOMORE, does not


Table 1
Scaling study of the parallelization by varying the number of parallel sessions (N) from 255 to 3. T_eval is the averaged time spent for one SYCOMORE evaluation per session. Using the extended Amdahl's law, the time that would be spent on one processor to achieve that sampling is time(1) = 11.37 h.

N     Nbr of processors   Wall clock time     T_eval
255   256                 4 min 56 s          8.4 s
127   128                 7 min 53 s          6.0 s
95    96                  11 min 09 s         5.3 s
63    64                  13 min 26 s         5.1 s
47    48                  17 min 30 s         4.9 s
31    32                  23 min 27 s         4.4 s
27    28                  53 min 20 s         4.3 s
23    24                  1 h 02 min 37 s     4.2 s
19    20                  1 h 15 min 47 s     4.2 s
15    16                  48 min 36 s         4.4 s
11    12                  2 h 10 min 55 s     4.0 s
7     8                   1 h 38 min 32 s     4.1 s
3     4                   3 h 58 min 52 s     4.1 s

slow down SYCOMORE even if both run on the same processor. Thus, a good scaling efficiency is expected.
The execution time is expected to scale almost as the inverse of the number of sessions since the evaluations are independent. This statement holds until the number of parallel sessions reaches the maximal number of independent SYCOMORE evaluations. A scalability test has been done by generating a random sample of SYCOMORE evaluations with URANIE. A sample of 10 000 SYCOMORE evaluations has been computed for different numbers of parallel sessions on the EUROfusion Gateway cluster [4]. The wall clock time spent for the sampling with different numbers of parallel sessions is given in Table 1, together with the averaged time spent for one SYCOMORE evaluation per session. The number of processors requested for the sampling varies between 4 and 256. One can notice that the number N of SYCOMORE parallel sessions is the number of processors minus one, since one processor is allocated to the URANIE master session, as shown in Table 1.
Fig. 7 shows that the time spent to compute the sample of 10 000 SYCOMORE evaluations decreases with the number of processors. The red curve on the figure is a fit of those points using the extended Amdahl's law [23-25]. This law gives the expected speedup of the parallel computation as a function of the number of processors.
Taking the fraction P of the code that can be made parallel, the time time(N) spent to achieve the computation on N parallel sessions (i.e. N + 1 processors) should follow:

time(N) = time(1) [ (1 − P) + P/N + γ N ],    (1)

where the term γN accounts for the additional serial time spent on operations such as inter-processor communications. The fit of the points with the extended Amdahl's law gives a good scaling efficiency of P = 0.9980. The time that would be spent on one processor to achieve that sampling is time(1) = 11.37 h. The inter-processor communication coefficient is γ = 1.133 × 10⁻⁵, which becomes non-negligible at N = 255, with a 33% contribution to the time. This contribution of the inter-processor communications can also be observed in the time T_eval, which increases with the number of processors in Table 1.

Fig. 7. Scalability graph for different numbers of parallel sessions. The study has been achieved with a sampling of 10 000 SYCOMORE evaluations.
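Using the reconstructed form of Eq. (1) and the fitted values quoted above, one can check the order of magnitude of these contributions; the short arithmetic below is ours, while P, γ and time(1) come from the fit:

```python
P, GAMMA, T1 = 0.9980, 1.133e-5, 11.37   # fitted values from the text (T1 in hours)

def predicted_time(n_sessions):
    """Extended Amdahl's law, Eq. (1): serial + parallel + communication terms."""
    return T1 * ((1.0 - P) + P / n_sessions + GAMMA * n_sessions)

for n in (3, 31, 255):
    total = predicted_time(n)
    comms = T1 * GAMMA * n
    print(f"N = {n:3d}: {60.0 * total:6.1f} min predicted, "
          f"communications share {100.0 * comms / total:4.1f}%")
# At N = 255 the communication term contributes about 33% of the predicted time,
# consistent with the statement above.
```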
6. Application to DEMO design optimization with URANIE-SYCOMORE

An example of a DEMO design optimization is given in this section to illustrate the performance of this coupling framework. The aim of the optimization is to find the smallest reactor which produces a net electric power of at least 500 MW. A pulsed reactor (finite duty cycle) is studied in the present optimization problem. A long pulse of 2 h corresponds to an adequate availability/reliability operation over a reasonable time span. These specifications are typical of a power plant demonstrator; the DEMO design requirements are presented in [26].
Only five optimization variables, which represent significant quantities for tokamak design, are used for the sake of simplicity. These variables are the major radius R, the minor radius a of the torus, the toroidal magnetic field amplitude at the center of the plasma Bt, the safety factor q95, which defines the maximum ratio of the poloidal magnetic field (plasma current) to the toroidal magnetic field, and the density-averaged electron temperature ⟨Te⟩ne (see [27,28,8] for further details about tokamak physics and the DEMO description).
Three aspects of the SYCOMORE study by URANIE are addressed in the following parts. First, a preliminary identification of the validity domain is obtained by a random sampling; this also defines an area in which the optimum should be found. Then, a genetic algorithm optimizes the five variables mentioned above to determine the minimal size of the reactor in terms of major radius. Finally, a multi-criteria optimization is done with a GA: the major radius is minimized and the net electric power is simultaneously maximized. The constraint on the pulse duration is kept, and it has to be longer than 2 h.

6.1. SYCOMORE sampling

A random sampling following a uniform distribution has been done on the five input variables mentioned above. A sample of 9.72 × 10⁶ points has been generated inside the domain of study presented in Table 2. The generation of these points took a total of 43 h 12 min in two runs executed on 256 processors and one run executed on 320 processors. Due to a 24 h limitation of the runs on the IPP Garching computing center, it was chosen to split this sampling into three separate runs. This ensured a proper ending of the computations, even though a data recovery option is available in URANIE when the wall time limit is exceeded. All the data have been gathered together since the sampling follows a uniform distribution. Such a sampling represents an equivalent grid pattern of around 25 divisions on each variable. The mean interval between two consecutive points on each variable is given in Table 2.
Over the 9.72 million evaluations, around two-thirds have been rejected and 3.07 × 10⁶ points remained valid. Validity means here that SYCOMORE finished correctly, i.e. that all the algorithms in the SYCOMORE components converged correctly without any error flag. A part of these valid


Table 2
Minimal and maximal values for the five input variables of the study. The mean interval between two consecutive points of the random sampling described in Section 6.1 is also given.

Input variable                          Symbol         Minimal value   Maximal value   Mean interval
Major radius                            R (m)          7.0             15.0            0.32
Minor radius                            a (m)          2.0             6.0             0.16
Toroidal field amplitude                Bt (T)         4.0             8.0             0.16
Safety factor                           q95            3.0             10.0            0.28
Density-averaged electron temperature   ⟨Te⟩ne (keV)   8               20              0.48


Fig. 8. Feasibility domain (red points) delimited by a red line and points complying
with the constraints for the optimization (black points). The area of the expected
optimum is shown by an ellipse. The minor radius a is on the horizontal axis and
the major radius R is on the vertical one. (For interpretation of the references to
color in this figure legend, the reader is referred to the web version of this article.)

points corresponds to non-feasible reactors, for instance when the remaining space inside the reactor is negative or when the burn time is negative. These points have to be rejected too. After filtering the feasible points, 518 666 points remain; they are shown in red in Fig. 8. The feasible points do not cover the whole studied domain: a non-accessible area exists, as shown in Fig. 8. For instance, in the R-a domain, the feasibility zone is delimited by a line corresponding to the equation R = 1.289 a + 3.779 m. This line represents the minimal space inside the reactor allowing all the sub-systems to be enclosed.
Such a sampling is useful to prepare optimizations and predict their results. 82 422 points are left after selecting the points which respect the constraints on electric power and pulse duration. The point corresponding to the minimal major radius is at R = 9.085 m. This uniform sampling leads to the conclusion that the optimum lies in the interval [8.585 m, 9.585 m] with a confidence of 95%, calculated for a uniform distribution. An ellipse shows the position of the area where the optimum is expected.
6.2. Single criterion optimization of SYCOMORE
An optimization of the five variables mentioned above has been done with a genetic algorithm in order to minimize the major radius, with the constraints of a minimum net electric power of 500 MW and a minimum pulse duration of 2 h. The constraint of getting feasible reactors is added too. The termination condition of the algorithm is that the major radius of the whole final population must be within a tolerance interval of 2 cm. This ending condition does not correspond to any SYCOMORE uncertainty on the determination of the tokamak dimensions; it is only a precision on the convergence of the optimization algorithm.
The optimum has been found after 47 generations of 500 elements in the population, which represents a total of 43 207 SYCOMORE evaluations. This number includes non-valid points and


Fig. 9. Pareto frontier for the two-criteria optimization, which involves minimizing the major radius and maximizing the net electric power. The color scale represents the minor radius. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

non-feasible points. The optimization has been achieved on 255 parallel sessions of SYCOMORE-URANIE and took about 19 min. The best point of the final population has a major radius of 8.913 m. This value is exactly in the range of the optimum predicted by the sampling, which was 9.085 m ± 0.50 m. The major radius of the final population ranges from 8.912 m to 8.932 m, with an averaged value of 8.929 m. The optimal point values, the final population characteristics and the comparison to the value predicted by the sampling are summarized in Table 3. More characteristics of the optimum are given in Appendix B.
The comparison between the optimum found with the brute-force sampling and the one found with the GA shows that a better optimum can be found by the latter with far fewer evaluations. This demonstrates the efficiency of genetic algorithms.
6.3. Multi-criteria optimization of SYCOMORE
The multi-criteria optimization done here has two objectives:
- minimizing the major radius (R);
- maximizing the net electric power (Pnet).
The pulse duration is still constrained to be longer than 2 h.
The optimization took 4 h 32 min with 255 parallel sessions of SYCOMORE-URANIE. The Pareto frontier has been found after 40 generations of 1500 elements in the population, which represents a total of 659 143 SYCOMORE evaluations, including non-valid points and non-feasible points. No ending condition is defined here since the algorithm stops when the entire population becomes non-dominated following Eqs. (A.1) and (A.2). The Pareto frontier is presented in Fig. 9.
The advantage of multi-criteria optimization is that the Pareto frontier shows the trend of the curve following the evolution of the different objectives. For instance, the slope of the curve shows how much the major radius has to increase to obtain a higher-power reactor at a given net electric power. One can observe that the most favorable slope is obtained between about 400 MW and 1400 MW.


Table 3
Single-criterion optimization: characteristics of the optimal population and comparison with the sampling prediction and its confidence intervals.

Variable                                Symbol         Optimum   Mean    Min     Max     Predicted value by the sampling
Major radius                            R (m)          8.913     8.929   8.913   8.933   9.085 ± 0.50
Minor radius                            a (m)          2.491     2.497   2.450   2.522   2.172 ± 0.25
Toroidal field amplitude                Bt (T)         5.812     5.810   5.688   5.953   6.826 ± 0.25
Safety factor                           q95            3.064     3.068   3.028   3.099   3.009 ± 0.44
Density-averaged electron temperature   ⟨Te⟩ne (keV)   11.67     11.66   11.52   11.97   11.77 ± 0.75
Net electric power                      Pnet (MW)      501.8     504.0   500.0   516.3   531.1
Pulse duration                          (s)            7235      7322    7200    7769    8510

Table 4
Values of the point closest to 500 MW in the two-criteria optimization and comparison with the solution of the single-criterion optimization.

Variable                                Symbol         Simple optimization   2-criteria optimization
Major radius                            R (m)          8.913                 8.851
Minor radius                            a (m)          2.491                 2.284
Toroidal field amplitude                Bt (T)         5.812                 6.285
Safety factor                           q95            3.064                 3.000
Density-averaged electron temperature   ⟨Te⟩ne (keV)   11.67                 11.70
Net electric power                      Pnet (MW)      501.8                 504.1
Pulse duration                          (s)            7235                  7286

The variable values of the point closest to 500 MW are listed in Table 4. Further characteristics of the DEMO performance corresponding to that point are given in Appendix B. These values are similar to the optimum found by the single-criterion optimization, except for the Bt variable, which differs by about 8%. This difference suggests a weak dependency of the optimum on the toroidal magnetic field. This observation has to be confirmed by a sensitivity study of SYCOMORE on the toroidal magnetic field variation.

7. Conclusion

A generic method for the coupling between a multi-physics workflow engine and an optimization framework has been developed. This coupling aims at making the two sides of the coupling as independent as possible in order to preserve their integrity. This has been done by using a separate and independent communication library to achieve the data exchange. This library, called KUI, has been developed to exchange data using a socket-based communication. Any workflow or any optimization framework can be replaced by another one without significant changes in their content. The data exchange protocol between the two frameworks has been standardized to allow the exchange of any output values generated by the workflow, so as to make the communication operational. For instance, infinite values or non-valid data are identified in the workflow and treated to make these data transmissible and usable by the optimizer.
This coupling has been applied to the SYCOMORE system code for optimizing the DEMO fusion reactor design. The SYCOMORE workflow is a sequence of physics and engineering components developed on the KEPLER graphical user interface. Inter-component communication is implemented by using the EU-IM data model, which defines standard data communication for all the components.
Optimization with the genetic algorithms provided by the URANIE platform has been chosen for its robustness, especially in the presence of non-valid data. Such an algorithm has required a parallelization of SYCOMORE-URANIE to handle the need for a large number of SYCOMORE evaluations. A parallelization framework has been proposed for any workflow. A scalability test has shown the efficiency of the adopted method: following Amdahl's law, a parallelizable fraction of the code of 0.998 has been found, which is a good scaling efficiency.
Finally, the performance of the URANIE-SYCOMORE coupling has been tested. Different optimizations using a large sample of points and genetic algorithms for single- and two-criteria optimization have shown coherent results. This confirms the capabilities and performance of the framework proposed here. The SYCOMORE system code coupled to the URANIE platform is thus a complete framework to achieve DEMO design studies. The coupling method presented in this paper constitutes a generic and efficient framework which could be exported to other contexts of physics modeling and/or other optimizations.

Acknowledgments

This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No. 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission. The computations have been performed on the EUROfusion Gateway Cluster hosted by IPP Garching, using the framework developed and maintained by the EU-IM Team.1
The authors warmly thank Gilles Arnaud and Fabrice Gaudier (CEA, DEN, Saclay, DM2S, STMF, F-91191 Gif-sur-Yvette, France) for providing the URANIE platform and for their valuable advice on URANIE.
Appendix A. Vizir genetic algorithm
A.1. Real-coded GA
Originally, genetic algorithms, called binary-coded genetic algorithms, were developed to work with multiple binary variables. This means that each variable, called a gene, is represented by a binary string. All the genes of one point are then gathered into one binary string to form a chromosome. But the representation of the genes, for instance as binary strings, is a key issue for GAs since they directly manipulate the characters constituting the chromosomes (i.e. the "0" and "1" characters). Several limitations are associated with the binary representation for the optimization of real-number functions [29]. Therefore, real-coded genetic algorithms have been developed in order to better adapt GAs to functions of real numbers. In that case, genes are real numbers and chromosomes
1 See http://www.euro-fusionscipub.org/eu-im.


Table B.5
DEMO characteristics for the optimum obtained with the mono-criterion optimization (Section 6.2) and for the optimum at 500 MW obtained with the 2-criteria optimization (Section 6.3).

Variable                                         Symbol                   Simple optimization      2-criteria optimization
Major radius                                     R (m)                    8.913                    8.851
Minor radius                                     a (m)                    2.491                    2.284
Toroidal field amplitude                         Bt (T)                   5.812                    6.285
Safety factor                                    q95                      3.064                    3.000
Density-averaged electron temperature            ⟨Te⟩ne (keV)             11.67                    11.70
Net electric power                               Pnet (MW)                501.8                    504.1
Pulse duration                                   (s)                      7235                     7286
Total plasma current                             Ip (MA)                  14.25                    13.18
Bootstrap fraction                                                        30%                      31%
NBI-driven current                                                        70%                      69%
Confinement time                                 τE (s)                   3.20                     2.97
Thermal energy content                           (MJ)                     756.3                    696.4
Normalized total beta value                      βN                       2.49                     2.50
Z-effective                                      Zeff                     1.49                     1.48
Helium concentration                                                      5.5%                     5.6%
Argon concentration                                                       0.12%                    0.12%
Volume-averaged electron density                 ne (m⁻³)                 8.22 × 10¹⁹              9.02 × 10¹⁹
Volume-averaged electron temperature             Te (keV)                 10.67                    10.70
Electron density (maximum, pedestal, edge)       ne,max,ped,edg (m⁻³)     9.79, 7.53, 3.6 × 10¹⁹   10.8, 8.29, 3.6 × 10¹⁹
Electron temperature (maximum, pedestal, edge)   Te,max,ped,edg (keV)     25.7, 2.34, 0.1          25.2, 2.75, 0.1
Fusion power                                     Pfus (MW)                1366                     1365
Bremsstrahlung power loss                        Pbrem (MW)               31                       32
Synchrotron power loss                           Psynch (MW)              17                       17
Line radiation power loss                        Pline (MW)               20                       20
Power through the separatrix                     Pcon (MW)                113                      232
Neutral Beam Injection power                     PNBI (MW)                25                       25
Fusion power gain                                Q                        52.73                    52.78

are vectors of real numbers made of genes. Unlike binary-coded GAs, real-coded GAs manipulate real numbers instead of the characters (i.e. numerical digits here) constituting the chromosomes. The Vizir genetic algorithm included in URANIE is a real-coded GA.
Furthermore, Vizir uses a diploid representation of the elements for a fraction of the population [19]. Diploid elements are made of two different chromosomes plus a dominance vector, which is a vector of numbers between 0 and 1 generated by the GA. Diploidy increases the diversity of the created children and thus increases the exploration of the domain by the GA [30,18,16]. Both features of the URANIE Genetic Algorithms are an advantage over several existing GAs.
A.2. Pareto frontier
The Pareto frontier is an optimal k-dimensional surface found by using a dominance relation, which is an order relation extended to vectors [19,16]. The dominance relation has been specifically defined for multi-criteria optimization. A point x ∈ Rⁿ dominates a point y ∈ Rⁿ for the k functions fj, with j ∈ [[1, k]], if the following relations are respected:

∀ j ∈ [[1, k]],  fj(x) ≤ fj(y)    (A.1)
∃ j₀ ∈ [[1, k]],  fj₀(x) < fj₀(y).    (A.2)

Using this relation, Vizir ranks each point in the population by the number of times the point is dominated: the more times a point is dominated, the worse its rank. The best points are then selected and recombined following the four rules of GAs presented in Section 4.2. The non-domination of all the elements of the population against each other defines the termination condition in Vizir for the multi-criteria optimization. An important part of the GA's job is to obtain a final population distributed as uniformly as possible over the Pareto frontier [19]. This approach makes Vizir a complete and powerful multi-criteria optimizer without any reduction to a mono-criterion optimization.
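The dominance relation of Eqs. (A.1)-(A.2) and the ranking by domination count translate directly into code; the sketch below (ours, written for the minimization of every criterion) is not the Vizir implementation:

```python
def dominates(fx, fy):
    """x dominates y (Eqs. (A.1)-(A.2)): no criterion worse, at least one strictly better."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def domination_counts(objective_values):
    """Rank each point by how many other points dominate it (0 = non-dominated)."""
    counts = []
    for i, fi in enumerate(objective_values):
        counts.append(sum(dominates(fj, fi)
                          for j, fj in enumerate(objective_values) if j != i))
    return counts

# Example with two criteria to minimize: the first two points are non-dominated.
print(domination_counts([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]))   # prints [0, 0, 1]
```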

A.3. Constraint treatment


The treatment of constraints can be problematic in GAs since the optimal solution is often close to the constraint limit. A basic treatment would be to reject the points which do not respect the constraint and to keep the others, but this method tends to steer the solution away from the constraint limit and thus away from the optimum. To avoid this problem, the solution adopted in Vizir is to consider the constraint as an optimization criterion. If a point does not respect the constraint, the algorithm tries to optimize the constrained quantity instead of rejecting the point. In this way, the minimization (resp. maximization) associated with an inferiority (resp. superiority) constraint pushes the point closer and closer to the constraint until it crosses the limit. If a point respects the constraint, it is kept and no further minimization or maximization is applied to it.
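As an illustration, the sketch below (Python, illustrative names only; not the actual Vizir implementation) turns an inferiority constraint g(x) ≤ gmax into a quantity to minimize, so that infeasible points are driven towards the limit while feasible points are no longer penalized.

def constraint_criterion(g_value, g_max):
    """Turn an inferiority constraint g(x) <= g_max into a quantity to
    minimize: an infeasible point is scored by its distance to the limit,
    so the GA drives it towards feasibility, while feasible points all get
    the same (best) score and are simply kept."""
    if g_value > g_max:            # constraint violated
        return g_value - g_max     # minimize the excess
    return 0.0                     # constraint respected: nothing left to optimize

if __name__ == "__main__":
    # Example: hypothetical limit of 9.0 m on the major radius.
    for radius in (9.4, 9.1, 8.9):
        print(radius, "->", constraint_criterion(radius, g_max=9.0))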
Appendix B. Optimal DEMO characteristics
See Table B.5.
References
[1] I. Altintas, C. Berkley, E. Jaeger, M. Jones, B. Ludascher, S. Mock, Kepler: an extensible system for design and execution of scientific workflows, in: Proceedings of the 16th International Conference on Scientific and Statistical Database Management, 2004, pp. 423–424. http://dx.doi.org/10.1109/SSDM.2004.1311241.
[2] The Kepler project website. URL http://kepler-project.org.
[3] F. Gaudier, URANIE: The CEA/DEN uncertainty and sensitivity platform, in: Sixth International Conference on Sensitivity Analysis of Model Output, vol. 2, 2010, pp. 7660–7661. http://dx.doi.org/10.1016/j.sbspro.2010.05.166. URL http://www.sciencedirect.com/science/article/pii/S1877042810013078.
[4] Website presenting the EUROfusion Gateway cluster, hosted at IPP Garching. URL https://itm.ipp.mpg.de/wiki/ITM/index.php/ITM_Gateway_at_IPP_Garching.
[5] F. Duchaine, T. Morel, L.Y. Gicquel, Computational-fluid-dynamics-based kriging optimization tool for aeronautical combustion chambers, AIAA J. 47 (3) (2009) 631–645. URL http://arc.aiaa.org/doi/abs/10.2514/1.37808.
[6] I. Spisso, Parametric and optimization study: OpenFOAM and Dakota, in: PRACE (Partnership for Advanced Computing in Europe) workshop "HPC enabling of OpenFOAM for CFD applications", CINECA, 2012. URL http://www.training.prace-ri.eu/uploads/tx_pracetmo/OpenFOAM_and_Dakota.pdf.


[7] W.R. Elwasif, D.E. Bernholdt, S. Pannala, S. Allu, S.S. Foley, Parameter sweep and optimization of loosely coupled simulations using the DAKOTA toolkit, in: Proceedings of the 2012 IEEE 15th International Conference on Computational Science and Engineering, CSE '12, IEEE Computer Society, Washington, DC, USA, 2012, pp. 102–110. http://dx.doi.org/10.1109/ICCSE.2012.24.
[8] C. Reux, L.D. Gallo, F. Imbeaux, J.-F. Artaud, P. Bernardi, J. Bucalossi, G. Ciraolo, J.-L. Duchateau, C. Fausser, D. Galassi, P. Hertout, J.-C. Jaboulay, A. Li-Puma, B. Saoutic, L. Zani, I. Contributors, DEMO reactor design using the new modular system code SYCOMORE, Nucl. Fusion 55 (7) (2015) 073011. URL http://stacks.iop.org/0029-5515/55/i=7/a=073011.
[9] F. Imbeaux, J. Lister, G. Huysmans, W. Zwingmann, M. Airaj, L. Appel, V. Basiuk, D. Coster, L.-G. Eriksson, B. Guillerminet, D. Kalupin, C. Konz, G. Manduchi, M. Ottaviani, G. Pereverzev, Y. Peysson, O. Sauter, J. Signoret, P. Strand, A generic data structure for integrated modelling of tokamak physics and subsystems, Comput. Phys. Comm. 181 (6) (2010) 987–998. http://dx.doi.org/10.1016/j.cpc.2010.02.001. URL http://www.sciencedirect.com/science/article/pii/S0010465510000214.
[10] G. Falchetto, D. Coster, R. Coelho, B. Scott, L. Figini, D. Kalupin, E. Nardon, S. Nowak, L. Alves, J. Artaud, V. Basiuk, J.P. Bizarro, C. Boulbe, A. Dinklage, D. Farina, B. Faugeras, J. Ferreira, A. Figueiredo, P. Huynh, F. Imbeaux, I. Ivanova-Stanik, T. Jonsson, H.-J. Klingshirn, C. Konz, A. Kus, N. Marushchenko, G. Pereverzev, M. Owsiak, E. Poli, Y. Peysson, R. Reimer, J. Signoret, O. Sauter, R. Stankiewicz, P. Strand, I. Voitsekhovitch, E. Westerhof, T. Zok, W. Zwingmann, I. Contributors, J. Contributors, the ASDEX Upgrade Team, The European Integrated Tokamak Modelling (ITM) effort: achievements and first physics results, Nucl. Fusion 54 (4) (2014) 043018. URL http://stacks.iop.org/0029-5515/54/i=4/a=043018.
[11] Z. Dragojlovic, A.R. Raffray, F. Najmabadi, C. Kessel, L. Waganer, L. El-Guebaly, L. Bromberg, An advanced computational algorithm for systems analysis of tokamak power plants, Fusion Eng. Des. 85 (2) (2010) 243–265. http://dx.doi.org/10.1016/j.fusengdes.2010.02.015. URL http://www.sciencedirect.com/science/article/pii/S0920379610000414.
[12] M. Nakamura, R. Kemp, H. Utoh, D.J. Ward, K. Tobita, R. Hiwatari, G. Federici, Efforts towards improvement of systems codes for the broader approach DEMO design, Fusion Eng. Des. 87 (5–6) (2012) 864–867. Tenth International Symposium on Fusion Nuclear Technology (ISFNT-10). http://dx.doi.org/10.1016/j.fusengdes.2012.02.034. URL http://www.sciencedirect.com/science/article/pii/S0920379612000944.
[13] M. Kovari, R. Kemp, H. Lux, P. Knight, J. Morris, D. Ward, PROCESS: A systems code for fusion power plants - Part 1: Physics, Fusion Eng. Des. 89 (12) (2014) 3054–3069. http://dx.doi.org/10.1016/j.fusengdes.2014.09.018. URL http://www.sciencedirect.com/science/article/pii/S0920379614005961.
[14] R. Brun, F. Rademakers, ROOT - an object oriented data analysis framework, Nucl. Instrum. Methods Phys. Res. A 389 (1–2) (1997) 81–86. New Computing Techniques in Physics Research V. http://dx.doi.org/10.1016/S0168-9002(97)00048-X. URL http://www.sciencedirect.com/science/article/pii/S016890029700048X.
[15] G. Arnaud, Manuel d'utilisation de Vizir distribué v2.0, CEA report DEN/SFME/LGLS/RT/10-001/A (October 2009).
[16] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Inc., New York, NY, USA, 2001.
[17] D. Beasley, D.R. Bull, R.R. Martin, An overview of genetic algorithms: Part 1, fundamentals, 1993.
[18] D. Beasley, D.R. Bull, R.R. Martin, An overview of genetic algorithms: Part 2, research topics, 1993.
[19] M. Dumas, Optimisation multicritère par algorithmes génétiques, CEA report DEN/DM2S/SFME/LETR/02-027/A (September 2002).
[20] B. Adams, L. Bauman, W. Bohnhoff, K. Dalbey, M. Ebeida, J. Eddy, M. Eldred, P. Hough, K. Hu, J. Jakeman, L. Swiler, D. Vigil, Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: Version 5.4 user's manual, Sandia Technical Report SAND2010-2183, December 2009, updated April 2013. URL https://dakota.sandia.gov/content/manuals.
[21] M. Płóciennik, T. Zok, I. Altintas, J. Wang, D. Crawl, D. Abramson, F. Imbeaux, B. Guillerminet, M. Lopez-Caniego, I.C. Plasencia, W. Pych, P. Ciecielag, B. Palak, M. Owsiak, Y. Frauel, Approaches to distributed execution of scientific workflows in Kepler, Fundam. Inform. 128 (3) (2013) 281–302. http://dx.doi.org/10.3233/FI-2013-947.
[22] F. Gaudier, User manual for URANIE v3.3.2, 2013.
[23] G.M. Amdahl, Validity of the single processor approach to achieving large scale computing capabilities, in: Proceedings of the April 18–20, 1967, Spring Joint Computer Conference, AFIPS '67 (Spring), ACM, New York, NY, USA, 1967, pp. 483–485. http://dx.doi.org/10.1145/1465482.1465560.
[24] M. Horoi, R.J. Enbody, Using Amdahl's law as a metric to drive code parallelization: Two case studies, IJHPCA 15 (1) (2001) 75–80. http://dx.doi.org/10.1177/109434200101500107.
[25] R.G. Brown, Maximizing beowulf performance, in: Proceedings of the 4th Annual Linux Showcase & Conference - Volume 4, ALS '00, USENIX Association, Berkeley, CA, USA, 2000, pp. 29–29. URL http://dl.acm.org/citation.cfm?id=1268379.1268408.
[26] G. Federici, R. Kemp, D. Ward, C. Bachmann, T. Franke, S. Gonzalez, C. Lowry, M. Gadomska, J. Harman, B. Meszaros, C. Morlock, F. Romanelli, R. Wenninger, Overview of EU DEMO design and R&D activities, Fusion Eng. Des. 89 (7–8) (2014) 882–889. Proceedings of the 11th International Symposium on Fusion Nuclear Technology (ISFNT-11), Barcelona, Spain, 15–20 September 2013. http://dx.doi.org/10.1016/j.fusengdes.2014.01.070. URL http://www.sciencedirect.com/science/article/pii/S0920379614000714.
[27] J. Wesson, Tokamaks, fourth ed., in: International Series of Monographs on Physics, Oxford Univ. Press, Oxford, 2011.
[28] M. Kikuchi, K. Lackner, M.Q. Tran (Eds.), Fusion Physics, International Atomic Energy Agency (IAEA), 2012. URL http://www-pub.iaea.org/books/IAEABooks/8879/Fusion-Physics.
[29] F. Herrera, M. Lozano, J. Verdegay, Tackling real-coded genetic algorithms: Operators and tools for behavioural analysis, Artif. Intell. Rev. 12 (4) (1998) 265–319. http://dx.doi.org/10.1023/A:1006504901164.
[30] E. Collingwood, D. Corne, P. Ross, Useful diversity via multiploidy, in: Proceedings of the IEEE International Conference on Evolutionary Computation, 1996, pp. 810–813. http://dx.doi.org/10.1109/ICEC.1996.542705.
