.. E-CAM documentation master file, created by
   sphinx-quickstart on Thu Sep 15 17:56:17 2016.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

.. _readme_meso_multi:

*****************************
Meso- and Multi-scale Modules
*****************************

Introduction
============

.. sidebar:: General Information

    .. contents:: :depth: 2

    * :ref:`contributing`
    * :ref:`search`

.. image:: ./images/DPD1.jpg
   :width: 10 %
   :align: left

This is a collection of the modules that have been created by the E-CAM
community within the area of Meso- and Multi-scale Modelling. This
documentation is created using ReStructured Text, and the git repository for
the documentation source files can be found at
https://gitlab.e-cam2020.eu/e-cam/E-CAM-Library, which is
public and open to contributions.

In the context of E-CAM, the definition of a software module is any piece of software that could be of use to the E-CAM community and that encapsulates some additional functionality, enhanced performance or improved usability for people performing computational simulations in the domain areas of interest to us. 

This definition is deliberately broader than the traditional concept of a module as defined in the semantics of most high-level programming languages and is intended to capture inter alia workflow scripts, analysis tools and test suites as well as traditional subroutines and functions. Because such E-CAM modules will form a heterogeneous collection we prefer to refer to this as an E-CAM software repository rather than a library (since the word library carries a particular meaning in the programming world). The modules do however share with the traditional computer science definition the concept of hiding the internal workings of a module behind simple and well-defined interfaces. It is probable that in many cases the modules will result from the abstraction and refactoring of useful ideas from existing codes rather than being written entirely de novo.

Perhaps more important than exactly what a module is, is how it is written and used. A final E-CAM module adheres to current best-practice programming style conventions, is well documented and comes with either regression or unit tests (and any necessary associated data). E-CAM modules should be written in such a way that they can potentially take advantage of anticipated hardware developments in the near future (and this is one of the training objectives of E-CAM). 

Pilot Projects
==============

One of the primary activities of E-CAM is to engage in pilot projects with industrial partners. These projects are
conceived together with the partner and are typically intended to facilitate or broaden the scope of computational
simulation within the partner's organisation. The related code development for the pilot projects is open source (where
the licence of the underlying software allows this) and is described in the modules associated with each pilot project.

Software related to Extended Software Development Workshops
===========================================================

DL_MESO_DPD
-----------

The following modules connected to the DL_MESO_DPD code have been produced so far:

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/DL_MESO_DPD/dipole_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD/format_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD/dipole_af_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD/moldip_af_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD_onGPU/add_gpu_version/readme
    ./modules/DL_MESO_DPD_onGPU/fftw/readme
    ./modules/DL_MESO_DPD/check_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD/tetra_dlmeso_dpd/readme
    ./modules/DL_MESO_DPD_onGPU/multi_gpu/readme
    ./modules/DL_MESO_DPD/sionlib_dlmeso_dpd/readme

ESPResSo++
----------

The following modules connected to the ESPResSo++ code have been produced so far in the context of an `associated Pilot Project <https://www.e-cam2020.eu/pilot-project-composite-materials/>`_:

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/hierarchical-strategy/components/fixed-local-tuple/readme
    ./modules/hierarchical-strategy/components/md-softblob/readme
    ./modules/hierarchical-strategy/components/minimize-energy/readme
    ./modules/hierarchical-strategy/components/constrain-com/readme
    ./modules/hierarchical-strategy/components/constrain-rg/readme
    ./modules/hierarchical-strategy/simple_one-component_melts/fbloop/readme
    ./modules/hierarchical-strategy/simple_one-component_melts/reinsertion/readme
    ./modules/hierarchical-strategy/simple_one-component_melts/fine-graining/readme
    ./modules/hierarchical-strategy/simple_one-component_melts/coarse-graining/readme

These modules have resulted in the final overarching module that captures the goal of the pilot project:

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/hierarchical-strategy/simple_one-component_melts/readme

ParaDiS
-------

The following modules connected to the ParaDiS code have been produced so far:

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/paradis_precipitate/paradis_precipitate_GC/readme
    ./modules/paradis_precipitate/paradis_precipitate_HPC/readme


GC-AdResS 
---------

These modules are connected to the Adaptive Resolution Simulation (AdResS) implementation in GROMACS.

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/GC-AdResS/Abrupt_AdResS/readme
    ./modules/GC-AdResS/AdResS_RDF/readme
    ./modules/GC-AdResS/Abrupt_Adress_forcecap/readme
    ./modules/GC-AdResS/AdResS_TF/readme
    ./modules/GC-AdResS/LocalThermostat_AdResS/readme
    ./modules/GC-AdResS/Analyse_Tools/readme
    ./modules/GC-AdResS/Analyse_VACF/readme

.. _ALL_background:

ALL (A Load-balancing Library)
------------------------------

Most modern parallelized (classical) particle simulation programs are based on a spatial decomposition method as an
underlying parallel algorithm: different processors administrate different spatial regions of the simulation domain and
keep track of those particles that are located in their respective region. Processors exchange information

* in order to compute interactions between particles located on different processors
* to exchange particles that have moved to a region administrated by a different processor.

This implies that the workload of a given processor is very much determined by its number of particles, or, more
precisely, by the number of interactions that are to be evaluated within its spatial region.
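
This can be made concrete with a small sketch (illustrative only: the slab decomposition, particle counts and the clustered distribution are invented for the example). Particles in a 1D domain are assigned to equal-width slabs, one per processor, and the ratio of the busiest processor's load to the mean quantifies the imbalance:

```python
import numpy as np

rng = np.random.default_rng(42)

n_particles = 10_000
n_procs = 8

# A deliberately inhomogeneous particle distribution (think of a condensing
# fluid): most particles cluster around x = 0.2 in the unit interval.
positions = np.clip(rng.normal(loc=0.2, scale=0.1, size=n_particles), 0.0, 0.999)

# Static, equal-width spatial decomposition: processor i owns slab i.
edges = np.linspace(0.0, 1.0, n_procs + 1)
counts, _ = np.histogram(positions, bins=edges)

# With synchronised time steps the runtime is set by the busiest processor,
# so a common imbalance metric is max(load) / mean(load); 1.0 is perfect.
imbalance = counts.max() / counts.mean()
print("particles per processor:", counts)
print(f"load imbalance max/mean: {imbalance:.2f}")
```

For this clustered distribution the busiest slab holds several times the average load, so most processors would sit idle waiting for it.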

Certain systems of high physical and practical interest (e.g. condensing fluids) dynamically develop into a state where
the distribution of particles becomes spatially inhomogeneous. Unless special care is taken, this results in a
substantially inhomogeneous distribution of the processors’ workload. Since the work usually has to be synchronized
between the processors, the runtime is determined by the slowest processor (i.e. the one with highest workload). In the
extreme case, this means that a large fraction of the processors is idle during these waiting times. This problem
becomes particularly severe if one aims at strong scaling, where the number of processors is increased at constant
problem size: every processor administrates a smaller and smaller region, so inhomogeneities become more and more
pronounced. This eventually saturates the scalability of a given problem, often at a processor count small enough
that communication overhead would otherwise still be negligible.

The solution to this problem is the inclusion of dynamic load balancing techniques. These methods redistribute the
workload among the processors, by lowering the load of the most busy cores and enhancing the load of the most idle ones.
Fortunately, several successful techniques for putting this strategy into practice are already known. Nevertheless, dynamic
load balancing that is both efficient and widely applicable implies highly non-trivial coding work. Therefore it has
not yet been implemented in a number of important codes of the E-CAM community, e.g. DL_Meso, DL_Poly, Espresso,
Espresso++, to name a few. Other codes (e.g. LAMMPS) have implemented somewhat simpler schemes, which however might turn
out to lack sufficient flexibility to accommodate all important cases. Therefore, the ALL library was created in the
context of an Extended Software Development Workshop (ESDW) within E-CAM (see `ALL ESDW event details <https://www.e-cam2020.eu/legacy_event/extended-software-development-workshop-for-atomistic-meso-and-multiscale-methods-on-hpc-systems/>`_
), where code developers of CECAM community codes were invited together with E-CAM postdocs, to work on the
implementation of load balancing strategies. The goal of this activity was to increase the scalability of these
applications to a larger number of cores on HPC systems, for spatially inhomogeneous systems, and thus to reduce the
time-to-solution of the applications.
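
To illustrate the general boundary-shifting idea behind such libraries (a toy one-dimensional scheme invented for this sketch, not ALL's actual algorithm or API), each internal domain boundary can be nudged towards its busier neighbour over repeated sweeps, so that overloaded domains shrink and underloaded ones grow:

```python
import numpy as np

def rebalance_1d(edges, work, gamma=0.5):
    """Shift each internal boundary towards the busier of its two domains.

    edges: domain boundaries, shape (n+1,); work: per-domain load, shape (n,).
    Toy diffusive scheme for illustration only.
    """
    new_edges = edges.copy()
    for i in range(1, len(edges) - 1):
        left, right = work[i - 1], work[i]
        if left + right == 0:
            continue
        # Largest allowed shift: half the narrower adjacent domain, which
        # guarantees boundaries never cross.
        room = 0.5 * min(edges[i] - edges[i - 1], edges[i + 1] - edges[i])
        # Positive when the right-hand domain is busier: moving the boundary
        # right enlarges the left domain and shrinks the busier right one.
        new_edges[i] = edges[i] + gamma * room * (right - left) / (right + left)
    return new_edges

rng = np.random.default_rng(0)
positions = np.clip(rng.normal(0.2, 0.1, 10_000), 0.0, 0.999)
edges = np.linspace(0.0, 1.0, 9)  # 8 equal slabs to start

for _ in range(20):  # a few balancing sweeps
    work, _ = np.histogram(positions, bins=edges)
    edges = rebalance_1d(edges, work)

work, _ = np.histogram(positions, bins=edges)
print("final particles per domain:", work)
print(f"final imbalance max/mean: {work.max() / work.mean():.2f}")
```

A production library must of course do this in three dimensions and in parallel (ALL's tensor method, for instance, adjusts domain boundaries along each Cartesian direction), but even this toy version shows the initially overloaded slabs shrinking as the sweeps proceed.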

.. toctree::
    :glob:
    :maxdepth: 1

    ./modules/ALL_library/tensor_method/readme