Commit ca20b271 authored by Alan O'Cais's avatar Alan O'Cais

Merge branch 'master' into adress-analysis-scripts-tools

parents 3bec9533 b903f524
......@@ -107,11 +107,13 @@ Below is a list of the modules developed directly within the context of the pilo
./modules/Geomoltools/readme
./modules/GRASP_Sampling/readme
./modules/GROMACS_interface/README
./modules/Gaussian_interface/README
./modules/Selectively-Localized-Wannier-Functions/readme
./modules/Differential_Evolution/README
./modules/WLRR/README
./modules/FFTXlib/readme
.. _E-CAM: https://www.e-cam2020.eu/
########
FFTXlib
########
.. sidebar:: Software Technical Information
Language
Fortran 95
Documentation Tool
Sphinx, ReStructuredText
Application Documentation
`Doc mirror <https://gitlab.com/kucukben/fftxlib-esl-ecam/tree/master/doc>`_
Relevant Training Material
See usage examples in the ``examples`` directory of the source code.
Licence
GNU Lesser General Public License v3.0
Author of Module
Emine Kucukbenli
.. contents:: :local:
Purpose of Module
_________________
The FFTXlib module is a collection of driver routines for complex 3D fast Fourier transform (FFT) libraries,
to be used within planewave-based electronic structure calculation software.
Generally speaking, an FFT algorithm requires a data array to act on and a clear description of the
input-output sequence and transform domains.
In the context of planewave based electronic structure calculations, the data array may hold elements such as
electronic wavefunction :math:`\psi` or charge density :math:`\rho` or their functions.
The transform domains are direct (real) and reciprocal space,
the discretization in real space is represented as a uniform grid of the unit cell and
the discretization of the reciprocal space is in the basis of planewaves whose wavevectors
are multiples of reciprocal space vectors :math:`(\mathbf G)` .
To understand the main motivation behind the FFTXlib routines we need to clarify the difference between the representations
of the wavefunction and of the charge density in planewave based codes:
in these codes, the expansion of the wavefunction in the planewave basis is
truncated at a cut-off wavevector :math:`\mathbf G_{max}`.
Since the density is the norm-square of the wavefunction, an expansion consistent with
that of the wavefunctions requires a cut-off wavevector twice as large: :math:`2 \mathbf G_{max}`.
Meanwhile, the real space FFT domain is often defined by a single uniform grid of the unit cell,
so the array sizes of both :math:`\rho` and :math:`\psi` in their real space representation are the same.
Therefore, to boost optimization and to reduce numerical noise, the library implements two options when performing an FFT:
in one ('Wave') the wavevectors beyond :math:`\mathbf G_{max}` are ignored;
in the other ('Rho') no such assumption is made.
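The size difference between the two reciprocal-space spheres can be illustrated with a short counting exercise. The following is a minimal Python sketch, not FFTXlib code: it assumes a cubic cell with unit reciprocal lattice vectors and simply counts integer G-vectors inside the 'Wave' (:math:`\mathbf G_{max}`) and 'Rho' (:math:`2 \mathbf G_{max}`) cutoff spheres.

```python
import math

def count_pw(gmax):
    # Count integer G-vectors with |G| <= gmax (cubic cell, unit
    # reciprocal lattice vectors; purely illustrative).
    n = int(math.ceil(gmax))
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i * i + j * j + k * k <= gmax * gmax:
                    count += 1
    return count

n_wave = count_pw(4.0)   # 'Wave' sphere, |G| <= Gmax
n_rho = count_pw(8.0)    # 'Rho' sphere, |G| <= 2*Gmax
# the 'Rho' sphere holds roughly 8x more G-vectors (ratio of sphere volumes)
```

Skipping the wavevectors beyond :math:`\mathbf G_{max}` in the 'Wave' case therefore avoids touching most of the reciprocal-space array, which is where the optimization gain comes from.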
Another crucial feature of FFTXlib is that some approximations in the electronic structure calculations
(such as the usage of non-normconserving pseudopotentials) require that the density is not just the
norm-square of the wavefunctions, but also has spatially localized extra components. In that case,
these localized contributions may require higher G-vector components than the ones needed for the norm-square density
(:math:`> 2 \mathbf G_{max}`).
Hence, in such systems, the density array in reciprocal space has more elements
than in the norm-conserving case (in other words, a finer resolution, i.e. a denser grid, is needed in real space),
while the resolution needed to represent the wavefunctions is left unchanged.
To accommodate these different grid-size requirements, and to be able to perform Fourier transforms back and forth between them,
the FFTXlib routines explicitly require descriptor arguments which define the grids to be used. For example,
if the potential is obtained from the density, FFT operations on it should use the denser grid,
while FFTs on wavefunctions should use the smoother grid (corresponding to :math:`2\mathbf G_{max}` as defined before).
When the action of the Hamiltonian on the wavefunctions is calculated, the potential must be
brought from the dense to the smooth grid.
But when the density is calculated, the wavefunction norm-square must be carried from the smooth to the dense grid.
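Moving data between grids amounts to moving Fourier coefficients between reciprocal-space arrays of different sizes: smooth to dense is a zero-padding, dense to smooth a truncation. The 1D Python sketch below is illustrative only (it uses a naive :math:`O(N^2)` DFT rather than an actual FFT library) and shows the smooth-to-dense direction.

```python
import cmath

def dft(x, sign=-1):
    # Naive O(N^2) discrete Fourier transform (illustration only).
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]

def smooth_to_dense(values_smooth, n_dense):
    # Carry a periodic function from a smooth grid to a denser one by
    # zero-padding its Fourier coefficients (1D sketch, not FFTXlib code).
    n_s = len(values_smooth)
    coeffs = [v / n_s for v in dft(values_smooth, sign=-1)]  # normalized
    dense = [0j] * n_dense
    for k in range(n_s):
        kk = k if k <= n_s // 2 else k - n_s   # signed frequency index
        dense[kk % n_dense] = coeffs[k]        # high frequencies stay zero
    return [v.real for v in dft(dense, sign=+1)]
```

A function that is resolved on the smooth grid is reproduced exactly on the dense grid; the extra dense-grid coefficients simply remain zero.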
A final important feature of FFTXlib is the index mapping. In the simple case of no parallelization,
as a choice, the reciprocal space arrays are ordered by increasing :math:`|G|^2`
while the real space arrays are sorted in column major order.
Therefore, for an FFT to be performed, a map between these two orders must be known.
This index map is created and preserved by FFTXlib.
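A toy version of such a map can be sketched as follows. This Python fragment is an assumption for illustration (FFTXlib's actual layout and its parallel distribution are more involved): it orders the G-vectors of a small cubic grid by increasing :math:`|G|^2` and records, for each, the corresponding column-major real-space grid index.

```python
def build_gvec_map(n):
    # Map between |G|^2-ordered G-vectors and the column-major
    # index of an n*n*n FFT grid (illustrative sketch).
    gvecs = []
    for k in range(n):
        for j in range(n):
            for i in range(n):
                # fold FFT frequencies into signed integers
                g = tuple(m if m <= n // 2 else m - n for m in (i, j, k))
                col_major_index = i + n * (j + n * k)
                g2 = g[0] ** 2 + g[1] ** 2 + g[2] ** 2
                gvecs.append((g2, g, col_major_index))
    gvecs.sort(key=lambda t: t[0])        # increasing |G|^2
    return [t[2] for t in gvecs]          # reciprocal order -> grid index

nl = build_gvec_map(4)
# nl[0] is the grid index of G = (0,0,0)
```

With such a map, an array sorted by :math:`|G|^2` can be scattered onto the real-space grid (or gathered back) before and after each transform.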
In summary, FFTXlib allows the user to perform complex 3D fast Fourier transforms (FFT) in the context of
plane wave based electronic structure software. It contains routines to initialize the array structures and
to calculate the desired grid shapes. It imposes the underlying size assumptions and provides
correspondence maps for indices between the two transform domains.
Once this data structure is constructed, forward or inverse in-place FFTs can be performed.
For this purpose FFTXlib can either use a local copy of an earlier version of FFTW (a commonly used open source FFT library),
or serve as a wrapper to external FFT libraries via conditional compilation using pre-processor directives.
It supports both MPI and OpenMP parallelization technologies.
FFTXlib is currently employed within the Quantum ESPRESSO package, a widely used suite of codes
for electronic structure calculations and materials modeling at the nanoscale, based on
planewaves and pseudopotentials. FFTXlib is also interfaced with the "miniPWPP" module,
which solves the Kohn-Sham equations in a planewave basis and will soon be released as part of the E-CAM Electronic Structure Library.
Background Information
______________________
FFTXlib is mainly a rewrite and optimization of the FFT-related routines of Quantum ESPRESSO pre-v6,
and ultimately their replacement.
This may shed light on some of the variable name choices, as well as the default of the :math:`2\mathbf G_{max}` cut-off
for the expansion of the smooth part of the charge density, and the required format for lattice parameters in order to build the
FFT domain descriptor.
Despite many similarities, the current version of FFTXlib dramatically changes the FFT strategy in parallel execution,
from the 1D+2D FFT performed in QE pre-v6
to a 1D+1D+1D one, to allow for greater flexibility in parallelization.
Building and Testing
______________________________
A stable version of the module can be downloaded using `this link <https://gitlab.com/kucukben/fftxlib-esl-ecam>`_.

.. when fftxlib has its own repo, this link can be moved there.

Current installation and testing are done with the gfortran compiler, version 4.4.7.
The configuration uses GNU Autoconf 2.69.
The commands for installation are::

  $ ./configure
  $ make libfftx

As a result, the library archive ``libfftx.a`` is produced in the ``src`` directory,
and symbolically linked into a ``lib`` directory.
.. To test whether the library is working as expected, run::
.. $ make FFTXtest
.. Besides the PASS/FAIL status of the test, by changing the bash script in the tests directory, you can perform your custom tests. Read the README.test documentation in the tests subdirectory for further details about the tests.
To see how the library works in a realistic scenario of an electronic structure calculation, run::

  $ make FFTXexamples

.. Besides the PASS/FAIL status of the example, by changing the bash script in the examples directory, you can create your custom examples.

A mini-app will be compiled in the ``src`` directory and symbolically linked into the ``bin`` directory.
The mini-app simulates an FFT scenario with a test unit cell and a plane wave expansion cutoff.
It creates the FFT structures, tests forward and backward transforms on a sample array, and reports timings.
Read the README.examples documentation in the examples subdirectory for further details.
Source Code
____________
The FFTXlib bundle corresponding to the stable release can be downloaded from this `link <https://gitlab.com/kucukben/fftxlib-esl-ecam>`_.
The source code itself can be found under the subdirectory ``src``.
The development is ongoing.
The version that corresponds to that of the examples and tests can be obtained with SHA 31a6f4ecbb7ce474b0c87702c716713758f99a0a. This will soon be replaced with a version tag.
Further Information
____________________
This documentation can be found inside the ``docs`` subdirectory.
The FFTXlib is developed with the contributions of C. Cavazzoni, S. de Gironcoli,
P. Giannozzi, F. Affinito, P. Bonfa', Martin Hilgemans, Guido Roma, Pascal Thibaudeau,
Stephane Lefranc, Nicolas Lacorne, Filippo Spiga, Nicola Varini, Jason Wood, Emine Kucukbenli.
......@@ -61,6 +61,7 @@ The following modules connected to the DL_MESO_DPD code have been produced so fa
./modules/DL_MESO_DPD/dipole_af_dlmeso_dpd/readme
./modules/DL_MESO_DPD/moldip_af_dlmeso_dpd/readme
./modules/DL_MESO_DPD_onGPU/add_gpu_version/readme
./modules/DL_MESO_DPD_onGPU/fftw/readme
./modules/DL_MESO_DPD/check_dlmeso_dpd/readme
./modules/DL_MESO_DPD/tetra_dlmeso_dpd/readme
./modules/DL_MESO_DPD/sionlib_dlmeso_dpd/readme
......@@ -101,22 +102,24 @@ The following modules connected to the ParaDiS code have been produced so far:
:glob:
:maxdepth: 1
./modules/paradis_precipitate/paradis_precipitate_GC/readme.rst
./modules/paradis_precipitate/paradis_precipitate_HPC/readme.rst
./modules/paradis_precipitate/paradis_precipitate_GC/readme
./modules/paradis_precipitate/paradis_precipitate_HPC/readme
GC-AdResS
---------
Adaptive Resolution Simulation: Implementation in GROMACS
These modules are connected to the Adaptive Resolution Simulation implementation in GROMACS.
.. toctree::
:glob:
:maxdepth: 1
./modules/GC-AdResS/Abrupt_AdResS/readme.rst
./modules/GC-AdResS/Abrupt_AdResS/abrupt_adress.rst
./modules/GC-AdResS/AdResS_RDF/readme.rst
./modules/GC-AdResS/Abrupt_AdResS/readme
./modules/GC-AdResS/AdResS_RDF/readme
./modules/GC-AdResS/Abrupt_Adress_forcecap/readme
./modules/GC-AdResS/AdResS_TF/readme
./modules/GC-AdResS/LocalThermostat_AdResS/readme
.. _ALL_background:
......
.. In ReStructured Text (ReST) indentation and spacing are very important (it is how ReST knows what to do with your
   document). For ReST to understand what you intend and to render it correctly please try to keep the structure of this
   template. Make sure that any time you use ReST syntax (such as for ".. sidebar::" below), it is preceded
   and followed by white space (if you see warnings when this file is built, this is a common origin of problems).
.. Firstly, let's add technical info as a sidebar and allow text below to wrap around it. This list is a work in
progress, please help us improve it. We use *definition lists* of ReST_ to make this readable.
.. sidebar:: Software Technical Information
Name
DL_MESO (DPD).
Language
Fortran, CUDA-C.
Licence
`BSD <https://opensource.org/licenses/BSD-2-Clause>`_, v. 2.7 or later
Documentation Tool
ReST files
Application Documentation
See the `DL_MESO Manual <http://www.scd.stfc.ac.uk/SCD/resources/PDF/USRMAN.pdf>`_
Relevant Training Material
See `DL_MESO webpage <http://www.scd.stfc.ac.uk/SCD/support/40694.aspx>`_
Software Module Developed by
Jony Castagna
.. In the next line you have the name of how this module will be referenced in the main documentation (which you can
reference, in this case, as ":ref:`example`"). You *MUST* change the reference below from "example" to something
unique otherwise you will cause cross-referencing errors. The reference must come right before the heading for the
reference to work (so don't insert a comment between).
.. _dl_meso_dpd_gpu_fftw:
#################################
SPME on DL_MESO_DPD (GPU version)
#################################
.. Let's add a local table of contents to help people navigate the page
.. contents:: :local:
.. Add an abstract for a *general* audience here. Write a few lines that explains the "helicopter view" of why you are
creating this module. For example, you might say that "This module is a stepping stone to incorporating XXXX effects
into YYYY process, which in turn should allow ZZZZ to be simulated. If successful, this could make it possible to
produce compound AAAA while avoiding expensive process BBBB and CCCC."
The electrostatic force calculation usually represents the main computational cost in systems where even a small fraction of charged particles is present (>1%).
The Smooth Particle Mesh Ewald [SPME]_ method splits the electrostatic forces into two parts: a short range part, solved in real space, and a long range part, solved in Fourier space.
An error weight function combines the two contributions. For the long range force the electrical charges are spread on a virtual particle mesh using a B-spline interpolation function.
Porting the full short and long range interactions to GPUs allowed us to achieve a speedup factor of 4x when compared to a traditional 12-core Intel CPU.
One of the main applications which include electrical charges is the simulation of plasma.
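The charge-spreading step can be sketched in one dimension. The following Python fragment is a hedged illustration, not DL_MESO_DPD code: it uses a quadratic (order-3) cardinal B-spline, whereas production SPME implementations typically use higher-order splines in three dimensions.

```python
def bspline3_weights(frac):
    # Quadratic (order-3) cardinal B-spline weights for a particle at
    # fractional offset `frac` in [0,1) from the leftmost of three
    # neighbouring mesh points. The three weights always sum to 1.
    w0 = 0.5 * (1.0 - frac) ** 2
    w1 = 0.75 - (frac - 0.5) ** 2
    w2 = 0.5 * frac ** 2
    return w0, w1, w2

def spread_charge(q, x, mesh):
    # Spread charge q at continuous position x onto a periodic 1D mesh.
    n = len(mesh)
    i = int(x)           # leftmost of the three mesh points
    frac = x - i
    for k, w in enumerate(bspline3_weights(frac)):
        mesh[(i + k) % n] += q * w
    return mesh
```

Because the weights sum to one, the total charge on the mesh equals the spread charge, which is what makes the reciprocal-space solve consistent.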
Purpose of Module
_________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
The Ewald summation method scales as :math:`N^{1.5}` at best, where N is the number of charged particles. The SPME method allows for improved scaling, :math:`N \log(N)`,
but requires a stencil domain decomposition (i.e. decomposing the domain along one direction only) to allow the FFTW library to scale beyond 1 core.
If this is not used, as in the current master version of DL\_MESO\_DPD, FFTW rapidly becomes a bottleneck when scaling across several nodes.
On the other hand, porting to a single GPU does not need domain decomposition and the same speedup factor (4x compared to a 12-core Intel CPU) is maintained.
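The asymptotic difference can be made concrete with a back-of-the-envelope comparison. The numbers below are an arbitrary illustration of the scaling laws only, not measured DL_MESO_DPD timings (prefactors are ignored):

```python
import math

def ewald_cost(n):
    # Ewald summation: O(N^1.5) operations, prefactor ignored
    return n ** 1.5

def spme_cost(n):
    # SPME: O(N log N) operations, prefactor ignored
    return n * math.log(n)

ratio_small = ewald_cost(1_000) / spme_cost(1_000)
ratio_large = ewald_cost(1_000_000) / spme_cost(1_000_000)
# the relative advantage of SPME grows with the number of charges
```

For a thousand charges the nominal advantage is a factor of a few; for a million charges it is nearly two orders of magnitude, which is why the FFT bottleneck dominates at scale.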
Background Information
______________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
This module is part of the DL\_MESO\_DPD code. Full support and documentation is available at:
* https://www.scd.stfc.ac.uk/Pages/DL_MESO.aspx
* https://www.scd.stfc.ac.uk/Pages/USRMAN.pdf
To download the DL\_MESO\_DPD code you need to register at https://gitlab.stfc.ac.uk/dl_meso/dl_meso.
Please contact Dr. Michael Seaton at Daresbury Laboratory (STFC) for further details.
Building and Testing
____________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
The DL\_MESO code is developed using git version control. Currently the GPU version is under a branch named "add\_gpu\_version". After downloading the code, check out the GPU branch and look into the "DPD/gpu\_version" folder, i.e.:

* git clone DL_MESO_repository_path
* cd dl_meso
* git checkout add_gpu_version
* cd DPD/gpu_version
* make all
To compile and run the code you need to have installed the CUDA-toolkit and have a CUDA enabled GPU device (see http://docs.nvidia.com/cuda/#axzz4ZPtFifjw).
To run the case, compile the code using the "make all" command, copy the "FIELD" and "CONTROL" files into the "bin" directory and run "./dpd_gpu.exe" from there.
Source Code
___________
.. Notice the syntax of a URL reference below `Text <URL>`_ the backticks matter!
This module has been merged into the DL_MESO code. It is composed of the
following commits (you need to be registered as a developer):
* https://gitlab.stfc.ac.uk/dl_meso/dl_meso/commit/34a652fe62cadbac5e8a037b57ee9be64dcf4187
.. [SPME] U. Essmann et al., "A smooth particle mesh Ewald method", J. Chem. Phys. 103, 8577 (1995)
.. _ReST: http://www.sphinx-doc.org/en/stable/rest.html
.. _Sphinx: http://www.sphinx-doc.org/en/stable/markup/index.html
......@@ -95,8 +95,7 @@ _________________
.. : .. [CIT2009] A citation (as often used in journals).
The main performance loss of AdResS simulations in GROMACS is in the neighbour list search and the generic serial force kernel, which link the atomistic (AT) and coarse grained (CG) forces together via a smooth weighting function. Thus, to remove this performance bottleneck, and a hindrance to an easy and general implementation in other codes, we had to change the neighbour list search and get rid of the unoptimized force kernel used in GROMACS. This led to a considerable speed up of the code. Furthermore, it decouples the method from the core of any MD code, which does not hinder the performance and makes the scheme hardware independent. For the theory, application and tests see `<https://aip.scitation.org/doi/10.1063/1.5031206>`_ or `<https://arxiv.org/abs/1806.09870>`_.
.. The interface between the regions is more fluctuating and needs a more responsive thermodynamic force but it works reasonably well.
......@@ -277,7 +276,11 @@ ___________
.. Notice the syntax of a URL reference below `Text <URL>`_
To apply the patch: (:ref:`abrupt_adress_patch`)
The patch file for Abrupt GC-Adress is:
.. literalinclude:: ./abrupt_adress.patch
To apply the patch:
1) copy into the main directory (gromacs/)
......@@ -311,5 +314,3 @@ When *gmx mdrun* finished normally (with the above mentioned setup), we have sev
The files for the water example can be found here:
:download:`spc-example.tar.gz <spc-example.tar.gz>`
diff -ru /storage/mi/ck69giso/gromacs-5.1.5/src/gromacs/mdlib/update.cpp /home/mi/ck69giso/gmx-515-hck/src/gromacs/mdlib/update.cpp
--- /storage/mi/ck69giso/gromacs-5.1.5/src/gromacs/mdlib/update.cpp 2016-09-07 14:50:21.000000000 +0200
+++ /home/mi/ck69giso/gmx-515-hck/src/gromacs/mdlib/update.cpp 2018-07-24 16:07:27.000000000 +0200
@@ -67,6 +67,7 @@
#include "gromacs/utility/futil.h"
#include "gromacs/utility/gmxomp.h"
#include "gromacs/utility/smalloc.h"
+#include "adress.h"
/*For debugging, start at v(-dt/2) for velolcity verlet -- uncomment next line */
/*#define STARTFROMDT2*/
@@ -569,6 +570,8 @@
return upd;
}
+/* new */
+
static void do_update_sd1(gmx_stochd_t *sd,
int start, int nrend, double dt,
rvec accel[], ivec nFreeze[],
@@ -579,10 +582,11 @@
int ngtc, real ref_t[],
gmx_bool bDoConstr,
gmx_bool bFirstHalfConstr,
- gmx_int64_t step, int seed, int* gatindex)
+ gmx_int64_t step, int seed, int* gatindex, real fc)
{
gmx_sd_const_t *sdc;
gmx_sd_sigma_t *sig;
+
real kT;
int gf = 0, ga = 0, gt = 0;
real ism;
@@ -625,10 +629,21 @@
{
if ((ptype[n] != eptVSite) && (ptype[n] != eptShell) && !nFreeze[gf][d])
{
- real sd_V, vn;
+// real sd_V, vn;
+ real sd_V, vn, fn;
+ fn = f[n][d];
+
+// fc = 10000.;
+
+ if (fabs(fn)>fc)
+ {
+ printf("SD (I) force-cap %e\n", fn);
+ fn = fc*fn/fabs(fn);
+ }
sd_V = ism*sig[gt].V*rnd[d];
- vn = v[n][d] + (invmass[n]*f[n][d] + accel[ga][d])*dt;
+ vn = v[n][d] + (invmass[n]*fn + accel[ga][d])*dt;
+// vn = v[n][d] + (invmass[n]*f[n][d] + accel[ga][d])*dt;
v[n][d] = vn*sdc[gt].em + sd_V;
/* Here we include half of the friction+noise
* update of v into the integration of x.
@@ -668,7 +683,20 @@
{
if ((ptype[n] != eptVSite) && (ptype[n] != eptShell) && !nFreeze[gf][d])
{
- v[n][d] = v[n][d] + (im*f[n][d] + accel[ga][d])*dt;
+
+ real fn;
+
+// fc = 10000.;
+
+ fn = f[n][d];
+ if (fabs(fn)>fc)
+ {
+ printf("SD (II) force-cap %e\n", fn);
+ fn = fc*fn/fabs(fn);
+ }
+
+ v[n][d] = v[n][d] + (im*fn + accel[ga][d])*dt;
+// v[n][d] = v[n][d] + (im*f[n][d] + accel[ga][d])*dt;
xprime[n][d] = x[n][d] + v[n][d]*dt;
}
else
@@ -1644,6 +1672,8 @@
end_th = start + ((nrend-start)*(th+1))/nth;
/* The second part of the SD integration */
+ if (inputrec->bAdress)
+ {
do_update_sd1(upd->sd,
start_th, end_th, dt,
inputrec->opts.acc, inputrec->opts.nFreeze,
@@ -1653,7 +1683,23 @@
inputrec->opts.ngtc, inputrec->opts.ref_t,
bDoConstr, FALSE,
step, inputrec->ld_seed,
- DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL);
+ DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL,
+ inputrec->adress->ex_forcecap);
+ }
+ else
+ {
+ do_update_sd1(upd->sd,
+ start_th, end_th, dt,
+ inputrec->opts.acc, inputrec->opts.nFreeze,
+ md->invmass, md->ptype,
+ md->cFREEZE, md->cACC, md->cTC,
+ state->x, xprime, state->v, force,
+ inputrec->opts.ngtc, inputrec->opts.ref_t,
+ bDoConstr, FALSE,
+ step, inputrec->ld_seed,
+ DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL,
+ 5000.);
+ }
}
inc_nrnb(nrnb, eNR_UPDATE, homenr);
wallcycle_stop(wcycle, ewcUPDATE);
@@ -2031,6 +2077,21 @@
break;
case (eiSD1):
/* With constraints, the SD1 update is done in 2 parts */
+ if (inputrec->bAdress)
+ {
+ do_update_sd1(upd->sd,
+ start_th, end_th, dt,
+ inputrec->opts.acc, inputrec->opts.nFreeze,
+ md->invmass, md->ptype,
+ md->cFREEZE, md->cACC, md->cTC,
+ state->x, xprime, state->v, force,
+ inputrec->opts.ngtc, inputrec->opts.ref_t,
+ bDoConstr, TRUE,
+ step, inputrec->ld_seed, DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL,
+ inputrec->adress->ex_forcecap);
+ }
+ else
+ {
do_update_sd1(upd->sd,
start_th, end_th, dt,
inputrec->opts.acc, inputrec->opts.nFreeze,
@@ -2039,7 +2100,9 @@
state->x, xprime, state->v, force,
inputrec->opts.ngtc, inputrec->opts.ref_t,
bDoConstr, TRUE,
- step, inputrec->ld_seed, DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL);
+ step, inputrec->ld_seed, DOMAINDECOMP(cr) ? cr->dd->gatindex : NULL,
+ 5000.);
+ }
break;
case (eiSD2):
/* The SD2 update is always done in 2 parts,
.. In ReStructured Text (ReST) indentation and spacing are very important (it is how ReST knows what to do with your
   document). For ReST to understand what you intend and to render it correctly please try to keep the structure of this
   template. Make sure that any time you use ReST syntax (such as for ".. sidebar::" below), it is preceded
   and followed by white space (if you see warnings when this file is built, this is a common origin of problems).
.. Firstly, let's add technical info as a sidebar and allow text below to wrap around it. This list is a work in
progress, please help us improve it. We use *definition lists* of ReST_ to make this readable.
.. sidebar:: Software Technical Information
Name
GC-AdResS -Abrupt scheme- Force Capping
Language
Implemented in GROMACS version 5.1.5
Licence
See GROMACS web page: `<http://www.gromacs.org/>`_
Documentation Tool
Application Documentation
See GROMACS web page: `<http://www.gromacs.org/>`_
Relevant Training Material
See GROMACS web page: `<http://www.gromacs.org/>`_
.. In the next line you have the name of how this module will be referenced in the main documentation (which you can
reference, in this case, as ":ref:`example`"). You *MUST* change the reference below from "example" to something
unique otherwise you will cause cross-referencing errors. The reference must come right before the heading for the
reference to work (so don't insert a comment between).
.. _abrupt_adress_forcecap:
#######################
Abrupt-AdResS: Forcecap
#######################
.. Let's add a local table of contents to help people navigate the page
.. contents:: :local:
.. Add an abstract for a *general* audience here. Write a few lines that explains the "helicopter view" of why you are
creating this module. For example, you might say that "This module is a stepping stone to incorporating XXXX effects
into YYYY process, which in turn should allow ZZZZ to be simulated. If successful, this could make it possible to
produce compound AAAA while avoiding expensive process BBBB and CCCC."
The original idea of our proposal was to work on a general implementation
of grand canonical adaptive resolution simulations (GC-AdResS) in
classical MD packages. The current implementation of GC-AdResS in GROMACS (up to version 5.1.5) has performance problems. The Abrupt GC-AdResS implementation avoids those and makes AdResS more interesting for other MD developers (especially since we could remove the force interpolation and weighting functions from the force kernel).
This module works in combination with Abrupt AdResS and is at the same time important for a successful simulation. It shows how to avoid particle overlap at the interface between the atomistic and coarse grained regions.
Purpose of Module
_________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
The implementation of Abrupt GC-AdResS by itself works without problems only for the smallest and simplest molecules. For larger and more complex molecules the simulation crashes. This module shows a way to avoid this.
Background Information
______________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
As studies of ionic liquids and polymer melts have shown, for large and complicated molecules even the standard GC-AdResS does not work without additional force capping. The reason is that when a molecule from the coarse grained region enters the hybrid region, the atomistic representations (present due to the technically necessary double resolution) interact. Atoms can come too close together, which results in too high a force and thus in too high velocities of those particles. Since in Abrupt AdResS we avoid the generic force kernel from GROMACS, the force capping (which was previously implemented at the end of the force calculation) had to be moved and reimplemented. We finally settled on the integrator (in our case the stochastic dynamics integrator), which is the place where each force has to be read and handled.
This implementation of force capping is a rudimentary approach. The basic principle is: when two particles are too close together, and thus the forces are far higher than the average forces in the simulation, the force on each particle is rescaled to a given value. This tactic makes sure that particles are inserted into the atomistic region at reasonable velocities and temperature. As a side effect, the area of disturbance due to the introduction of a particle is limited.
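The rescaling rule itself is simple. Here it is restated as a small Python sketch of the same logic the patch adds to the stochastic dynamics integrator (the C code in the patch applies it to one Cartesian force component ``f[n][d]`` at a time):

```python
def cap_force(fn, fc):
    # If the magnitude of force component fn exceeds the cap fc,
    # rescale it to fc while preserving its sign; otherwise pass it through.
    if abs(fn) > fc:
        return fc * fn / abs(fn)
    return fn

capped = cap_force(-25000.0, 10000.0)  # -> -10000.0
```

Only the magnitude is clipped; the direction of the force is unchanged, so the particle is still pushed apart from its too-close neighbour, just at a bounded rate.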
Building and Testing
____________________
.. Keep the helper text below around in your module by just adding ".. " in front of it, which turns it into a comment
We have used this new addition to the code on two systems: water and ionic liquids. The results have been published in `<https://aip.scitation.org/doi/10.1063/1.5031206>`_ and `<https://arxiv.org/abs/1806.09870>`_. All the information about the studied systems and the performance can be found there.
The patch provided can be applied alone, without the Abrupt AdResS patch, in the main directory of GROMACS.
The important part of the patch adds an upper force limit to the stochastic dynamics integrator. If that upper limit is exceeded, the force is rescaled to the given force cap. The rest is basically to make sure that the integrator is called correctly.
There is a *print* statement which is triggered once the force on a particle exceeds the given force cap value. A force capping simulation can in some cases cause LINCS warnings. Since we take care of faulty configurations this way, we can disable those warnings (*export GMX_MAXBACKUP=-1*; *export GMX_MAXCONSTRWARN=-1*); otherwise the run generates too many files and can crash as well.
Source Code
___________
.. Notice the syntax of a URL reference below `Text <URL>`_
A note of caution: the chosen force cap value has to be high enough, otherwise normal interactions (those with forces around the average forces in the simulation) will trigger the force capping. That would change the dynamics and the structure of the system, and would also decrease the performance of the code. If chosen too high, the system might run into impossible and unstable configurations, which will result in a program crash.
Recipe for hard coded force capping:
.. literalinclude:: ./forcecap.patch
with:
fc = Chosen upper force limit for the ionic liquids simulations
fn = force acting on a particle
The patch provided can be applied in the main directory of GROMACS via:
::
patch < forcecap.patch
#!/bin/bash
source /<our path here>/gromacs/bin/GMXRC
j=$1
l=$2
k=$3
m=$4
n=$5
rmin="Start of interpolation zone"
rmax="End of interpolation zone"
rbox="total length of the box"
rref="Reference coordinate used in the AdResS input"
lc="how fine the grid for the density should be"
prefac="either select the prefactor from the formula or define a value here"
rstep=$(echo "$rbox/($lc-1)" | bc -l)
r_cut=$rmin
rm_c=$rmax
echo $r_cut, $rm_c
prep_sys=$j
case $prep_sys in
[0]*)
if [ ! -f F_th_step0 ]; then
seq -f%g 0.000 $rstep $rbox | sed 's/,/./' | awk '{print $1, "0.0", "0.0"}'> F_th_step0
fi
cp F_th_step0 tabletf_WCG.xvg
gmx mdrun -v
;;
[1]*)
if [ -f tabletf_CAT.xvg ]; then
awk 'BEGIN{OFS="\t"}(NR>9){print $1, $2, $3}' tabletf_CAT.xvg > F_th_step0
fi
if [ -f tabletf_ANN.xvg ]; then
awk 'BEGIN{OFS="\t"}(NR>9){print $1, $2, $3}' tabletf_ANN.xvg > F_th_step0
fi
;;
[2]*)
if [ -f tabletf_WCG.xvg ]; then
awk 'BEGIN{OFS="\t"}(NR>9){print $1, $2, $3}' tabletf_WCG.xvg > F_th_step0
fi
;;
esac
#rn=$(tail -n 1 F_th_step0 | awk '{printf("%f\n",$1)}')
rn1=$(echo $rn+$rstep | bc -l)
echo $rn $rn1
dens_sys=$n
dens_chk=$n
case $dens_sys in
[0]*)
csg_density --axis r --rmax 9.6 --ref [5.5,5.5,5.5] --trj ../7000_mmin/traj_co