:-) GROMACS - gmx mdrun, 2018.1-dev-20180313-3500c87 (-:

GROMACS is written by:
Emile Apol Rossen Apostolov Paul Bauer Herman J.C. Berendsen
Par Bjelkmar Aldert van Buuren Rudi van Drunen Anton Feenstra
Gerrit Groenhof Aleksei Iupinov Christoph Junghans Anca Hamuraru
Vincent Hindriksen Dimitrios Karkoulis Peter Kasson Jiri Kraus
Carsten Kutzner Per Larsson Justin A. Lemkul Viveca Lindahl
Magnus Lundborg Pieter Meulenhoff Erik Marklund Teemu Murtola
Szilard Pall Sander Pronk Roland Schulz Alexey Shvetsov
Michael Shirts Alfons Sijbers Peter Tieleman Teemu Virolainen
Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, version 2018.1-dev-20180313-3500c87
Executable: /data/viveca/gromacs/build-release-2018-debug-mpi/bin/gmx_mpi_debug
Data prefix: /home/viveca/gromacs (source tree)
Working dir: /home/viveca/tmp/test-maxh/dna/replica-0
Command line:
  gmx_mpi_debug mdrun -v -stepout 1 -nstlist 1 -npme 0 -ntomp 1 -maxh 0.0001 -multidir sim-0/

Reading file topol.tpr, VERSION 2018.1-dev-20180313-3500c87 (single precision)
Changing nstlist from 10 to 1, rlist from 1 to 1

This is simulation 0 out of 1 running as a composite GROMACS
multi-simulation job. Setup for this simulation:

Using 1 MPI process
Using 1 OpenMP thread

[tcbl07:2852] *** An error occurred in MPI_comm_size
[tcbl07:2852] *** on communicator MPI_COMM_WORLD
[tcbl07:2852] *** MPI_ERR_COMM: invalid communicator
[tcbl07:2852] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 2852 on
node tcbl07 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
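
Note (not part of the original log): mpirun's message above refers to the standard MPI rule that every rank which calls MPI_Init must also call MPI_Finalize before exiting, and that communicator queries such as MPI_Comm_size are only valid on a live communicator in between those two calls. The sketch below is illustrative only; it is not taken from the GROMACS sources, and the communicator handling that produced the MPI_ERR_COMM in this run happens inside gmx mdrun itself.

    /* Minimal sketch of the init/finalize pairing described by mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* every rank must call init...        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* valid here: MPI_COMM_WORLD is live  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* ...and finalize before exiting;     */
                                               /* using a freed or never-initialized  */
                                               /* communicator instead would raise    */
                                               /* MPI_ERR_COMM, as seen in the log.   */
        return 0;
    }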