problem with particle decomposition in gromacs 4.0.4
Created an attachment (id=367)
tpr file for C30 alkane chain in SPC/E water at 300K. mdrun_mpi crashes at the beginning for particle decomposition in gromacs 4.0.4.
I am trying particle decomposition with GROMACS 4.0.4 for a small polymer (30 atoms) in SPC/E water. Unfortunately, every run with the -pd option in 4.0.4 crashes at the very beginning. With everything else unchanged, the same system runs fine in version 3.3.3, and also with domain decomposition in 4.0.4.
Here is the error I get when running a 2-processor job with the attached tpr file:
mpiexec -np 2 mdrun_mpi -deffnm c30.G43a2.300k -pd -v
starting mdrun 'Protein in water'
500000 steps, 1000.0 ps.
[node48:11810] *** Process received signal ***
[node48:11810] Signal: Segmentation fault (11)
[node48:11810] Signal code: Address not mapped (1)
[node48:11810] Failing at address: (nil)
[node48:11810] [ 0] /lib64/libpthread.so.0 [0x3abb80e540]
[node48:11810] [ 1] mdrun_mpi(update+0x3289) [0x4e3c49]
[node48:11810] [ 2] mdrun_mpi(do_md+0x2f26) [0x4302b6]
[node48:11810] [ 3] mdrun_mpi(mdrunner+0x811) [0x432dd1]
[node48:11810] [ 4] mdrun_mpi(main+0x3c4) [0x433cf4]
[node48:11810] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x3abac1e074]
[node48:11810] [ 6] mdrun_mpi [0x4192e9]
[node48:11810] *** End of error message ***
Received the TERM signal, stopping at the next step
mpiexec noticed that job rank 1 with PID 11810 on node node48.cluster.in exited on signal 11 (Segmentation fault).
1 process killed (possibly by Open MPI)
#2 Updated by Berk Hess about 11 years ago
This is caused by a bug in the combination of Parrinello-Rahman pressure
coupling with V-rescale (or Berendsen) temperature coupling.
Originally it would not make much sense to use Berendsen with PR coupling,
but V-rescale with PR makes sense.
I fixed it for 4.0.6 and 4.1.
A fixed src/mdlib/update.c is attached if you need it now.
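For reference, a minimal .mdp fragment showing the coupling combination that triggers the bug; the parameter values here are illustrative, not taken from the attached tpr file:

```
; temperature coupling: V-rescale (Berendsen is also affected)
tcoupl           = v-rescale
tc-grps          = System
tau-t            = 0.1
ref-t            = 300

; pressure coupling: Parrinello-Rahman
pcoupl           = parrinello-rahman
pcoupltype       = isotropic
tau-p            = 2.0
ref-p            = 1.0
compressibility  = 4.5e-5
```

Until you can upgrade to 4.0.6 or 4.1 (or apply the attached update.c), running with domain decomposition instead of -pd avoids the crash, as the reporter observed.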