Error in MPI_Allreduce when applying AWH biasing with multiple biases sharing data
When running, e.g.,

mpirun -np 4 $gmx_mpi mdrun -v -multidir walker-1 walker-2

there is an error from MPI_Allreduce:
*** An error occurred in MPI_Allreduce
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_COMM: invalid communicator
on my machine, or, when running on the cluster:
Rank 8 [Thu Mar 1 16:03:40 2018] [c1-0c0s8n3] Fatal error in MPI_Allreduce: Invalid communicator, error stack:
MPI_Allreduce(1007): MPI_Allreduce(sbuf=MPI_IN_PLACE, rbuf=0x22e08f0, count=337, MPI_INT, MPI_SUM, MPI_COMM_NULL) failed
MPI_Allreduce(926).: Null communicator
A tpr for this is attached. This is similar to https://redmine.gromacs.org/issues/2403 in that there is no error when there is only one rank per -multidir directory. I.e.

mpirun -np 2 $gmx_mpi mdrun -v -multidir walker-1 walker-2

runs without error. I have all related fixes to date applied.