Bug #1404

hang if #threads < #cores

Added by Xavier Periole over 6 years ago. Updated over 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
mdrun
Target version:
-
Affected version - extra info:
Affected version:
Difficulty:
uncategorized

Description

The problem: GMX (4.6.1/4.6.4) hangs (or is very, very slow) when it is asked to run a number of threads different from the number of cores allocated, which results in a non-homogeneous distribution of tasks.
e.g.: 256 tasks submitted to 264 threads (11 nodes with 24 CPUs each). These end up decomposed as 23 threads on 8 nodes and 24 threads on 3 nodes, for a total of 256 threads. I can imagine that the problem could also happen on fewer nodes, but I ran into trouble because the numbers quickly become prime and mdrun complains before running.
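(For reference, that decomposition accounts for all tasks: 8 × 23 + 3 × 24 = 184 + 72 = 256 tasks over 8 + 3 = 11 nodes, leaving 264 − 256 = 8 cores idle.)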

I could not attach a tpr file that runs on 256 tasks … the system is too large (tpr of 55 MB). I'd imagine this would happen with any system.

I hope this helps.

History

#1 Updated by Berk Hess over 6 years ago

I don't understand what the exact run setup is here. Did you request 11 MPI ranks? Did you request a certain number of OpenMP threads? Or maybe you don't use any threads at all, but only MPI processes?
Could you provide the number of MPI ranks you requested and the mdrun command line?

#2 Updated by Xavier Periole over 6 years ago

Hi Berk,

This was assigned to you automatically, I guess, but Mark asked me to get this issue into Redmine because it needs to be looked at and potentially fixed.

It is all MPI, and what I called threads are actually MPI tasks, not threads in the threading sense!

Sorry for the confusion.

#3 Updated by Mark Abraham over 6 years ago

Thanks, Xavier. We really need a command line, and any relevant settings on your machine for OMP_NUM_THREADS, or your mpirun.

#4 Updated by Berk Hess over 6 years ago

You're not giving enough details. But from what you wrote, it seems like you are asking for 256 MPI ranks (without OpenMP) on 11 nodes with 24 cores each (= 264 cores). This is a bad idea in general; no solution would work well here. You should ask for 264 MPI ranks, or, probably better, use 10 nodes and ask for 240 ranks.
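As a rough sketch, assuming a SLURM-style batch system (the exact directives depend on the scheduler), that second suggestion amounts to requesting 10 nodes of 24 cores each and one MPI rank per core, so that no cores are left idle:

#SBATCH -N 10
#SBATCH -n 240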

#5 Updated by Xavier Periole over 6 years ago

Ok, let me see.

I completely understand that it is not the optimal way to use the computer. It happened because the nodes were changed from 16 cores to 24 cores, so 256 tasks no longer distributed evenly. I have since switched to 264 tasks, but the hanging of the code still does not seem intended … it is not clear to me why this might get the code stuck …

The environment is set up as follows:

OMP_NUM_THREADS=1

The relevant command lines within a script:

#SBATCH -n 256
#SBATCH -t 00:20:00

srun mdrun -rdd 1.6 -dlb yes -maxh 0.3 -cpt 5 -deffnm dynamics

This is all MPI … well, not thread_mpi.

I hope this helps.

#6 Updated by Berk Hess over 6 years ago

At what point does it get stuck? What is the last output to stderr/stdlog and md.log?
Could you run with -debug 1 and tell us what the last lines in mdrun0.debug are?
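For reference, that presumably just means appending the flag to the srun line from the job script above, e.g.:

srun mdrun -rdd 1.6 -dlb yes -maxh 0.3 -cpt 5 -deffnm dynamics -debug 1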

#7 Updated by Xavier Periole over 6 years ago

At what point does it get stuck?

It does not even make it to the first step ...

What is the last output to stderr/stdlog and md.log?
Could you run with -debug 1 and tell us what the last lines in mdrun0.debug are?

The last lines of the md.log file are:
There are: 1502395 Atoms
Charge group distribution at step 0: XX YY ….

Running with -debug 1, the last output in mdrun0.debug (and the other debug files) is:
cell_x0 16.712780 - 22.283707 skew_fac 1.000000
cell_x0 33.425560 - 44.567413 skew_fac 1.000000
cell_x2 10.584405 - 21.168810 skew_fac 1.000000
Set grid boundaries dim 0: 16.712780 22.283707
Set grid boundaries dim 1: 33.425560 44.567413
Set grid boundaries dim 2: 10.584405 21.168810

I hope this helps.

#8 Updated by Erik Lindahl over 4 years ago

  • Status changed from New to Closed

Hi,

To prepare for the upcoming 2016 release I've been going through (very) old redmine issues to see what their state is. For some of them, including this one, it is unclear whether the issue is still a concern, or if it has been fixed in newer versions. Apologies for not looking into it better if it never got fixed.

However, since we only actively fix things in the latest two releases, I'm closing this for now; please feel free to reopen if it is still a problem in Gromacs-5 or later.

For this bug in particular, there have been a lot of changes in the domain decomposition code.
