Device-side update & constraints, buffer ops, and multi-GPU comms
GROMACS is sub-optimal on modern GPU servers.
When running on a single GPU, all force calculations are now done on the device, but the buffer operations and the update & constraints are still done on the host, requiring repeated PCIe transfers. This CPU computation and PCIe communication is an increasingly significant overhead as GPU performance continues to grow with each generation.
On multiple GPUs the situation is even worse, because the required multi-GPU communication is routed through the CPU.
NVIDIA have developed prototype code with all compute and communication parts now device-side, removing coordinate and force PCIe transfers on regular timesteps. Gerrit patch 8506 introduces device-side buffer ops, and patch 8859 (based on the buffer-ops patch) demonstrates the remainder of the new developments:
- GPU Update and Constraints
- Device MPI: PME/PP Gather and Scatter
  - Relatively straightforward solution using CUDA-aware MPI
- Device MPI: PP local/nonlocal exchanges
  - New functionality to pack device buffers and exchange them using CUDA-aware MPI
  - Similar D2D exchanges are also used for the LINCS part of the constraints
See the attached slides for more info.
These developments show major performance improvements, but are still in prototype form; the purpose of this issue is to track the work required to integrate them properly into the master branch.
Test for LINCS and SHAKE constraints.
This version updates the tests, making the selection of the
constraint algorithm more abstract. This makes it possible
to use the same test routines for new implementations (e.g.
CPU- or GPU-based) and/or algorithms (e.g. LINCS or SHAKE).
This is partly in preparation for the GPU-based version of
the constraints (Refs #2816).
CUDA version of LINCS constraints.
Implementation of the LINCS constraints for NVIDIA GPUs.
Currently works in isolation from the rest of the code:
coordinates and velocities are copied to and from the GPU on
every integration timestep. Part of the GPU-only loop.
Loosely based on change 9162 by Alan Gray. To enable,
set the environment variable GMX_LINCS_GPU.
Limitations:
1. Works only if the constraints can be split into short
uncoupled groups (currently < 256; designed for H-bonds).
2. Does not change the matrix inversion order for constraints.
3. Does not support free energy computations.
4. Assumes no communication between domains (i.e. assumes that
there are no constraints connecting atoms from two different
domains).
5. The number of threads per block should be a power of 2 for
the reduction of the virial to work.
TODO:
1. Move more data from global memory to local.
2. Change .at() to
3. Add sorting by the number of coupled constraints to decrease
4. numAtoms should be changeable (for the multi-GPU case).
CUDA version of SETTLE algorithm with basic tests
CUDA-based GPU implementation of SETTLE. This is part of the
all-GPU loop. It can work in isolation from the rest of the code,
since coordinates are copied to (from) the device before (after)
the SETTLE kernel call. The velocity update as well as the virial
evaluation can be enabled.
To enable, set the GMX_SETTLE_GPU environment variable.
Limitations:
1. Does not work when domain decomposition is enabled.
2. Projection of the derivative is not implemented.
3. Not fully integrated/unified with the CPU version.
TODO:
1. Multi-GPU case.
2. Better virial reduction. This is a more general feature,
not only related to constraints.
3. More cleanup needed in constr.cpp.
4. Better unit tests.
CUDA version of Leap-Frog integrator with basic tests
Part of the GPU-only loop. The current version is a stand-alone module,
with its own coordinate, velocity, and force data management.
To activate, set environment variable GMX_INTEGRATE_GPU.
-- Only basic Leap-Frog is implemented.
-- No temperature control.
-- No pressure control.
#7 Updated by Szilárd Páll 2 months ago
We need to decouple these changes: several distinct features are proposed here, so we need Redmine issues for each of them. I would also prefer to organize trees of issues around a certain target feature set, e.g. single-GPU no-DD all-offloaded, or multi-GPU with-DD mostly-offloaded, etc. While feature sets may overlap, the higher-level features are these parallelization functionalities, which will depend on, or be related to, both common and individual tasks.
Consequently, at least separate LINCS, SETTLE, update, halo-exchange, and PP-PME communication issues would be desirable, possibly even separate ones with/without communication (where this makes sense).