Feature #2890: GPU Halo Exchange

Parent task: Feature #2816: Device-side update & constraints, buffer ops and multi-GPU comms

Added by Alan Gray 5 months ago. Updated 17 days ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: 2020
Difficulty: uncategorized

Description

When utilizing multiple GPUs with PP domain decomposition, halo data must be exchanged between PP tasks. Currently, this is routed through the host CPUs: D2H transfer, buffer packing on the CPU, host-side MPI, and the same steps in reverse on the receiving side. This applies to both the position and force buffers.

Instead, we can transfer data directly between GPU memory spaces using GPU peer-to-peer communication. CUDA-aware MPI implementations support this directly, or D2D cudaMemcpy calls can be used when running in thread-MPI mode. The complication is that buffers must then be packed directly on the GPU. This is done by first preparing (on the CPU) a mapping of the array indices involved in the exchange, which is then used by GPU buffer packing/unpacking kernels. The mapping array only needs to be populated and transferred on neighbor search steps.
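As a rough illustration of the packing scheme described above (all names are hypothetical; this is not the actual GROMACS implementation), a CUDA sketch could look like the following: the CPU builds the index map on neighbor-search steps and copies it to the device, and a small gather kernel then packs the halo atoms into a contiguous send buffer on every step.

    // Sketch of GPU halo-buffer packing using a CPU-built index map.
    // Illustrative only; names and structure do not match the GROMACS code.
    #include <cuda_runtime.h>

    __global__ void packXSendBuffer(const float3* __restrict__ x,        // full coordinate array
                                    const int*    __restrict__ indexMap, // indices of atoms to send
                                    float3*       __restrict__ sendBuf,  // contiguous send buffer
                                    int           numHaloAtoms)
    {
        const int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numHaloAtoms)
        {
            sendBuf[i] = x[indexMap[i]];
        }
    }

    void packXHalo(const float3* d_x, float3* d_sendBuf, int* d_indexMap,
                   const int* h_indexMap, int numHaloAtoms,
                   bool isNeighborSearchStep, cudaStream_t stream)
    {
        if (isNeighborSearchStep)
        {
            // The map only changes when the decomposition does, so it is
            // (re)built on the CPU and transferred only on search steps.
            cudaMemcpyAsync(d_indexMap, h_indexMap, numHaloAtoms * sizeof(int),
                            cudaMemcpyHostToDevice, stream);
        }
        const int blockSize = 128;
        const int numBlocks = (numHaloAtoms + blockSize - 1) / blockSize;
        packXSendBuffer<<<numBlocks, blockSize, 0, stream>>>(d_x, d_indexMap,
                                                             d_sendBuf, numHaloAtoms);
    }

On the receiving side, unpacking would be the mirror image: received positions land in the halo region of the coordinate array, while received force contributions would be accumulated into the corresponding local force entries.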

Limitation: only 1D data decomposition is currently supported. Higher numbers of dimensions should be relatively straightforward to support by extending the existing work, but this still requires design and testing.

TODO: implement support for higher numbers of dimensions.
TODO: integrate the call to the force-buffer halo exchange once the force buffer ops patches are accepted.


Related issues

Related to GROMACS - Feature #3052: GPU virial reduction/calculation (New)

Associated revisions

Revision 8f42be1e (diff)
Added by Alan Gray 2 days ago

GPU halo exchange

Activate with GMX_GPU_DD_COMMS environment variable.

Class to initialize and apply halo exchange functionality directly on
GPU memory space.

Fully operational for position buffer. Functionality also present for
force buffer, but not yet called (awaiting acceptance of force buffer
ops patches).

Data transfer for halo exchange is wrapped and has 2 implementations:
cuda-aware MPI (default with "real" MPI) and direct cuda memcpy
(default with thread MPI). With the latter, the P2P path will be taken
if the hardware supports it, otherwise D2H, H2D.

Limitation: still only supports 1D data decomposition

TODO: implement support for higher numbers of dimensions.
TODO: integrate call to force buffer halo exchange, when force buffer ops
patches accepted.

Implements part of #2890
Associated with #2915

Change-Id: I8e6473481ad4d943df78d7019681bfa821bd5798
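For the two transfer paths mentioned in the revision above, a hedged sketch of how the selection might look, including the opt-in via the GMX_GPU_DD_COMMS environment variable (function and variable names are illustrative only; the actual wrapping in the patch differs):

    // Illustrative sketch only, not the GROMACS implementation.
    #include <cstdlib>
    #include <cuda_runtime.h>
    #include <mpi.h>

    // The feature is opt-in: enabled when GMX_GPU_DD_COMMS is set.
    static bool gpuHaloExchangeRequested()
    {
        return std::getenv("GMX_GPU_DD_COMMS") != nullptr;
    }

    // Send one halo buffer that already lives in GPU memory.
    static void sendHaloBuffer(float3* d_sendBuf, float3* d_remoteRecvBuf,
                               int numHaloAtoms, int remoteRank,
                               int localDevice, int remoteDevice,
                               bool useCudaAwareMpi, cudaStream_t stream)
    {
        const size_t nbytes = numHaloAtoms * sizeof(float3);

        if (useCudaAwareMpi)
        {
            // "Real" MPI: a CUDA-aware implementation accepts device pointers
            // directly (the matching MPI_Recv on the peer rank is omitted).
            MPI_Send(d_sendBuf, static_cast<int>(nbytes), MPI_BYTE,
                     remoteRank, 0, MPI_COMM_WORLD);
        }
        else
        {
            // thread-MPI: all ranks share one process, so the remote receive
            // buffer pointer can be exchanged beforehand and copied to
            // directly.  cudaMemcpyPeerAsync takes the P2P path when the
            // hardware supports it, otherwise it stages through the host
            // (D2H, H2D).
            cudaMemcpyPeerAsync(d_remoteRecvBuf, remoteDevice,
                                d_sendBuf, localDevice, nbytes, stream);
        }
    }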

History

#1 Updated by Alan Gray 5 months ago

  • Target version set to 2020

#2 Updated by Gerrit Code Review Bot 5 months ago

Gerrit received a related patchset '8' for Issue #2890.
Uploader: Alan Gray
Change-Id: gromacs~master~I8e6473481ad4d943df78d7019681bfa821bd5798
Gerrit URL: https://gerrit.gromacs.org/9225

#3 Updated by Alan Gray 5 months ago

  • Description updated (diff)

#4 Updated by Alan Gray 5 months ago

  • Description updated (diff)

#5 Updated by Alan Gray 5 months ago

Position halo exchange patch awaiting review.

#6 Updated by Alan Gray 3 months ago

  • Description updated (diff)

#7 Updated by Alan Gray 3 months ago

  • Description updated (diff)

#8 Updated by Szilárd Páll 17 days ago

  • Related to Feature #3052: GPU virial reduction/calculation added

#9 Updated by Szilárd Páll 17 days ago

Note that the force (f) exchange needs to consider #3052. A short-term solution could be to disable the direct communication on virial steps, but a final solution should ideally avoid such complexities in the control code.
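A minimal sketch of the interim idea from the comment above (hypothetical flag names, not an actual patch):

    // Hypothetical control logic only.
    static bool useDirectGpuForceHalo(bool gpuHaloExchangeEnabled, bool isVirialStep)
    {
        // Short-term workaround suggested in comment #9: fall back to the
        // staged host path for the force halo exchange on virial steps until
        // a GPU-side virial reduction (#3052) is available.
        return gpuHaloExchangeEnabled && !isVirialStep;
    }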
