Feature #2890

Updated by Alan Gray 5 months ago

When utilizing multiple GPUs with PP domain decomposition, halo data must be exchanged between PP tasks. Currently, this is routed through the host CPUs: D2H transfer, buffer packing on the CPU, host-side MPI, and then the same steps in reverse (unpacking and H2D transfer) on the receiving side. This is true for both the position and force buffers.
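
For concreteness, a minimal sketch of this host-staged path for the position buffer follows. All names (haloExchangeHostStaged, d_x, h_sendBuf, etc.) are illustrative rather than the actual GROMACS identifiers, and the buffer layout (halo atoms appended after the home atoms) is an assumption.

#include <cuda_runtime.h>
#include <mpi.h>

void haloExchangeHostStaged(float3* d_x,  // device coordinates (home + halo)
                            float3* h_x,  // host mirror of d_x
                            int     nHome, // number of home atoms on this rank
                            const int* sendIndices, int nSend,
                            float3* h_sendBuf, float3* h_recvBuf, int nRecv,
                            int sendRank, int recvRank, MPI_Comm comm)
{
    // 1. D2H: bring the coordinates back to the host.
    cudaMemcpy(h_x, d_x, nHome * sizeof(float3), cudaMemcpyDeviceToHost);

    // 2. Pack the halo send buffer on the CPU.
    for (int i = 0; i < nSend; i++)
    {
        h_sendBuf[i] = h_x[sendIndices[i]];
    }

    // 3. Host-side MPI exchange of the packed halo.
    MPI_Sendrecv(h_sendBuf, 3 * nSend, MPI_FLOAT, sendRank, 0,
                 h_recvBuf, 3 * nRecv, MPI_FLOAT, recvRank, 0,
                 comm, MPI_STATUS_IGNORE);

    // 4. H2D: place the received halo atoms after the home atoms on the GPU.
    cudaMemcpy(d_x + nHome, h_recvBuf, nRecv * sizeof(float3),
               cudaMemcpyHostToDevice);
}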

Instead, we can transfer data directly between GPU memory spaces using GPU peer-to-peer communication. Modern MPI implementations are CUDA-aware and support this, or D2D cudaMemcpy calls can be used directly when in threadMPI mode. The complication is that we need to pack buffers directly on the GPU. This is done by first preparing (on the CPU) a mapping of array indices involved in the exchange, which can then be used in GPU buffer packing/unpacking kernels. This mapping array only needs to be populated and transferred to the GPU during neighbor search steps.
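
A minimal sketch of this GPU-direct path, assuming a CUDA-aware MPI implementation so that device pointers can be passed straight to MPI calls; all names are again illustrative. The index map d_sendIndices is built on the CPU and copied to the GPU only at neighbor search steps, after which the packing kernel reuses it every step.

#include <cuda_runtime.h>
#include <mpi.h>

// Gather halo coordinates into a contiguous device-side send buffer,
// using the index map prepared on the CPU at the last neighbor search step.
__global__ void packSendBuffer(float3* d_sendBuf, const float3* d_x,
                               const int* d_sendIndices, int nSend)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nSend)
    {
        d_sendBuf[i] = d_x[d_sendIndices[i]];
    }
}

void haloExchangeGpuDirect(float3* d_x, int nHome,
                           float3* d_sendBuf, const int* d_sendIndices,
                           int nSend, int nRecv,
                           int sendRank, int recvRank, MPI_Comm comm)
{
    const int blockSize = 256;
    const int numBlocks = (nSend + blockSize - 1) / blockSize;

    // Pack directly on the GPU; no D2H staging copy is needed.
    packSendBuffer<<<numBlocks, blockSize>>>(d_sendBuf, d_x, d_sendIndices, nSend);
    cudaDeviceSynchronize();  // packing must complete before MPI reads d_sendBuf

    // CUDA-aware MPI: device pointers are passed directly, and the received
    // halo lands straight after the home atoms in GPU memory.
    MPI_Sendrecv(d_sendBuf, 3 * nSend, MPI_FLOAT, sendRank, 0,
                 d_x + nHome, 3 * nRecv, MPI_FLOAT, recvRank, 0,
                 comm, MPI_STATUS_IGNORE);
}

In threadMPI mode all ranks share one process, so the MPI_Sendrecv above can instead be a direct device-to-device cudaMemcpy between the two GPUs' buffers (with peer access enabled via cudaDeviceEnablePeerAccess), which is the D2D path mentioned above.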

Limitation: this currently supports only 1D domain decomposition. Higher numbers of dimensions should be relatively straightforward to implement by extending the existing developments, but this still requires design and testing.

TODO: implement support for higher numbers of dimensions.
TODO: integrate the call to the force buffer halo exchange once the force buffer ops patches are accepted.
