- Registered on: 12/21/2018
- Last connection: 10/22/2019
- 08:47 AM GROMACS Revision 643e75da: Enable GPU Peer Access in GPU Utilities
- When using the new GPU communication features, enabling peer access
between pairs of GPUs (where supported) will allo...
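The truncated entry above concerns enabling peer access between pairs of GPUs. As a hedged illustration (standard CUDA runtime calls, not GROMACS's actual code; the function name is invented), peer access is typically probed and enabled like this:

```cuda
#include <cuda_runtime.h>

// Illustrative sketch: enable bidirectional peer access between two
// devices when the hardware/driver supports it. Returns true only if
// both directions could be enabled.
bool enablePeerAccess(int gpuA, int gpuB)
{
    int aToB = 0, bToA = 0;
    cudaDeviceCanAccessPeer(&aToB, gpuA, gpuB);
    cudaDeviceCanAccessPeer(&bToA, gpuB, gpuA);
    if (!aToB || !bToA)
    {
        return false; // no P2P path, e.g. GPUs on different PCIe root complexes
    }
    cudaSetDevice(gpuA);
    cudaDeviceEnablePeerAccess(gpuB, 0); // flags argument must be 0
    cudaSetDevice(gpuB);
    cudaDeviceEnablePeerAccess(gpuA, 0);
    return true;
}
```

With peer access enabled, data can move GPU-to-GPU (e.g. via cudaMemcpyPeerAsync or direct dereference in kernels) without staging through host memory, which is what makes the direct GPU communication paths in the entries below worthwhile.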
- 11:32 AM GROMACS Revision c5595a8e: GPU Coordinate PME/PP Communications
- Extends PmePpCommGpu class to provide PP-side support for coordinate
transfers from either GPU or CPU to PME task, an...
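The entry mentions accepting coordinates from either GPU or CPU memory on the PP side. One generic way to support both source locations with a single call path (an assumption for illustration, not the PmePpCommGpu implementation) is CUDA's unified virtual addressing, where cudaMemcpyDefault infers the direction from the pointers:

```cuda
#include <cuda_runtime.h>

// Illustrative sketch: copy a 3*numAtoms coordinate buffer to a
// device-resident destination. With unified virtual addressing,
// cudaMemcpyDefault handles both host- and device-resident sources.
void sendCoordinates(const float* srcXBuffer, float* destGpuBuffer,
                     size_t numAtoms, cudaStream_t stream)
{
    cudaMemcpyAsync(destGpuBuffer, srcXBuffer,
                    3 * numAtoms * sizeof(float),
                    cudaMemcpyDefault, stream);
}
```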
- 12:52 AM GROMACS Revision 5b594f3b: GPU Receive for PME/PP GPU Force Communications
- This change extends the PME/PP GPU force communication functionality
to allow the force buffer to be received direct ...
- 12:04 AM GROMACS Revision ec0aa356: PME/PP GPU Pimpl Class and GPU->CPU Communication for force buffer
- Activate with GMX_GPU_PME_PP_COMMS env variable
Implements new pimpl class for PME-PP GPU communications. Performs ...
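The "pimpl" (pointer-to-implementation) pattern mentioned in this entry keeps GPU-specific headers out of the public interface, so code that merely uses the class never needs the CUDA toolkit. A minimal generic sketch of the idiom (class and member names are illustrative, not GROMACS's PmePpCommGpu API):

```cpp
#include <memory>

// Public header: no GPU/CUDA types leak into the interface.
class PmePpComm
{
public:
    PmePpComm();
    ~PmePpComm(); // must be defined where Impl is a complete type
    int  messagesSent() const;
    void sendForces();

private:
    class Impl;                  // forward declaration only
    std::unique_ptr<Impl> impl_; // all state lives behind the pointer
};

// "Source file" part: the implementation could include CUDA headers
// freely without forcing them on consumers of the public header.
class PmePpComm::Impl
{
public:
    int sent = 0;
    void sendForces() { ++sent; } // stand-in for a real GPU transfer
};

PmePpComm::PmePpComm() : impl_(std::make_unique<Impl>()) {}
PmePpComm::~PmePpComm() = default;
int  PmePpComm::messagesSent() const { return impl_->sent; }
void PmePpComm::sendForces() { impl_->sendForces(); }
```

Defining the destructor after Impl is complete is the key detail: std::unique_ptr requires a complete type at the point of destruction.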
- 03:33 PM GROMACS Bug #3100: crash with GPU comm DD
- Fix at https://gerrit.gromacs.org/c/gromacs/+/13401. @Szilard, can you just check that this also fixes your run?
- 02:06 PM GROMACS Bug #3100: crash with GPU comm DD
- Ok, yes the previous code was also using the same for the coordinates. I don't understand why these are different - c...
- 11:15 AM GROMACS Bug #3100: crash with GPU comm DD
- I've found the bug. It's not in the GPU halo exchange, but was introduced in
c69e061 Decouple coordinates buffer ma...
- 11:46 PM GROMACS Bug #3100: crash with GPU comm DD
- I’ve looked at it some more and there is a possibility that the above extra copy caused the run to succeed not becaus...
- 04:39 PM GROMACS Bug #3100: crash with GPU comm DD
- It's looking like the issue is with the conditional D2H copy of nonlocal coordinate data after the X halo exchange:
- 12:40 PM GROMACS Bug #3100: crash with GPU comm DD
- > then there should be a sync to ensure the local X buffer ops has completed, which I think may be missing in the cod...
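The missing synchronization discussed in this thread is an ordering problem between CUDA streams: the halo exchange must not read the coordinate buffer until the buffer ops that produced it have completed. A generic sketch of event-based stream ordering (illustrative CUDA, not the actual GROMACS fix; the function name is invented):

```cuda
#include <cuda_runtime.h>

// Illustrative only: ensure work enqueued on haloStream does not
// start until the local X buffer ops enqueued on localStream finish.
void orderBufferOpsBeforeHalo(cudaStream_t localStream, cudaStream_t haloStream)
{
    cudaEvent_t bufferOpsDone;
    cudaEventCreateWithFlags(&bufferOpsDone, cudaEventDisableTiming);

    // ... local X buffer ops kernel launched on localStream here ...

    cudaEventRecord(bufferOpsDone, localStream);
    // haloStream waits on the event without blocking the host thread.
    cudaStreamWaitEvent(haloStream, bufferOpsDone, 0);

    // ... halo-exchange work launched on haloStream is now ordered
    // after the buffer ops ...

    cudaEventDestroy(bufferOpsDone); // deferred until the event completes
}
```

An extra D2H copy, as noted earlier in the thread, can mask the absence of such an ordering point by accident: the copy itself serializes against the producing stream, so the race disappears without the real dependency ever being expressed.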