Task #3370
Further improvements to GPU Buffer Ops and Comms
Description
Umbrella task for follow-up improvements.
[HIGH PRIORITY] Unification of code-paths across different types of step in do_force
- Allow GPU Force Buffer ops to be active on virial steps
- Fix uploaded https://gerrit.gromacs.org/c/gromacs/+/15960
- Unify X/F Buffer ops flags
- The above fix unifies them into a single stepWork flag
- https://gerrit.gromacs.org/c/gromacs/+/15961 moves to a unified simulationWork flag
- see comments below
- Allow GPU PME-PP comms to be active on virial steps
- Allow GPU halo exchange to be active on virial steps (requires extension to include shift force contribution)
- Unify and simplify X/F Halo exchange triggers. See comments below.
- Allow GPU X buffer ops to be active on search steps. Update: realized this is not required since there are no X buffer ops calls from do_force on search steps.
[HIGH PRIORITY] Refactoring
- eliminate regression due to moving gmx_pme_send_coordinates()
- subtask https://redmine.gromacs.org/issues/3159
- Fix uploaded https://gerrit.gromacs.org/c/gromacs/+/15200
- move ddUsesGpuDirectCommunication and related conditionals into the workload data structures (see the sketch after this list)
- Fix uploaded https://gerrit.gromacs.org/c/gromacs/+/15437
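To illustrate the intended direction (all struct, field, and function names here are hypothetical, not the actual GROMACS API), a minimal sketch of making the GPU direct-communication decision once at setup and carrying it in a workload data structure, rather than testing a free-standing ddUsesGpuDirectCommunication boolean at each call site:
```cpp
// Hypothetical sketch: the GPU direct-communication decision is made once
// during setup and stored in the simulation workload data structure.
struct SimulationWorkload
{
    bool useGpuHaloExchange       = false; // direct GPU-GPU halo exchange
    bool useGpuPmePpCommunication = false; // direct GPU-GPU PME-PP transfers
};

void setupGpuCommunicationWorkload(SimulationWorkload* simWork,
                                   bool                gpuDirectCommRequested,
                                   bool                havePpDomainDecomposition,
                                   bool                haveSeparatePmeRank)
{
    simWork->useGpuHaloExchange       = gpuDirectCommRequested && havePpDomainDecomposition;
    simWork->useGpuPmePpCommunication = gpuDirectCommRequested && haveSeparatePmeRank;
}
```
Call sites in do_force would then only test the workload flags (e.g. simWork.useGpuHaloExchange) instead of combining ddUsesGpuDirectCommunication with local conditions.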
[HIGH PRIORITY] Force buffer op and reduction cleanup/improvement
Previous general discussion at https://redmine.gromacs.org/issues/3029
- Rework GPU direct halo-exchange related force reduction complexities
- Centralize and clarify GPU force buffer clearing: the responsibility for (rvec) force buffer clearing should be moved into StatePropagatorDataGpu and arranged such that this is not a task on the critical path (as it is right now in GpuHaloExchange::Impl::communicateHaloForces()).
- Previous discussion at https://redmine.gromacs.org/issues/3142
- At the same time, we need to
- skip CPU-side force buffer clearing if there are no CPU forces computed
- check all code-paths and make sure we cannot end up with reduction kernels accumulating into uninitialized buffers.
- Launch the transform kernel back-to-back after the nonbonded rather than later, next to the CPU buffer ops/reduction
- the transform+reduce kernels can use simple or atomic accumulation into a reduced f output buffer; the former requires exclusive access to the target force buffer (i.e. waiting for the completion of any kernel that produces forces into it), while the latter only requires a wait on the source force buffer(s) to be reduced into the target (e.g. the GPU NB and/or CPU force buffer); see the sketch after this list
- consider an inline transform function for on-the-fly transform within the nonbonded kernel; in particular, at high parallelization the performance hit in the nonbonded kernel may be less than the cost of launching an extra kernel.
- Ideally the force-reduction should not be called from a method of the nonbonded module (especially due to the complexities of CPU/GPU code-paths) - consider reorganizing reductions
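To make the dependency difference concrete, a minimal CUDA sketch of a transform+reduce step with both accumulation modes (kernel and parameter names are hypothetical, not the GROMACS implementation; the nbat layout is simplified to an index gather):
```cpp
#include <cuda_runtime.h>

// Hypothetical sketch: gather forces from the nbnxm (nbat) ordering back into
// the rvec-layout target buffer, accumulating either atomically or directly.
__global__ void transformAndReduceForces(const float3* __restrict__ fNbat,   // source forces, nbat order
                                         const int* __restrict__    cell,    // nbat index of each atom
                                         float3*                    fTarget, // rvec-layout output buffer
                                         int                        numAtoms,
                                         bool                       accumulateAtomically)
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numAtoms)
    {
        return;
    }
    const float3 f = fNbat[cell[i]];
    if (accumulateAtomically)
    {
        // Atomic accumulation: only the source buffer(s) need to be ready;
        // other producers may still be accumulating into fTarget concurrently.
        atomicAdd(&fTarget[i].x, f.x);
        atomicAdd(&fTarget[i].y, f.y);
        atomicAdd(&fTarget[i].z, f.z);
    }
    else
    {
        // Simple accumulation: needs exclusive access to fTarget, i.e. every
        // kernel producing forces into it must have completed before launch.
        fTarget[i].x += f.x;
        fTarget[i].y += f.y;
        fTarget[i].z += f.z;
    }
}
```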
Remove Limitations
- Implement multiple pulses within GPU halo exchange communication
- subtask https://redmine.gromacs.org/issues/3106
- Fix uploaded https://gerrit.gromacs.org/c/gromacs/+/14723
- Implement multiple dimensions within GPU halo exchange communication
- Fix uploaded https://gerrit.gromacs.org/c/gromacs/+/16181
- Extend PME-PP communication to support case where PME is on CPU and PP is on GPU.
- subtask https://redmine.gromacs.org/issues/3160
- fix uploaded https://gerrit.gromacs.org/c/gromacs/+/14223
- Extend PME-PP communication to support coordinate send from CPU
- subtask https://redmine.gromacs.org/issues/3160
- fix uploaded https://gerrit.gromacs.org/c/gromacs/+/14238
Timing
- add missing cycle counters related to buffer ops/reduction launches
Improve synchronization
- Implement better receiver ready / notify in halo exchange: the current notification mechanisms effectively turn the one-sided communication into synchronous two-sided communication. Alternatives should be considered.
- Separate PME x receive sync: the data dependency synchronization should be implemented on the consumer task's end, which is PME spread in the case of PME. PME-only ranks currently enqueue the wait for the receive as soon as MPI returns. Consider instead assembling a list of events and passing it to spread. Also consider whether, when receiving from multiple PP ranks, it is more beneficial to overlap some of the receives with the event wait enqueue.
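A minimal sketch of the event-list idea (function and variable names are hypothetical): each coordinate receive records an event, and the consumer enqueues stream waits on the collected events just before the spread launch, so no host-side blocking happens on the receive side:
```cpp
#include <vector>
#include <cuda_runtime.h>

// Hypothetical sketch: record per-PP-rank "coordinates ready" events and make
// the PME stream wait on them, moving the dependency to the consumer (spread).
void recordCoordinatesReceived(cudaStream_t recvStream, cudaEvent_t event, std::vector<cudaEvent_t>* xReadyEvents)
{
    // Mark the point in recvStream after which this PP rank's coordinates are valid on the GPU
    cudaEventRecord(event, recvStream);
    xReadyEvents->push_back(event);
}

void enqueueSpreadDependencies(cudaStream_t pmeStream, const std::vector<cudaEvent_t>& xReadyEvents)
{
    // Consumer-side synchronization: the PME stream waits on all coordinate
    // receives; the spread kernel is launched on pmeStream right after these waits.
    for (cudaEvent_t e : xReadyEvents)
    {
        cudaStreamWaitEvent(pmeStream, e, 0);
    }
}
```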
Investigate GPU f buffer ops use cases
Check whether there are any performance benefits to be had, and in which regimes, for x/f buffer ops without GPU update in:
- runs with DD and CPU update
- x buffer ops: offloadable with a likely simple crossover heuristic threshold; i.e. below N atoms/core not offloaded (locals or also nonlocals, with/without CPU work?)
- f buffer ops: heuristics likely need more complex criteria (as these are combined with reductions)
- runs with / without DD and vsites
- with GPU update this requires D2H and H2D transfers -- is it worth it? Test use-cases (e.g. multiple ranks per GPU, both ensemble and DD runs; transfers might be overlapped)
- without GPU update: the same applies as for the non-vsite runs above, except that the wait on the D2H needs to happen earlier
Evaluate the #atoms threshold under which it is not worth taking the 10-15 us overhead of a kernel launch (especially for the non-local buffer ops).
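As an illustration of the kind of heuristic meant here (the threshold value and names are purely illustrative placeholders, not taken from GROMACS or from measurements):
```cpp
// Illustrative only: a crossover threshold for offloading the x buffer ops.
// The value is a placeholder; the actual crossover would have to be measured.
constexpr int c_minAtomsForGpuXBufferOps = 10000;

bool useGpuXBufferOpsThisStep(int numLocalAtoms)
{
    // Below the threshold, the ~10-15 us kernel launch overhead on the critical
    // path likely outweighs the CPU cost of the layout transform; whether and
    // how concurrent CPU work shifts the crossover is an open question above.
    return numLocalAtoms >= c_minAtomsForGpuXBufferOps;
}
```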
Associated revisions
Redevelopment of GPU Force Reduction/Buffer Ops
Introduces a new purpose-built class for GPU force reduction, which
replaces the previous force buffer ops mechanism.
Refs #3370
Use workload data structures for GPU halo exchange triggers
Move GPU halo exchange trigger booleans and related conditionals into
workload data structures, and remove unnecessary assertion on GPU
buffer ops being active (since it is now automatically activated when
GPU halo exchange is active).
Partly addresses #3370
History
#16 Updated by Szilárd Páll 11 months ago
- Unify X/F Buffer ops flags
- x buffer ops: offloadable with a likely simple crossover heuristic threshold; i.e. below N atoms/core not offloaded (locals or also nonlocals, with/without CPU work?)
Have we done measurements of the cross-over between CPU and GPU buffer ops time? If we have not, the above goals do conflict: unifying the flags means the x buffer ops trigger cannot be tuned based on a #atoms threshold.
Additionally, the x/f buffer ops are entirely different tasks so I see little benefit in merging their workload flags -- other than saving a few bytes in the workload data structure.
Allow GPU X buffer ops to be active on search steps. Update: realized this is not required since there are no X buffer ops calls from do_force on search steps.
On search steps the search produces nonbonded-layout x, so technically it is not needed. We could change that and avoid having the search store the coordinates and call the buffer ops instead. The benefit would be uniform behavior on the GPU across all steps but different behavior for CPU and GPU search.
#17 Updated by Szilárd Páll 11 months ago
Szilárd Páll wrote:
- Unify X/F Buffer ops flags
- x buffer ops: offloadable with a likely simple crossover heuristic threshold; i.e. below N atoms/core not offloaded (locals or also nonlocals, with/without CPU work?)
Have we done measurements of the cross-over of CPU time of CPU vs GPU buffer ops?
See https://redmine.gromacs.org/issues/3029#note-1; IIRC that was CPU vs GPU kernel time, but CPU critical path will be affected more by kernel launch cost (at least until we can overlap GPU launch with CPU execution).
#19 Updated by Alan Gray 11 months ago
The idea is to try and simplify the logic in do_force (and do_md) by unifying flags and ultimately code-paths, to improve readability and maintainability of the code (and reduce the scope of required test coverage). I acknowledge that this is in conflict with the idea of developments that add more flexibility to hardware scheduling through heuristics, which may have some performance benefits but would further increase complexity. I suggest that we focus on simplification/cleanup in the short term, and put the latter idea on the backburner as a possible future optimization task.
#20 Updated by Artem Zhmurov 11 months ago
Alan Gray wrote:
The idea is to try and simplify the logic in do_force (and do_md) by unifying flags and ultimately code-paths, to improve readability and maintainability of the code (and reduce the scope of required test coverage). I acknowledge that this is in conflict with the idea of developments that add more flexibility to hardware scheduling through heuristics, which may have some performance benefits but would further increase complexity. I suggest that we focus on simplification/cleanup in the short term, and put the latter idea on the backburner as a possible future optimization task.
I agree with Szilard here. The X buffer ops most likely should be enabled when we need coordinates in nbat format on the GPU. I think we can even live with re-doing them on the GPU on search steps, so the XBuffOps flag will naturally go away. This will also eliminate the need for the copy_nbat_coordinates_host_to_device function and the logic around it. F buffer ops need more work before the corresponding flag can be eliminated in the same way.
#21 Updated by Szilárd Páll 11 months ago
This is not just a matter of leaving room for optimization. The two tasks in question, x buffer ops and f buffer ops + reduction, are entirely different tasks, so it does make sense to keep them separate.
Also note that the tasks themselves will not be "eliminated" (unless the underlying algorithms change) and therefore it is entirely reasonable to have workload flags corresponding to them. These flags define the schedule and therefore, unless a task becomes trivial or is merged into another across all code paths (i.e. all GPU code-paths support X buffer ops and always schedule it together with some other nbnxm task), the XBuffOps flag can't go away.
Last, I see only a very small code (LOC/logic) simplification in using one simulationWorkload.useGpuBufferOps versus two stepWorkload flags.
Side-note: the current thinking is that we should have an inclusive stepWorkload data structure that contains all the higher-level flags and is constructed ahead of time for N steps.
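A minimal sketch of what that could look like (names and trigger conditions are hypothetical, not the actual GROMACS data structures): separate per-step x/f buffer ops flags in a step workload descriptor, pre-computed for the next N steps:
```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of an inclusive per-step workload descriptor with
// independent flags for the x and f buffer ops, built ahead of time.
struct StepWorkload
{
    bool doNeighborSearch = false;
    bool computeVirial    = false;
    bool useGpuXBufferOps = false; // coordinate layout transform on the GPU
    bool useGpuFBufferOps = false; // force transform + reduction on the GPU
};

std::vector<StepWorkload> buildStepWorkloads(int64_t firstStep, int numSteps, int nstlist, int nstvirial, bool gpuBufferOpsEnabled)
{
    std::vector<StepWorkload> schedule(numSteps);
    for (int i = 0; i < numSteps; i++)
    {
        const int64_t step = firstStep + i;
        StepWorkload& work = schedule[i];
        work.doNeighborSearch = (step % nstlist == 0);
        work.computeVirial    = (step % nstvirial == 0);
        // The two flags stay independent, so each can later be tuned (e.g. by
        // an atom-count heuristic) without touching the other.
        work.useGpuXBufferOps = gpuBufferOpsEnabled && !work.doNeighborSearch;
        work.useGpuFBufferOps = gpuBufferOpsEnabled;
    }
    return schedule;
}
```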
Remove single-dimension limitation from GPU halo exchange
Allows GPU halo exchange to be active when the number of dimensions is
greater than 1, thus simplifying the codepath logic.
Partly addresses #3370