[mlir] minor documentation fix in GPUTransformOps.td (#121157)
- do not refer to handles as `PDLOperation`; this is an outdated and
  incorrect description of what handles are, based on the type used in
  the early days;
- use backticks around inline code.
ftynse authored Dec 26, 2024
1 parent 62c39d7 commit 776ac21
Showing 1 changed file with 5 additions and 5 deletions:
mlir/include/mlir/Dialect/GPU/TransformOps/GPUTransformOps.td
@@ -168,13 +168,13 @@ def MapNestedForallToThreads :

 #### Return modes:

-This operation ignores non-gpu_launch ops and drops them in the return.
+This operation ignores non-`gpu_launch` ops and drops them in the return.

 If any scf.forall with tensors is found, the transform definitely
 fails.

-If all the scf.forall operations with gpu.thread mapping contained
-within the LaunchOp referred to by the `target` PDLOperation lower to GPU
+If all the `scf.forall` operations with gpu.thread mapping contained
+within the `LaunchOp` referred to by the `target` handle lower to GPU
 properly, the transform succeeds. Otherwise the transform definitely
 fails.
@@ -277,8 +277,8 @@ def MapForallToBlocks :
 If any scf.forall with tensors is found, the transform definitely
 fails.

-If all the scf.forall operations contained within the LaunchOp
-referred to by the `target` PDLOperation lower to GPU properly, the
+If all the `scf.forall` operations contained within the LaunchOp
+referred to by the `target` handle lower to GPU properly, the
 transform succeeds. Otherwise the transform definitely fails.

 The returned handle points to the same LaunchOp operand, consuming it and
