CI test, do not review: [fluid_ops] Modify _legacy_C_ops.c_softmax_with_cross_entropy #70211

Open · wants to merge 8 commits into base: develop

This PR migrates c_softmax_with_cross_entropy off the legacy path: the forward and backward op definitions move from the static-only YAML files under paddle/phi/ops/yaml/inconsistent/ into the unified ops.yaml and backward.yaml, both ops are added to the legacy exclude lists, the entry is dropped from ops_api_gen.py, and the dygraph call site in mp_ops.py switches from _legacy_C_ops to _C_ops.
1 change: 0 additions & 1 deletion paddle/fluid/pir/dialect/op_generator/ops_api_gen.py
@@ -98,7 +98,6 @@
     'c_reducescatter',
     'c_allreduce_min_',
     'c_allreduce_prod_',
-    'c_softmax_with_cross_entropy',
     'c_softmax_with_multi_label_cross_entropy',
     'distributed_fused_lamb_init',
     'distributed_fused_lamb_init_',
13 changes: 13 additions & 0 deletions paddle/phi/ops/yaml/backward.yaml
@@ -308,6 +308,19 @@
     data_type : out_grad
   no_need_buffer : input

+- backward_op : c_softmax_with_cross_entropy_grad
+  forward: c_softmax_with_cross_entropy (Tensor logits, Tensor label, int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0) -> Tensor(softmax), Tensor(loss)
+  args: (Tensor softmax, Tensor label, Tensor loss_grad,int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0)
+  output: Tensor(logits_grad)
+  infer_meta :
+    func: CSoftmaxWithCrossEntropyGradInferMeta
+    spmd_rule : CSoftmaxWithCrossEntropyGradSpmd
+    param: [softmax, label, loss_grad, ignore_index, rank, nranks]
+  kernel:
+    func: c_softmax_with_cross_entropy_grad
+    data_type: loss_grad
+    param: [softmax, label, loss_grad, ignore_index, rank, nranks]
+
 - backward_op : cast_grad
   forward : cast (Tensor x, DataType dtype) -> Tensor(out)
   args : (Tensor x, Tensor out_grad)
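As context for why the new backward entry consumes the saved softmax and loss_grad rather than the logits: softmax cross-entropy has the closed-form gradient dlogits = (softmax − one_hot(label)) · dloss. A minimal single-device NumPy sketch of that identity, ignoring ignore_index and the ring_id/rank/nranks model-parallel sharding the real kernel handles:

import numpy as np

def softmax_xent_grad(softmax, label, loss_grad):
    # Gradient of softmax cross-entropy w.r.t. logits on one device:
    # (softmax - one_hot(label)), scaled by the upstream loss gradient.
    grad = softmax.copy()
    grad[np.arange(label.shape[0]), label] -= 1.0
    return grad * loss_grad  # loss_grad has shape (N, 1), broadcasts over classes

softmax = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
label = np.array([0, 2])
print(softmax_xent_grad(softmax, label, np.ones((2, 1))))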
13 changes: 0 additions & 13 deletions paddle/phi/ops/yaml/inconsistent/static_backward.yaml
@@ -107,19 +107,6 @@
     func : c_embedding_grad
   no_need_buffer : weight

-- backward_op : c_softmax_with_cross_entropy_grad
-  forward: c_softmax_with_cross_entropy (Tensor logits, Tensor label, int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0) -> Tensor(softmax), Tensor(loss)
-  args: (Tensor softmax, Tensor label, Tensor loss_grad,int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0)
-  output: Tensor(logits_grad)
-  infer_meta :
-    func: CSoftmaxWithCrossEntropyGradInferMeta
-    spmd_rule : CSoftmaxWithCrossEntropyGradSpmd
-    param: [softmax, label, loss_grad, ignore_index, rank, nranks]
-  kernel:
-    func: c_softmax_with_cross_entropy_grad
-    data_type: loss_grad
-    param: [softmax, label, loss_grad, ignore_index, rank, nranks]
-
 - backward_op : c_softmax_with_multi_label_cross_entropy_grad
   forward: c_softmax_with_multi_label_cross_entropy (Tensor logits, Tensor label, Tensor smooth_weight, int64_t ignore_index=-100, bool sum_multi_label_loss=true, int ring_id=0, int rank=0, int nranks=0) -> Tensor(softmax), Tensor(loss)
   args: (Tensor softmax, Tensor label, Tensor smooth_weight, Tensor loss_grad, int64_t ignore_index=-100, bool sum_multi_label_loss=true, int ring_id=0, int rank=0, int nranks=0)
13 changes: 0 additions & 13 deletions paddle/phi/ops/yaml/inconsistent/static_ops.yaml
@@ -988,19 +988,6 @@
   args : (Tensor i, Tensor x)
   output : Tensor[](out)

-- op: c_softmax_with_cross_entropy
-  args: (Tensor logits, Tensor label, int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0)
-  output: Tensor(softmax), Tensor(loss)
-  infer_meta:
-    func : CSoftmaxWithCrossEntropyInferMeta
-    spmd_rule : CSoftmaxWithCrossEntropyInferSpmd
-    param: [logits, label, ignore_index, rank, nranks]
-  kernel:
-    func: c_softmax_with_cross_entropy
-    data_type : logits
-    param: [logits, label, ignore_index, rank, nranks]
-  backward: c_softmax_with_cross_entropy_grad
-
 - op: c_softmax_with_multi_label_cross_entropy
   args: (Tensor logits, Tensor label, Tensor smooth_weight, int64_t ignore_index=-100, bool sum_multi_label_loss=true, int ring_id=0, int rank=0, int nranks=0)
   output: Tensor(softmax), Tensor(loss)
1 change: 1 addition & 0 deletions paddle/phi/ops/yaml/legacy/backward_exclude.yaml
@@ -4,6 +4,7 @@

 - amax_grad
 - amin_grad
+- c_softmax_with_cross_entropy_grad
 - cast_grad
 - conv2d_transpose_double_grad
 - conv2d_transpose_grad
1 change: 1 addition & 0 deletions paddle/phi/ops/yaml/legacy/ops_exclude.yaml
@@ -19,6 +19,7 @@
 - c_concat
 - c_identity
 - c_reduce_sum
+- c_softmax_with_cross_entropy
 - c_split
 - cast
 - conv2d_transpose
13 changes: 13 additions & 0 deletions paddle/phi/ops/yaml/ops.yaml
@@ -849,6 +849,19 @@
     func : c_scatter
   traits : paddle::dialect::ForwardOnlyTrait

+- op : c_softmax_with_cross_entropy
+  args: (Tensor logits, Tensor label, int64_t ignore_index=-100, int ring_id=0, int rank=0, int nranks=0)
+  output: Tensor(softmax), Tensor(loss)
+  infer_meta:
+    func : CSoftmaxWithCrossEntropyInferMeta
+    spmd_rule : CSoftmaxWithCrossEntropyInferSpmd
+    param: [logits, label, ignore_index, rank, nranks]
+  kernel:
+    func: c_softmax_with_cross_entropy
+    data_type : logits
+    param: [logits, label, ignore_index, rank, nranks]
+  backward: c_softmax_with_cross_entropy_grad
+
 - op : c_split
   args : (Tensor x, int rank = 0, int nranks = 1, int ring_id = 0, bool use_model_parallel = true)
   output : Tensor(out)
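Assuming the usual code generation from ops.yaml, this entry should yield a _C_ops binding whose Python signature mirrors the args list, with attributes as ordinary positional/keyword parameters. A hypothetical stub of the expected shape (illustrative only, not the generated source):

def c_softmax_with_cross_entropy(logits, label, ignore_index=-100,
                                 ring_id=0, rank=0, nranks=0):
    # Expected to return (softmax, loss), matching the declared
    # 'output: Tensor(softmax), Tensor(loss)'.
    raise NotImplementedError  # stub: the real binding is generated C++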
13 changes: 2 additions & 11 deletions python/paddle/distributed/fleet/layers/mpu/mp_ops.py
@@ -440,17 +440,8 @@ def _c_softmax_with_cross_entropy(
         )

     if in_dynamic_mode():
-        softmax, loss = _legacy_C_ops.c_softmax_with_cross_entropy(
-            logits,
-            label,
-            'ring_id',
-            ring_id,
-            'rank',
-            rank,
-            'nranks',
-            nranks,
-            'ignore_index',
-            ignore_index,
+        softmax, loss = _C_ops.c_softmax_with_cross_entropy(
+            logits, label, ignore_index, ring_id, rank, nranks
         )
         if not return_softmax:
             return loss
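The call-site change above reflects the two binding conventions: the _legacy_C_ops entry point takes attributes as interleaved name/value pairs after the tensor inputs (visible in the deleted lines), while the PIR _C_ops binding takes them positionally in the order declared in ops.yaml. A self-contained sketch of the difference (both functions are illustrative stand-ins, not Paddle code):

def legacy_style(logits, label, *flat_attrs):
    # Legacy convention: trailing arguments are ('name', value) pairs,
    # e.g. ('ring_id', 0, 'rank', 0, 'nranks', 1, 'ignore_index', -100).
    return dict(zip(flat_attrs[::2], flat_attrs[1::2]))

def pir_style(logits, label, ignore_index=-100, ring_id=0, rank=0, nranks=0):
    # PIR convention: attributes are plain parameters, ordered as in ops.yaml.
    return {'ignore_index': ignore_index, 'ring_id': ring_id,
            'rank': rank, 'nranks': nranks}

# Both spellings carry the same attribute payload:
old = legacy_style(None, None, 'ring_id', 0, 'rank', 0, 'nranks', 1,
                   'ignore_index', -100)
new = pir_style(None, None, ignore_index=-100, ring_id=0, rank=0, nranks=1)
assert old == new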