
【SCU】【Paddle TensorRT No.57】Add pd_op.temporal_shift converter #69848

Open · wants to merge 12 commits into base: develop

Conversation

PolaKuma (Contributor)

PR Category

User Experience

PR Types

New features

Description

Added the pd_op.temporal_shift Marker and Converter.
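
For reference, pd_op.temporal_shift implements the Temporal Shift Module: the input [N*T, C, H, W] is viewed as [N, T, C, H, W], the first shift_ratio fraction of channels is shifted backward along T, the next fraction forward, and the remaining channels are left untouched. A minimal NumPy sketch of these semantics (an illustration of the op, not the converter itself):

import numpy as np

def temporal_shift_ref(x, seg_num, shift_ratio=0.25):
    # x: [N*T, C, H, W] in NCHW; mirrors paddle.nn.functional.temporal_shift.
    nt, c, h, w = x.shape
    n, t = nt // seg_num, seg_num
    xr = x.reshape(n, t, c, h, w)
    c1 = int(c * shift_ratio)
    c2 = int(c * 2 * shift_ratio)
    out = np.zeros_like(xr)
    out[:, :-1, :c1] = xr[:, 1:, :c1]      # first fold: shift backward in time
    out[:, 1:, c1:c2] = xr[:, :-1, c1:c2]  # second fold: shift forward in time
    out[:, :, c2:] = xr[:, :, c2:]         # remaining channels unchanged
    return out.reshape(nt, c, h, w)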


paddle-bot (bot) commented Nov 30, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first; see the Paddle CI Manual for details.

paddle-bot added the contributor (External developers) label on Nov 30, 2024
return false;
}
#if IS_TRT_VERSION_LT(8200)
VLOG(3) << "temporal_shift is not supported when TensorRT < 8.2";
Contributor:
Is temporal_shift dependent on the TensorRT version?

Contributor Author:

It did appear in some older code, but since TensorRT > 8.4 is now the assumed baseline, the version check has been removed.

post_pad = add_1D_constant_layer(network, [0, 1, 0, 0, 0])
dims = 5
zeros = add_1D_constant_layer(network, [0] * dims)
start = trt_sum(network, zeros, pre_pad)
Contributor:

start here should use trt_sub, not trt_sum.
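
That is, since the padding-by-slice idiom uses a negative start (the slice layer in fill mode pads the out-of-bounds region before the data), the start indices come from subtracting pre_pad from zeros. A minimal sketch of the fix, assuming the helper names used in this PR (add_1D_constant_layer, trt_sub, trt_sum from paddle.tensorrt.converter_utils):

pre_pad = add_1D_constant_layer(network, [0, 1, 0, 0, 0])
post_pad = add_1D_constant_layer(network, [0, 1, 0, 0, 0])
dims = 5
zeros = add_1D_constant_layer(network, [0] * dims)
# Elementwise SUB: a negative start lets the slice layer pad one step
# before the original data along the temporal axis.
start = trt_sub(network, zeros, pre_pad)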

slice3_layer.set_input(2, slice_size3)

if slice_c == 0:
    concat_inputs = [slice2_layer.get_output(0), slice3_layer.get_output(0)]
Contributor:
If concat_inputs isn't defined before this branch, the code will raise a NameError whenever slice_c != 0; define it beforehand.
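
A minimal sketch of the suggested fix, binding concat_inputs to the general three-slice case first (slice1_layer is assumed from the surrounding converter code):

# Default: all three temporal slices feed the concat.
concat_inputs = [
    slice1_layer.get_output(0),
    slice2_layer.get_output(0),
    slice3_layer.get_output(0),
]
if slice_c == 0:
    # Only two slices participate when slice_c == 0.
    concat_inputs = [slice2_layer.get_output(0), slice3_layer.get_output(0)]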

reshape_layer.reshape_dims = trt.Dims(inputs[0].shape)

if data_format == "NHWC":
    transpose_layer = network.add_shuffle(reshape_layer.get_output(0))
Contributor:
Please rename this variable; it shadows the earlier transpose_layer, so this line silently reassigns it.
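
A minimal sketch of the rename (transpose_output_layer is a hypothetical name; the permutation shown assumes a final NCHW-to-NHWC transpose):

if data_format == "NHWC":
    # Distinct name, so the earlier transpose_layer stays intact.
    transpose_output_layer = network.add_shuffle(reshape_layer.get_output(0))
    transpose_output_layer.first_transpose = (0, 2, 3, 1)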

concat_layer.axis = 2

# Reshape output to [N*T,C,H,W]
reshape_layer = network.add_shuffle(concat_layer.get_output(0))
Contributor:
Same naming collision here as above; rename it to match the name used in the old IR converter.
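
For example (reshape_output_layer is a hypothetical stand-in; substitute whatever name the old IR converter used):

# Reshape output back to [N*T, C, H, W] under a distinct name.
reshape_output_layer = network.add_shuffle(concat_layer.get_output(0))
reshape_output_layer.reshape_dims = trt.Dims(inputs[0].shape)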

self.min_shape = {"x": [4, 9, 7, 7]}
self.max_shape = {"x": [8, 9, 7, 7]}

def test_trt_result(self):
Contributor:
Please add an FP16 test as well.
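
A minimal sketch of the requested case, assuming the PIR-TRT test base class accepts a precision_mode argument on check_trt_result (as in other converter tests; the exact signature may differ):

def test_trt_result_fp16(self):
    # Re-run the same shape configuration under FP16 precision.
    self.check_trt_result(precision_mode="fp16")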

Labels
contributor (External developers) · HappyOpenSource Pro (advanced Happy Open Source program with more challenging tasks)
4 participants