
I have some problems with the object query. #596

Open
Zhong-Zi-Zeng opened this issue Aug 3, 2023 · 8 comments

Comments

@Zhong-Zi-Zeng

The last page of the original paper shows simple code for DETR, where the decoder's input is just a random tensor of size (100, 256). However, in your GitHub implementation I can't understand what you did with the object queries. Why did you use an embedding layer's weight instead of a random value?
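For reference, the pattern being asked about looks roughly like this; a minimal sketch, not the repo's exact code, with `num_queries` and `hidden_dim` standing in for DETR's 100 and 256:

```python
import torch.nn as nn

num_queries, hidden_dim = 100, 256

# The learned object queries live in the weight matrix of an nn.Embedding.
# No index lookup is ever performed; the (num_queries, hidden_dim) weight
# tensor itself is handed to the transformer decoder.
query_embed = nn.Embedding(num_queries, hidden_dim)
decoder_queries = query_embed.weight  # trainable nn.Parameter, shape (100, 256)
```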

@AjibolaPy

That's also my confusion. I guess that instead of using random values, the embedding weights were used and reshaped. Maybe it's the same. But is it trainable? I'd appreciate a reply if you've found the answer.

@Zhong-Zi-Zeng
Author

I have implemented DETR and found that using embedding weights is more convenient than random values when building the model.

@AjibolaPy

> I have implemented DETR and found that using embedding weights is more convenient than random values when building the model.

Thanks for helping.
Assuming:

```python
import torch.nn as nn

embedding = nn.Embedding(45, 2)
# (45, 2) -> (45, 1, 2) -> (45, 3, 2): tile the queries across a batch of 3
weights = embedding.weight.unsqueeze(1).repeat(1, 3, 1)
```

did you keep the weights at requires_grad=True? That is just my confusion.

@Zhong-Zi-Zeng
Author

Yes. Since every batch copy is a repeat of the same embedding weight, their gradients all flow back into that same weight.
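A quick way to check this (nn.Embedding stores its weight as an nn.Parameter, which has requires_grad=True by default):

```python
import torch.nn as nn

embedding = nn.Embedding(45, 2)
print(embedding.weight.requires_grad)  # True: nothing extra needs to be set
```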

@AjibolaPy

> Yes. Since every batch copy is a repeat of the same embedding weight, their gradients all flow back into that same weight.

Thanks for your answer.
So I should keep requires_grad set to True, which means the embedding weights, specifically the query embedding, will also be updated during backpropagation and their values will change.
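A minimal sanity check of that, with a dummy scalar loss standing in for DETR's real losses, showing the query embedding values do change after one optimizer step:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(45, 2)
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)

before = embedding.weight.detach().clone()
loss = embedding.weight.sum()  # dummy loss; any scalar that depends on the queries
loss.backward()
optimizer.step()

print(torch.equal(before, embedding.weight.detach()))  # False: weights were updated
```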

@AjibolaPy

> Yes. Since every batch copy is a repeat of the same embedding weight, their gradients all flow back into that same weight.

I think the shape is (num_queries, batch_size, dim), not (batch_size, num_queries, dim).

@Zhong-Zi-Zeng
Author

You are right!
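A short sketch confirming that layout, reusing the toy sizes from above (num_queries=45, batch_size=3, dim=2):

```python
import torch.nn as nn

num_queries, batch_size, dim = 45, 3, 2
embedding = nn.Embedding(num_queries, dim)

queries = embedding.weight.unsqueeze(1).repeat(1, batch_size, 1)
print(queries.shape)  # torch.Size([45, 3, 2]) -> (num_queries, batch_size, dim)
```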

@MLDeS

MLDeS commented Nov 7, 2023

> I have implemented DETR and found that using embedding weights is more convenient than random values when building the model.

@Zhong-Zi-Zeng What do you mean by "more convenient"? Is it that the results are better? As shown in the DETR Colab notebook, if you use nn.Parameter(torch.rand(100, hidden_dim)) as the queries, with 100 being num_queries, and update all of the model's parameters, it should still work: an nn.Parameter has requires_grad=True by default, so the queries will be updated too and will not stay random values.
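A sketch of the equivalence described above; both variants register a trainable (num_queries, hidden_dim) tensor, and only the initialization differs (normal vs. uniform):

```python
import torch
import torch.nn as nn

num_queries, hidden_dim = 100, 256

queries_embed = nn.Embedding(num_queries, hidden_dim).weight       # embedding-weight style
queries_param = nn.Parameter(torch.rand(num_queries, hidden_dim))  # Colab-notebook style

# Both are nn.Parameters and will be updated by the optimizer.
print(queries_embed.requires_grad, queries_param.requires_grad)  # True True
```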
