Random DH parameters #5

Open
JonathanKuelz opened this issue Mar 12, 2024 · 2 comments

JonathanKuelz commented Mar 12, 2024

Hello again,

I am continuing to work on #3, and while creating some tests I had some questions regarding the random DH parameter generation process described in your paper. I'll paste two screenshots below that I'll refer to; the first is taken from your latest preprint.

  1. Why do you enforce $\alpha_0 \neq 0$? As far as I can see, this forces the first and second joint axes to be non-parallel. Is there a reason why they shouldn't be?
  2. If one were to allow those joints to be parallel, I assume you would set $a_0 \neq 0$ and $d_0 = 0$ in that case.
  3. The constraints on $a_{n-1}$ and $\alpha_{n-1}$ seem to assume a TCP frame that lies on the axis of the last joint. Still, $d_{n-1}$ can be set arbitrarily -- is there a specific reason for this? (See the second screenshot, taken from "Siciliano et al., Modelling, Planning and Control, 2009", for a reasonable example where $a_{n-1} \neq 0$.)
  4. In your paper, you state that $a_k$ is sampled randomly as long as $\alpha_k \neq 0$. However, in this piece of code you set $a_k$ to zero. Is this intentional? If I see this correctly, it implies that consecutive joint axes always intersect (see the small DH sketch after this list for how I read these parameters).
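
For context, this is how I read the role of these parameters in the classic DH convention, $T_k = \mathrm{Rot}_z(\theta_k)\,\mathrm{Trans}_z(d_k)\,\mathrm{Trans}_x(a_k)\,\mathrm{Rot}_x(\alpha_k)$: $\alpha_k = 0$ makes consecutive joint axes parallel, and $a_k = 0$ makes them intersect. This is just a minimal sketch for reference, not code from your repo, and the function names are my own:

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Homogeneous transform between consecutive frames in the classic DH
    convention: Rot_z(theta) @ Trans_z(d) @ Trans_x(a) @ Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, thetas) -> np.ndarray:
    """Compose the chain; dh_rows is a sequence of (d, a, alpha) triples."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_rows, thetas):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```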

I understand that you don't want to cover all possible kinematics; I was just wondering whether you had a specific reason for these constraints, which are, from my perspective, the only ones that somewhat limit the generality of the "random" robots.

Regarding no. 4: I ran some tests and computed the average loss for a "pose goal" as introduced in #4. With my pre-trained network I get a translation error of ~3 cm when I run my experiments, but as soon as I drop this assumption and follow the pseudocode in the paper, the translation error increases to ~25 cm on average.

Thanks for your help!

[Screenshot 1: random DH parameter generation procedure from the preprint]

[Screenshot 2: DH parameter example from Siciliano et al., Modelling, Planning and Control (2009)]

filipmrc self-assigned this Mar 14, 2024

filipmrc commented Mar 14, 2024

Hi Jonathan! Thanks for taking an interest in this work and helping us with the codebase.

First of all, I'm afraid the randomization code isn't exactly up to date in the public repo, as it probably got buried in the final sprint to get the paper submitted. I will take a more detailed look at it ASAP as part of reviewing your pull requests.

(1) and (2): Good catch -- the first and second joints could indeed be parallel and have $a \neq 0$, $d = 0$. This should introduce no ambiguities in the distance-based representation. I think I added this restriction to keep the randomized robots a little simpler given our computational resources.
(3) edit: @JonathanKuelz In GraphIK we make the assumption that the final joint does not have joint limits, so only the final joint axis needs to be at the proper position and orientation, which is a rigid transformation away from the tool frame. This is also the case in the example you've attached. However, since the points corresponding to the end-effector axis are rigidly attached to the base points, the final transform can be arbitrary (the axis can't "flip").
(4) This is an error in the pseudocode provided in the preprint! I will double-check this, but if the axes are not parallel (i.e., $\alpha \neq 0$), then it should be $a = 0$ and $d \in [0, d_{max}]$. If $a \neq 0$, there is a problem with possible reflections of the next rotation axis. The original GraphIK paper assumes that the rotation axes are coplanar because it does local optimization purely over a distance loss. Theoretically, this may not matter with learning approaches, but we did restrict our work to the original assumptions.
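
Roughly in code, the corrected rule would look something like the sketch below. This is only an illustration of what I described above, not the actual randomization code in the repo; `d_max` and the sampling ranges are placeholders:

```python
import numpy as np

def sample_random_dh(n_joints: int, d_max: float = 1.0, seed=None):
    """Sample DH rows (alpha_k, a_k, d_k) for k = 0, ..., n-1: consecutive
    axes non-parallel (alpha != 0) and intersecting (a = 0), with the last
    frame on the final joint axis (alpha_{n-1} = a_{n-1} = 0)."""
    rng = np.random.default_rng(seed)
    rows = []
    for k in range(n_joints):
        if k == n_joints - 1:
            alpha, a = 0.0, 0.0  # TCP frame lies on the last joint axis
        else:
            alpha = 0.0
            while np.isclose(alpha, 0.0):  # resample until alpha != 0
                alpha = rng.uniform(-np.pi, np.pi)
            a = 0.0  # intersecting consecutive axes (corrected rule)
        d = rng.uniform(0.0, d_max)  # d sampled from [0, d_max]
        rows.append((alpha, a, d))
    return rows
```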

JonathanKuelz (Author) commented

Hello Filip, thanks a lot for your quick reply and the information! Yesterday I started another training run without the constraints on the random robots from (1), (2), and (4) -- I will report back when I have results. (3) is clearer now, thanks for that!
