Implement GNN perception processing module #1110

Open · lichtefeld opened this issue Mar 4, 2022 · 2 comments

@lichtefeld (Collaborator)
Integration Reasons

The goal of integrating the GNN perception processing into the ADAM pipeline is to enable faster experimentation on the entire learning pipeline, including the ability to adapt the parameters of the GNN models dynamically based on learner feedback. This will also enable us to consider training multiple stroke GNNs at different specificity levels (determined by ADAM) to make more feature points available about an object.

Design goals:

  • Generic module which can be specialized for processing different types of graph features (e.g. stroke vs. trajectory)
  • Integration with existing learners with an easy-to-use API (a rough sketch follows this list)
  • Support for categorization & similarity calculations
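To make these goals concrete, here is a minimal sketch of what the generic module's Python interface could look like. All names here (GNNPerceptionModule, categorize, similarity) are illustrative assumptions, not an agreed design.

```python
# A minimal sketch of a generic perception module, parameterized by the
# feature-graph type so it can be specialized for strokes vs. trajectories.
# Everything here is hypothetical, not an agreed API.
from abc import ABC, abstractmethod
from typing import Generic, List, Tuple, TypeVar

FeatureGraph = TypeVar("FeatureGraph")  # e.g. a stroke graph or a trajectory graph


class GNNPerceptionModule(ABC, Generic[FeatureGraph]):
    """Generic perception module specialized by feature-graph type."""

    @abstractmethod
    def categorize(self, graph: FeatureGraph) -> List[Tuple[str, float]]:
        """Return (category, confidence) pairs for one perceived feature graph."""

    @abstractmethod
    def similarity(self, first: FeatureGraph, second: FeatureGraph) -> float:
        """Return a similarity score between two feature graphs."""
```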

Result Comparison

  • When doing categorization only for objects, I should be able to hit the same score as Sheng did.
@lichtefeld (Collaborator, Author)

Further discussion on this topic occurred at a team meeting. There we discussed having shared ownership over the GNN & stroke processing modules, so that API evolution can be driven by either team based on experimental debugging & results. That said, we are going to place the GNN module into its own Python environment to avoid any mismatch in requirement versions.
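As a concrete illustration of the separate-environment point, ADAM could shell out to the GNN module using that environment's own interpreter. This is a minimal sketch; the interpreter path, script name, and CLI flags are hypothetical placeholders.

```python
# A minimal sketch, assuming the GNN module lives in its own virtualenv/conda
# environment and ADAM drives it as a subprocess. The interpreter path and
# the decode script's CLI are hypothetical.
import subprocess

GNN_ENV_PYTHON = "/path/to/gnn-env/bin/python"  # interpreter of the isolated env


def run_gnn_decode(scene_dir: str, output_dir: str) -> None:
    """Run the GNN decode over a scene directory in its own environment."""
    subprocess.run(
        [
            GNN_ENV_PYTHON,
            "gnn_decode.py",  # hypothetical entry point
            "--scene-dir", scene_dir,
            "--output-dir", output_dir,
        ],
        check=True,  # fail loudly if the GNN environment errors out
    )
```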

Desired interactions from ADAM's side

  • Ability to have multiple trained GNN modules
    • Perhaps different granularities of shape detection (think having a 'cube, sphere, pyramid' decode for every object and then a second GNN detecting each object individually)
    • Trajectory (which may be 3D or two different 2D views)
  • Implementation thoughts (see the sketch after this list)
    • The initial implementation can probably be offline for all recognition modules (e.g. the current decode for objects)
    • Contrastive GNN inspection may need to run online, processing the same inputs as ADAM for a given scene, and have a closer integration
    • Should each unique GNN be its own program execution? (probably yes?)
  • Supports:
    • Categorization
    • Similarity calculations
    • Contrastive feature detection / improved training potentials (the exact goal here is TBD)
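One way to picture the "multiple trained GNN modules" idea above is a small registry mapping each module to its checkpoint, granularity, and whether it runs offline or online. Everything below (field names, paths, module keys) is a hypothetical sketch.

```python
# A hypothetical registry of trained GNN modules at different granularities.
# All names and paths are assumptions for illustration only.
from dataclasses import dataclass
from pathlib import Path
from typing import Mapping


@dataclass(frozen=True)
class GNNModuleSpec:
    checkpoint: Path   # trained model weights for this module
    granularity: str   # e.g. "coarse-shape" vs. "per-object"
    online: bool       # True if it must process scenes alongside ADAM


GNN_MODULES: Mapping[str, GNNModuleSpec] = {
    "coarse_shape": GNNModuleSpec(Path("models/coarse.pt"), "coarse-shape", online=False),
    "per_object": GNNModuleSpec(Path("models/per_object.pt"), "per-object", online=False),
    "contrastive": GNNModuleSpec(Path("models/contrastive.pt"), "per-object", online=True),
}
```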

@spigo900 -- I'm going to leave working out an interface with @blakeharrison-ai and @shengcheng to you.

@spigo900 (Collaborator) commented Jun 16, 2022

@lichtefeld and I discussed this again yesterday. Generally, we are planning to keep the object GNN module as a separate preprocessing step with its own environment.

As part of this, I think we want to change the learner script (a sketch follows the list):

  • To parameterize where it gets its data (currently this is hardcoded)
  • So we can input/output data in the ADAM curriculum format
    • ETA: As part of this, load the already-extracted strokes from the YAML. (We could try to figure out how to run MATLAB on the cluster, but that is more of a rabbit hole than I want to go down at the moment.)
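Here is a rough sketch of those learner-script changes, assuming argparse for the data-location parameter and PyYAML for reading the pre-extracted strokes. The flag name, feature-file glob, and "strokes" key are assumptions about the curriculum layout, not its actual format.

```python
# A minimal sketch of the learner-script changes above: the data location
# becomes a CLI parameter, and strokes are loaded from curriculum YAML files
# rather than re-extracted via MATLAB. File names and keys are assumptions.
import argparse
from pathlib import Path

import yaml


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--curriculum-dir", type=Path, required=True,
        help="ADAM curriculum directory to read training data from",
    )
    args = parser.parse_args()

    for feature_file in sorted(args.curriculum_dir.glob("**/feature.yaml")):
        with feature_file.open() as f:
            features = yaml.safe_load(f)
        strokes = features.get("strokes", [])  # already extracted; no MATLAB needed
        # ... feed `strokes` into GNN training / decoding here ...


if __name__ == "__main__":
    main()
```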

I think we'll want to modify the decode script along similar lines. We may want to add things to the decode, but that is probably a separate issue.
