Solve mathematical programs in parallel #21957

Open · wants to merge 12 commits into base: master

Conversation

@AlexandreAmice AlexandreAmice commented Sep 26, 2024


Contributor Author

@AlexandreAmice AlexandreAmice left a comment


Would close #19119. I have not yet written the tests or the bindings, but am looking for feedback on the interface design before finalizing them.

cc @hongkai-dai, @jwnimmer-tri for design
cc @iamsavva, @cohnt since I know you both want this feature and I want to be sure the interface would feel natural to you.

Reviewable status: 2 unresolved discussions, needs platform reviewer assigned, needs at least two assigned reviewers, commits need curation (https://drake.mit.edu/reviewable.html#curated-commits), missing label for release notes (waiting on @AlexandreAmice)


solvers/BUILD.bazel line 733 at r1 (raw file):

    implementation_deps = [
        ":choose_best_solver",
        ":get_program_type",

nit: This is not useful.


solvers/solve.cc line 68 at r1 (raw file):

    DRAKE_THROW_UNLESS(
        solvers.at(solver_id)->AreProgramAttributesSatisfied(*(progs.at(i))));
  }

As far as I can tell, all of our solvers are reentrant thread-safe, so it suffices to have only one copy of each solver. I may be wrong about this, in which case I would want each thread to have its own list of solvers.

For safety's sake, it may be a good idea to do this anyway. Thoughts?

Code quote:

  std::unordered_map<SolverId, std::unique_ptr<SolverInterface>> solvers;
  for (int i = 0; i < ssize(progs); ++i) {
    // Identify the best solver for the program and store it in the map of
    // available solvers.
    const SolverId solver_id =
        solver_ids->at(i).value_or(ChooseBestSolver(*(progs.at(i))));
    if (!solvers.contains(solver_id)) {
      solvers.insert({solver_id, MakeSolver(solver_id)});
    }
    DRAKE_THROW_UNLESS(
        solvers.at(solver_id)->AreProgramAttributesSatisfied(*(progs.at(i))));
  }

@jwnimmer-tri jwnimmer-tri self-assigned this Sep 26, 2024

cohnt commented Sep 27, 2024

Very happy with how the interface looks -- this should be able to do everything I'm looking for.

I'm not sure what dynamic scheduling is, but I'm guessing I shouldn't really worry about it?

@jwnimmer-tri (Collaborator)

(We should be sure the docs explain this.)

Static scheduling means if you have N cores, each core solves 1/N'th of the programs based on a fixed schedule (e.g., every N'th program on the list). This is most efficient (less overhead) in case the programs all take a comparable amount of time to solve.

Dynamic scheduling is a pool where each core takes the next unsolved program on the list. This works better for programs that take widely variable amounts of time to solve, but has higher overhead per program (it needs some kind of semaphore or memory barrier to figure out what's next on the list and claim it).

See "Scheduling_clauses" here: https://en.wikipedia.org/wiki/OpenMP


cohnt commented Sep 27, 2024

Ah, in that case, it is relevant for me! I would appreciate this being included in the documentation in some form.


@cohnt cohnt left a comment




solvers/solve.cc line 112 at r1 (raw file):

    // Convert the initial guess into the requisite optional.
    std::optional<Eigen::VectorXd> initial_guess{std::nullopt};
    if (initial_guesses->at(i) != nullptr) {

Need to check if initial_guesses is a nullptr before trying to dereference it. How about

if (initial_guesses != nullptr && initial_guesses->at(i) != nullptr)

Code quote:

if (initial_guesses->at(i) != nullptr)

solvers/solve.cc line 146 at r1 (raw file):

      // Convert the initial guess into the requisite optional.
      std::optional<Eigen::VectorXd> initial_guess{std::nullopt};
      if (initial_guesses->at(i) != nullptr) {

Need to check if initial_guesses is a nullptr before trying to dereference it. How about

if (initial_guesses != nullptr && initial_guesses->at(i) != nullptr)

Code quote:

if (initial_guesses->at(i) != nullptr)


@cohnt cohnt left a comment




solvers/solve.cc line 119 at r1 (raw file):

    // Solve the program.
    MathematicalProgramResult* result_local{nullptr};

This should be a MathematicalProgramResult, and then passed into solvers.at(solver_id)->Solve(...). That function is expecting a pointer that it can store its data into -- it shouldn't be nullptr.


@jwnimmer-tri jwnimmer-tri left a comment


Two questions / suggestions on the API:

(1) Now that I see that SolverInterface::Solve is phrased in terms of optional instead of nullable, we should probably do the same here to avoid extra copying.

(2) For the parameters where we want to allow for a either single value (broadcast) or an array value (per program), instead of overloading we could use a variant. This would allow for the user to mix and match how they want to broadcast versus not.

SolveInParallel(
    ...,
    const std::variant<
        std::optional<SolverOptions>,
        std::vector<std::optional<SolverOptions>>
            >& solver_options = {},
    const std::variant<
        std::optional<SolverId>,
        std::vector<std::optional<SolverId>>,
            >& solver_ids = {},



solvers/solve.h line 54 at r1 (raw file):

 * the solver and the problem formulation. If solver_ids == nullptr then this is
 * done for every progs[i]. Uses at most parallelism cores, with dynamic
 * scheduling by default.

BTW As per the discussion above, we need to explain dynamic vs static scheduling more clearly. That might be best as a @param instead of inline in the overview here.


solvers/solve.h line 60 at r1 (raw file):

 * manner.
 *
 * @throws if initial guess and solver options are provided and not the same

typos

Suggestion:

initial_guesses or solver_options or solver_ids

solvers/solve.h line 75 at r1 (raw file):

    bool dynamic_schedule = true);

/**

nit As with code, we should not copy-paste documentation, because it's brittle and annoying to keep large sections of text identical over time as people change things.

If we keep this as two separate functions, we should change this one to explain its contract by citing the other overload and explaining what's different.


solvers/solve.cc line 68 at r1 (raw file):

Previously, AlexandreAmice (Alexandre Amice) wrote…

As far as I can tell, all of our solvers are reentrant thread-safe and so it suffices to have only one copy of each solver. I may be wrong about this, in which case I would want to have each thread have it's own list of solvers.

For the case of safety, it may be a good idea to do this anyway. Thoughts?

The right question isn't whether they currently happen to be thread safe, it's whether we want to document them as such, and thus write regression tests for all of them to guard that documented promise. Currently, only SNOPT has such a regression test.

It is probably easiest to not try to coalesce them in this first PR. We can always circle back and do that in a follow-up if we find that it would be meaningfully faster.

Note that for the non-coalesced design, we should not pre-allocate all 1000 solvers. Each inner work loop should create (and destroy) the solver it needs moment by moment.


solvers/solve.cc line 160 at r1 (raw file):

                                   &(results.at(i)));
    } else {
      results[i] = *results_parallel[i];

It's somewhat wasteful to copy around the results again. How about in the workers we solve directly into the final results, and have a separate bookkeeping vector for those which were unsafe? (Don't use vector<bool>.)


solvers/solve.cc line 178 at r1 (raw file):

  }));
  if (initial_guesses != nullptr) {
    DRAKE_THROW_UNLESS(progs.size() == initial_guesses->size());

nit The next three checks all have the LHS and RHS switched. The suspicious quantity being examined should appear on the LHS.

@jwnimmer-tri

Two questions / suggestions on the API: ...

Actually, let me retract that suggestion for now. Doing variant probably increases the copying, and doing optional probably helps C++ callers but hurts Python callers. I need to pencil out the details more carefully.


@jwnimmer-tri jwnimmer-tri left a comment


(1) Okay, I think actually we should stick with nullptr instead of optional and (eventually) change SolverBase to accept nullptr to save the extra copying from pydrake.

(2) Using variant is not efficient for C++ callers, so we should avoid that.

In short, I think the proposed two function signatures are good as-is.



solvers/solve.cc line 198 at r1 (raw file):

    const std::optional<SolverId>& solver_id, const Parallelism parallelism,
    bool dynamic_schedule) {
  DRAKE_THROW_UNLESS(std::all_of(progs.begin(), progs.end(), [](auto prog) {

There is no reason for using the Impl stuff and repeating these argument checks in both overloads. This overload can broadcast the solver_options into a std::vector and then call the other overload, like we already do for the IDs.


@AlexandreAmice AlexandreAmice left a comment


Thanks for taking a look at that! I hope to get to this tomorrow.



solvers/solve.cc line 68 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

The right question isn't whether they currently happen to be thread safe, it's whether we want to document them as such, and thus write regression tests for all of them to guard that documented promise. Currently, only SNOPT has such a regression test.

It is probably easiest to not try to coalesce them in this first PR. We can always circle back and do that in a follow-up if we find that it would be meaningfully faster.

Note that for the non-coalesced design, we should not pre-allocate all 1000 solvers. Each inner work loop should create (and destroy) the solver it needs moment by moment.

To avoid having to create and destroy each solver I was thinking about having the solvers stored as:

std::vector<std::unordered_map<SolverId, std::unique_ptr<SolverInterface>>> solvers(num_threads);

The ith entry of this vector would be all the solvers that thread i has needed up until this moment. If it encounters a program which needs a solver that the ith thread has not yet used, then it constructs that solver, otherwise it just accesses the already made solver.

My reasoning is if I want to solve thousands of small programs, I don't want to be allocating the solver memory thousands of times.


@jwnimmer-tri jwnimmer-tri left a comment




solvers/solve.cc line 68 at r1 (raw file):

Previously, AlexandreAmice (Alexandre Amice) wrote…

To avoid having to create and destroy each solver I was thinking about having the solvers stored as:

std::vector<std::unordered_map<SolverId, std::unique_ptr<SolverInterface>>> solvers(num_threads);

The ith entry of this vector would be all the solvers that thread i has needed up until this moment. If it encounters a program which needs a solver that the ith thread has not yet used, then it constructs that solver, otherwise it just accesses the already made solver.

My reasoning is if I want to solve thousands of small programs, I don't want to be allocating the solver memory thousands of times.

That approach seems safe, but I'm not sure it's necessary. I would suggest running a performance experiment and see whether it matters or not.

My mental model is that creating and destroying solver instances is an order (or two) of magnitude cheaper than the other things that happen during solving (e.g., populating the results), so caching would not actually make much of a difference compared to how much extra complexity it would introduce in the implementation.

There is also the problem that certain solver instances (e.g., mosek) hold a license as a member field, so keeping them around longer than strictly necessary might actually make things worse (in case license seats are a limited resource).


@AlexandreAmice AlexandreAmice left a comment




solvers/solve.cc line 68 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

That approach seems safe, but I'm not sure it's necessary. I would suggest running a performance experiment and see whether it matters or not.

My mental model is that creating and destroying solver instances is an order (or two) of magnitude cheaper than the other things that happen during solving (e.g., populating the results), so caching would not actually make much of a difference compared to how much extra complexity it would introduce in the implementation.

There is also the problem that certain solver instances (e.g., mosek) hold a license as a member field, so keeping them around longer than strictly necessary might actually make things worse (in case license seats are a limited resource).

The license seats are actually an excellent reason why I shouldn't keep the solvers around and in fact reveals a subtlety I had not considered.

Say that an organization has two licenses to a solver (e.g., MOSEK) and they attempt to solve in parallel using 16 threads. If I create 16 solvers, this will crash their code, as they don't have enough licenses to solve their problems. However, if I create only one solver and re-use it across all the threads, then they can safely use 16 threads.

Should I:
1.) Only create as many solvers of a particular kind as there are licenses? How would I even do this without a try-catch?
2.) Allow the error and throw a better error message?
3.) Stick with the current design and test our solvers for the promise that they are re-entrant thread safe?


@jwnimmer-tri jwnimmer-tri left a comment




solvers/solve.cc line 68 at r1 (raw file):

However, if I created only one solver and re-used it across all the threads, then they can safely use 16 threads.

I suppose they could use 16 threads in some sense, but I think only one at a time? As I understand it, each (e.g.) MOSEK seat buys you one concurrent call to (e.g.) MSK_optimizetrm. They would probably be better served by using a serial solve and giving MOSEK itself all 16 cores.

My initial thought is that the user should pass in a parallelism = N which does not exceed the number of seats they have purchased. Then, so long as each thread has at most 1 solver at a time, we know that the seats will never be exceeded.


@AlexandreAmice AlexandreAmice left a comment


I believe this is a complete PR now. Are there any volunteers for a thorough feature review?



solvers/solve.cc line 68 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

However, if I created only one solver and re-used it across all the threads, then they can safely use 16 threads.

I suppose they could use 16 threads in some sense, but I think only one at a time? As I understand it, each (e.g.) MOSEK seat buys you one concurrent call to (e.g.) MSK_optimizetrm. They would probably be better served by using a serial solve and giving MOSEK itself all 16 cores.

My initial thought is that the user should pass in a parallelism = N which does not exceed the number of seats they have purchased. Then, so long as each thread has at most 1 solver at a time, we know that the seats will never be exceeded.

I added a note to this effect in the documentation. I kept the cached map scheme in the code however.


solvers/solve.cc line 112 at r1 (raw file):

Previously, cohnt (Thomas Cohn) wrote…

Need to check if initial_guesses is a nullptr before trying to dereference it. How about

if (initial_guesses != nullptr && initial_guesses->at(i) != nullptr)

Done.


solvers/solve.cc line 119 at r1 (raw file):

Previously, cohnt (Thomas Cohn) wrote…

This should be a MathematicalProgramResult, and then passed into solvers.at(solver_id)->Solve(...). That function is expecting a pointer that it can store its data into -- it shouldn't be nullptr.

Done.


solvers/solve.cc line 146 at r1 (raw file):

Previously, cohnt (Thomas Cohn) wrote…

Need to check if initial_guesses is a nullptr before trying to dereference it. How about

if (initial_guesses != nullptr && initial_guesses->at(i) != nullptr)

Done.


@jwnimmer-tri jwnimmer-tri left a comment


After we come to an agreement on the solver instance caching (see below), my plan is to push a bunch of cleanups to the implementation for simplicity, conciseness, and reduced copying.



solvers/solve.cc line 68 at r1 (raw file):

Previously, AlexandreAmice (Alexandre Amice) wrote…

I added a note to this effect in the documentation. I kept the cached map scheme in the code however.

The solver caching is a plausible design, but as I said a handful of messages upthread, keeping the caching means we need to lay a lot more groundwork first -- writing a regression test for each one of our solvers, checking that it is threadsafe (and having that pass under all of our CI flavors). Currently only SNOPT has such a test. I still think it would be easier for this first spin not to cache. The only possibly slow thing about create-destroy is the license check for two of the proprietary solvers, which users can (for now) work around by calling AcquireLicense themselves before doing anything in their app, in case it's a real problem. They would need to do that anyway, in case they are calling a higher-layer algorithm which itself calls SolveInParallel more than once.


solvers/solve.cc line 113 at r3 (raw file):

    }
    const SolverInterface* solver = solvers.at(thread_num).at(solver_id).get();
    DRAKE_THROW_UNLESS(solver->AreProgramAttributesSatisfied(*(progs.at(i))));

nit We don't need this check. The SolverBase already checks for this and throws, with a much nicer message.


@jwnimmer-tri jwnimmer-tri left a comment


Okay, with a closer look, this is ready for me to push/propose some changes. I'll try to get that by tomorrow.



solvers/solve.cc line 68 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

The solver caching is a plausible design, but as I said a handful of messages upthread, keeping the caching means we need to lay a lot more groundwork first -- writing a regression test for each one of our solvers, checking that it is threadsafe (and having that pass under all of our CI flavors). Currently only SNOPT has such a test. I still think it would be easier for this first spin not to cache. The only possibly slow thing about create-destroy is the license check for two of the proprietary solvers, which users can (for now) work around by calling AcquireLicense themselves before doing anything in their app, in case it's a real problem. They would need to do that anyway, in case they are calling a higher-layer algorithm which itself calls SolveInParallel more than once.

Oops, I take it back. I misread the code -- it has a separate cache per thread. That's fine as-is.

AlexandreAmice and others added 6 commits October 4, 2024 11:41
* remove the pointless impl indirection
* remove non-GSG const in header
* don't pay the cost of broadcasting null options
* fix bug where `parallelism` was not obeyed in the serial section

@jwnimmer-tri jwnimmer-tri left a comment


I pushed some changes. Please have a look.

Dismissed @cohnt from 3 discussions.


bindings/pydrake/solvers/test/mathematicalprogram_test.py line 1530 at r4 (raw file):

                         mp.SolutionResult.kSolutionResultNotSet)

    def test_solve_in_parallel(self):

This is missing test coverage where an argument is non-None but its list contains a mixture of i'th-specific values and None.

I think it would be fine just to have the hard-coded initial_guesses and solver_ids and solver_options lists have 1 or 2 null elements each. We don't really need to vary our test cases to have different amounts of nulls, we just need coverage of at least one mixture.


@AlexandreAmice AlexandreAmice left a comment


Thanks for the changes! I think this is ready for review. Any takers?



bindings/pydrake/solvers/solvers_py_mathematicalprogram.cc line 1696 at r6 (raw file):

              const std::optional<std::vector<std::optional<SolverId>>>&
                  solver_ids,
              const Parallelism& parallelism, bool dynamic_schedule) {

Note to reviewer:

This looks terrible but is necessary. If you make this

py::overload_cast<const std::vector<const MathematicalProgram*>&,
                  const std::vector<const Eigen::VectorXd*>*,
                  const std::vector<const SolverOptions*>*,
                  const std::vector<std::optional<SolverId>>*,
                  Parallelism, bool>(&SolveInParallel)

you end up with std::bad_alloc errors.

If you make this

 [](const std::vector<const MathematicalProgram*>& progs,
              const std::optional<std::vector<Eigen::VectorXd>>&
                  initial_guesses,
              const std::optional<std::vector<SolverOptions>>& solver_options,
              const std::optional<std::vector<std::optional<SolverId>>>&
                  solver_ids,
              const Parallelism& parallelism, bool dynamic_schedule) 

you can't mix values of None into the list as desired.

So we end up with an optional<vector<optional<T>>> :(

If anyone can think of a better way, please let me know.

Code quote:

          [](const std::vector<const MathematicalProgram*>& progs,
              const std::optional<std::vector<std::optional<Eigen::VectorXd>>>&
                  initial_guesses,
              const std::optional<std::vector<std::optional<SolverOptions>>>&
                  solver_options,
              const std::optional<std::vector<std::optional<SolverId>>>&
                  solver_ids,
              const Parallelism& parallelism, bool dynamic_schedule) {

bindings/pydrake/solvers/test/mathematicalprogram_test.py line 1530 at r4 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

This is missing test coverage where an argument is non-None but its list contains a mixture of i'th-specific values and None.

I think it would be fine just to have the hard-coded initial_guesses and solver_ids and solver_options lists have 1 or 2 null elements each. We don't really need to vary our test cases to have different amounts of nulls, we just need coverage of at least one mixture.

Good catch. Mixing None values into the lists indeed made the old version of the bindings fail. The new bindings signature is quite complex, but sadly I think it has to be.


@AlexandreAmice AlexandreAmice left a comment


+@hongkai-dai for feature review. Thanks!


@AlexandreAmice AlexandreAmice added the release notes: feature This pull request contains a new feature label Oct 10, 2024

@AlexandreAmice AlexandreAmice left a comment


-@hongkai-dai for feature review.

+@jwnimmer-tri for feature review please. Thanks!

+(release notes: feature)

