
[Feature Request] optimize_acqf_homotopy doesn't handle constraints #2579

Open
CompRhys opened this issue Oct 16, 2024 · 5 comments
Labels: enhancement (New feature or request)

Comments

@CompRhys
Contributor

🚀 Feature Request

Would you accept a PR to add constraint support to optimize_acqf_homotopy, e.g. inequality_constraints, equality_constraints, and nonlinear_inequality_constraints? Looking at the code, this seems more like an omission than a technical challenge.

Motivation

I want to compare gradient-free MOO vs. SEBO for a mixture problem requiring sparsity. I believe other people are working on similar problems and would benefit from this functionality (facebook/Ax#2790).

Pitch

Describe the solution you'd like
The change appears to be as simple as making all of the arguments of optimize_acqf also arguments of optimize_acqf_homotopy. This can be done either explicitly or via kwargs (whichever the maintainers prefer); see the sketch below for what a call could then look like.
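For concreteness, a hypothetical call once the constraint arguments are exposed might look like the following sketch (acqf, homotopy, and bounds are assumed to be set up beforehand, and the constraint keyword is exactly the part this request would add):

    import torch

    from botorch.optim.optimize_homotopy import optimize_acqf_homotopy

    # Hypothetical usage sketch: `acqf`, `homotopy`, and `bounds` are assumed to be
    # defined elsewhere, and `inequality_constraints` does not exist on this
    # function yet.
    candidate, acq_value = optimize_acqf_homotopy(
        acq_function=acqf,
        bounds=bounds,
        q=1,
        homotopy=homotopy,
        num_restarts=10,
        raw_samples=512,
        # e.g. require the first two mixture components to make up at least half:
        # x_0 + x_1 >= 0.5, in optimize_acqf's (indices, coefficients, rhs) format.
        inequality_constraints=[
            (torch.tensor([0, 1]), torch.ones(2, dtype=torch.double), 0.5)
        ],
    )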

Are you willing to open a pull request? (See CONTRIBUTING)

I am happy to make the PR.

CompRhys added the enhancement (New feature or request) label on Oct 16, 2024
@saitcakmak
Contributor

Hi @CompRhys. Thanks for flagging this. A PR adding constraint support to optimize_acqf_homotopy would be very welcome!

@CompRhys
Contributor Author

My preference for implementing this would be to add optimize_acqf_loop_kwargs: dict[str, Any] | None = None and optimize_acqf_final_kwargs: dict[str, Any] | None = None as arguments to optimize_acqf_homotopy, since this is the cleanest option and prevents divergence in the future; however, it would be a breaking change. I think it's worth splitting out two kwarg dicts because whoever implemented this originally decided to split options and final_options, which suggests there are use cases where the final optimization should use different settings. Is this acceptable, or do we need to devise a way to make it non-breaking? Roughly, the signature I have in mind is sketched below.
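A rough sketch of the proposed signature (the argument names are just a proposal, and each dict would be forwarded verbatim to the corresponding optimize_acqf call):

    from typing import Any

    from torch import Tensor

    from botorch.acquisition import AcquisitionFunction
    from botorch.optim.homotopy import Homotopy


    def optimize_acqf_homotopy(
        acq_function: AcquisitionFunction,
        bounds: Tensor,
        q: int,
        homotopy: Homotopy,
        # Forwarded verbatim to the optimize_acqf calls made during the homotopy loop.
        optimize_acqf_loop_kwargs: dict[str, Any] | None = None,
        # Forwarded verbatim to the final optimize_acqf call after the schedule completes.
        optimize_acqf_final_kwargs: dict[str, Any] | None = None,
    ) -> tuple[Tensor, Tensor]:
        ...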

@saitcakmak
Contributor

I'd like to avoid adding catch-all dict-valued arguments to the code. The options kwarg goes against this, but it is a common argument across the optimize_acqf variants that is used to modify low-level settings that are not worth exposing as top-level kwargs.

For constraints, would we expect them to be different between the homotopy steps and the final optimization? They only constrain the search space, so I would expect them to be the same for both. If that's the case, we can just mirror the signature of optimize_acqf to add the three args and pass them down to both steps:

    inequality_constraints: list[tuple[Tensor, Tensor, float]] | None = None,
    equality_constraints: list[tuple[Tensor, Tensor, float]] | None = None,
    nonlinear_inequality_constraints: list[tuple[Callable, bool]] | None = None,
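These would follow the same conventions as in optimize_acqf: each linear constraint is an (indices, coefficients, rhs) tuple read as sum_i coefficients[i] * X[..., indices[i]] >= rhs (or == rhs for equalities). For example (just a sketch, with arbitrary index choices):

    import torch

    # x_0 + x_1 + x_2 == 1 (mixture components sum to one)
    equality_constraints = [
        (torch.tensor([0, 1, 2]), torch.ones(3, dtype=torch.double), 1.0)
    ]

    # x_3 <= 0.2, expressed as -x_3 >= -0.2
    inequality_constraints = [
        (torch.tensor([3]), torch.tensor([-1.0], dtype=torch.double), -0.2)
    ]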

@CompRhys
Contributor Author

I think one problem with not doing it via catch-alls is that you end up with what's currently in the code, where it is very unclear whether arguments to the loop and final optimizations are intentionally or accidentally missing. For example, currently in the code, if you passed fixed_features it would only be used in the loop optimize_acqf calls and not in the final optimize_acqf call. I think the catch-all here avoids the chance that this code needs repeated (and not clearly telegraphed) maintenance over time.

Accepting that it will lead to a greater maintenance burden over time, I am happy to just add the extra arguments, but if we choose not to create the two catch-all dicts, the only way to be philosophically consistent in the design here would be to remove the final_options argument. The fact that this was already differentiated in the code makes me believe that the original author could in principle have thought there were important cases where optimization should differ between the loop and final optimizations, and additional constraints could be an example of that.

In the draft PR I have implemented the catch-all dict approach (https://github.com/pytorch/botorch/pull/2580/files). It still needs some extra tests to check that the constraints work out of the box, but to me the logic is fairly clean (although I added a number of warnings about incorrect use of arguments).

@saitcakmak
Contributor

For example, currently in the code, if you passed fixed_features it would only be used in the loop optimize_acqf calls and not in the final optimize_acqf call.

That very much sounds like a bug. I think we should fix it rather than using it as inspiration for broader design.

Accepting that it will lead to a greater maintenance burden over time, I am happy to just add the extra arguments, but if we choose not to create the two catch-all dicts, the only way to be philosophically consistent in the design here would be to remove the final_options argument. The fact that this was already differentiated in the code makes me believe that the original author could in principle have thought there were important cases where optimization should differ between the loop and final optimizations, and additional constraints could be an example of that.

@dme65 can you provide context on why we have different options at different stages?
