Tutorial addition: Feasibility-Driven Trust Region Bayesian Optimization #3048
base: main
Conversation
Commit with the tutorial on Feasibility-Driven trust Region Bayesian Optimization.
Title update
SCBO url update
Title update
Hi @paoloascia! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

Thanks @paoloascia for the contribution! Overall I think this looks good and about ready to be merged. I'll get back later today with a few suggestions for improvements.

Hi @hvarfner! Thank you for the comment. I'm looking forward to your feedback :).
Looks good implementation-wise, but there are some changes that would be good to see.
- Please make the notebook more informative: The user should get a sense of what the method does from the markdown cells and plots. As it stands, they have to go through the code. Since the community is pretty well-informed on what TuRBO does by now (and probably even SCBO to some extent), try to focus on what distinguishes FuRBO from those two. This may entail some extra work, such as adding an informative plot, some math and maybe even a comparison to SCBO.
- Please use a linter to clean up the code. This includes unused imports, having comments in the right places and using docstrings instead of comments where appropriate.
- Get rid of arguments that are not actually used. max_cholesky_size and "method": "L-BFGS-B" are covered by the current defaults, so they shouldn't be needed (see the sketch after this list).
- Warnings & try/catches: It's odd to me that GP fitting fails - I really don't think it should with this few observations. Please have a look at this. I'm happy to help here.
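For reference, a minimal sketch (not the tutorial's code, toy data only) of the point about defaults: fit_gpytorch_mll uses Cholesky at this data size without an explicit max_cholesky_size setting, and optimize_acqf already defaults to L-BFGS-B for box-constrained acquisition optimization.

import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)

# No gpytorch.settings.max_cholesky_size context needed for this few observations.
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# No options={"method": "L-BFGS-B"} needed: that is already the default for
# purely bound-constrained acquisition optimization.
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(train_X)
candidate, acq_value = optimize_acqf(
    LogExpectedImprovement(model, best_f=train_Y.max()),
    bounds=bounds,
    q=1,
    num_restarts=4,
    raw_samples=64,
)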
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.mlls import ExactMarginalLogLikelihood

from scipy.stats import invgauss
Unused imports
from gpytorch.mlls import ExactMarginalLogLikelihood

from scipy.stats import invgauss
from scipy.stats import ecdf
Unused imports
class norm_():
It would be good to see the utilities we have in botorch used here. The class Normalize(d=dim, bounds=bounds) should do the trick to normalize the inputs to [0, 1] for the models. Similarly, creating a different class ack that wraps Ackley can be avoided by using the same transforms.
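As an illustration of that suggestion, a hedged sketch using BoTorch's transform utilities instead of hand-rolled norm_/ack wrapper classes (toy usage, not the tutorial's code):

import torch
from botorch.test_functions import Ackley
from botorch.utils.transforms import normalize, unnormalize

fun = Ackley(dim=2, negate=True).to(dtype=torch.double)
fun.bounds[0, :].fill_(-5.0)
fun.bounds[1, :].fill_(10.0)

X_unit = torch.rand(8, fun.dim, dtype=torch.double)   # model space: [0, 1]^d
X_raw = unnormalize(X_unit, fun.bounds)               # problem space: [-5, 10]^d
Y = fun(X_raw).unsqueeze(-1)                          # evaluate in the raw domain
X_back = normalize(X_raw, fun.bounds)                 # back to [0, 1]^d for the GP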
from botorch.models.model_list_gp_regression import ModelListGP
from torch.quasirandom import SobolEngine

class Furbo_state():
Please use pythonic class naming (i.e. FurboState). Ideally, use type hints too, so that users understand what is going on.
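Something along these lines, for example (the field names below are illustrative, not the tutorial's actual attributes):

from dataclasses import dataclass

@dataclass
class FurboState:
    """Tracks the state of the FuRBO optimization loop, including restarts."""

    dim: int
    batch_size: int
    tr_length: float = 0.8
    best_value: float = float("-inf")
    restart_triggered: bool = False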
class Furbo_state():
    '''Class to track optimization status with restart'''
    # Initialization of the status
    def __init__(self,
Thanks for commenting what each variable does. However, the conventional way to do this would be via a docstring. This is also cleaner, and should be a quick fix.
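For example, the per-argument comments could move into a Google-style docstring (the argument names here are illustrative):

class FurboState:
    """Track the optimization status, including restarts.

    Args:
        dim: Dimensionality of the search space.
        batch_size: Number of candidates evaluated per iteration.
        tr_length: Initial edge length of the trust region.
    """

    def __init__(self, dim: int, batch_size: int, tr_length: float = 0.8) -> None:
        self.dim = dim
        self.batch_size = batch_size
        self.tr_length = tr_length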
import numpy as np
from matplotlib import rc

def ensure_tensor(x, tkwargs, dim=0):
Isn't torch.cat(x).to(**tkwargs) enough here?
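That is, assuming x is a list or tuple of tensors and tkwargs holds dtype/device, something like:

import torch

tkwargs = {"dtype": torch.double, "device": "cpu"}
x = [torch.rand(3), torch.rand(3)]
x_tensor = torch.cat(x).to(**tkwargs)   # concatenate along dim 0, then cast/move in one call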
    samples = multivariate_circular(x_candidate, radius, n_samples, lb=lb, ub=ub, **tkwargs)

    # Evaluate samples on the models of the objective -> yy Tensor
    state.Y_model.eval()
posterior() sets the model in eval mode, so these can be removed!
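A small self-contained illustration with a toy model (not the tutorial's state object):

import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

test_X = torch.rand(5, 2, dtype=torch.double)
with torch.no_grad():
    mean = model.posterior(test_X).mean.squeeze(-1)   # no explicit model.eval() required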
    samples_yy = posterior.mean.squeeze()

    # Evaluate samples on the models of the constraints -> yy Tensor
    state.C_model.eval()
and here
def multivariate_circular(centre,  # Centre of the multivariate distribution
This to me seems like the secret sauce for the method, right? If so, it could benefit from some additional explanation in the markdown cells!
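Purely as a hypothetical illustration for the markdown cells (this is not the tutorial's actual implementation), sampling around a trust-region centre could be explained along these lines:

import torch

def sample_around_centre(centre, radius, n_samples, lb, ub):
    """Hypothetical example: draw points from an isotropic Gaussian around `centre`,
    scaled by `radius`, and clamp them to the box [lb, ub]."""
    noise = torch.randn(n_samples, centre.shape[-1], dtype=centre.dtype)
    return (centre + radius * noise).clamp(min=lb, max=ub)

centre = torch.full((2,), 0.5, dtype=torch.double)
candidates = sample_around_centre(centre, radius=0.2, n_samples=8, lb=0.0, ub=1.0)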
def stopping_criterion(state, n_iteration):
    '''Function to evaluate if the maximum number of allowed iterations is reached.'''
return state.it_counter <= n_iteration
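That is, assuming the function is meant to return True while the iteration budget is not yet exhausted:

def stopping_criterion(state, n_iteration):
    """Return True as long as the iteration budget has not been exhausted."""
    return state.it_counter <= n_iteration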
Review of the FuRBO tutorial after the changes requested.
Added graphical abstract
Hi @hvarfner! Sorry for taking so long to get back to you. We reworked the entire tutorial to better address your comments. We divided the notebook into two parts: 1. the workflow of FuRBO, 2. two examples to compare with SCBO. Let us know what you think about it :).

Hi @paoloascia, this looks really nice! Love the effort of explaining the method in greater detail. I think that will come in handy for people who want to understand and use the method for themselves. I'll go through the code one more time (I would guess tomorrow), and then we should be good to merge.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3048      +/-   ##
==========================================
+ Coverage   99.98%   99.99%   +0.01%
==========================================
  Files         216      219       +3
  Lines       20581    20887     +306
==========================================
+ Hits        20577    20886     +309
+ Misses          4        1       -3

☔ View full report in Codecov by Sentry.
hvarfner left a comment
Hi @paoloascia,
On the newest version of BoTorch, I get the following error when trying to run the notebook:
ValueError: Expected `X` to be within the bounds of the test problem.
Traceback
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[36], line 13
10 X_ini = SobolEngine(dimension=fun.dim, scramble=True, seed=1).draw(n=n_init).to(**tkwargs)
12 # Run optimization loop
---> 13 X_all, Y_all, C_all = furbo_optimize(fun,
14 eval_objective,
15 [eval_c1, eval_c2],
16 X_ini,
17 batch_size = batch_size,
18 n_init = n_init,
19 max_budget = max_budget,
20 N_CANDIDATES = N_CANDIDATES)
Newer botorch versions throw this error when you evaluate a point outside the domain. However, it is likely that points outside the domain were already being evaluated before the latest changes; older versions simply didn't flag it. I think the offset here is the culprit:
def eval_objective(x):
"""This is a helper function we use to unnormalize and evalaute a point"""
return fun(unnormalize(x - 0.3, fun.bounds))
Please make sure that the notebook runs as intended on the latest version, and that the boundary behavior is as intended. Then, we're good to merge!
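One possible adjustment, if the 0.3 shift is intentional, would be to clamp the shifted point back into the unit cube before unnormalizing (a hypothetical fix; whether the shift should be kept at all is up to the authors):

def eval_objective(x):
    """Unnormalize and evaluate a point, keeping the shifted point inside [0, 1]."""
    return fun(unnormalize((x - 0.3).clamp(0.0, 1.0), fun.bounds))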
class speedReducer():
def eval_objective(x):
    """This is a helper function we use to unnormalize and evalaute a point"""
    return fun(unnormalize(x - 0.3, fun.bounds))
The boundary issue likely stems from this offset term
fun.bounds[0, :].fill_(-5)
fun.bounds[1, :].fill_(10)

def eval_objective(x):
You can use input_transform=Normalize(d=objective.dim, bounds=objective.bounds) when creating the model. Then, the untransforms and helpers are not needed.
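A hedged sketch of that suggestion (toy data, not the tutorial's setup):

import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from botorch.models.transforms.outcome import Standardize

bounds = torch.tensor([[-5.0, -5.0], [10.0, 10.0]], dtype=torch.double)
train_X = torch.rand(20, 2, dtype=torch.double) * 15.0 - 5.0   # raw-domain inputs in [-5, 10]^2
train_Y = train_X.pow(2).sum(dim=-1, keepdim=True)

# The model normalizes inputs (and standardizes outputs) internally, so no manual
# normalize/unnormalize helpers are needed around the data.
model = SingleTaskGP(
    train_X,
    train_Y,
    input_transform=Normalize(d=2, bounds=bounds),
    outcome_transform=Standardize(m=1),
)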
@hvarfner Thank you for the input. However, I'm not sure this approach would work with the current definition of the trust region. To make it compatible, I would guess that the trust region should be scaled by the bounds of the problem. I can try out the changes for both algorithms, but before doing so, I'd like to ask if there are any reasons why Normalize wasn't used in the SCBO tutorial. Given the necessary scaling, I think that using Normalize could increase the overall complexity.
    self.Y = torch.cat((self.Y, Y_next), dim=0)
    self.C = torch.cat((self.C, C_next), dim=0)

    # update GPR surrogates
@paoloascia Did you check this?
Motivation
Hi! This tutorial explains how to implement the Feasibility-Driven Trust Region Bayesian Optimization (FuRBO) method presented last September at AutoML 2025. We believe it is a relevant contribution, as it builds upon Scalable Constrained BO (SCBO) to enhance the treatment of constraints and accelerate convergence to a feasible region.
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
We did not change any existing code. The tutorial was tested prior to upload.
Related PRs
No documentation needs to be added.