
Conversation

@paoloascia

Motivation

Hi! This tutorial explains how to implement the Feasibility-Driven Trust Region Bayesian Optimization (FuRBO) method, presented last September at AutoML 2025. We believe it is a relevant contribution, as it builds upon Scalable Constrained BO (SCBO) to improve the treatment of constraints and accelerate convergence to a feasible region.

Have you read the Contributing Guidelines on pull requests?

Yes

Test Plan

We did not change any existing code. The tutorial was tested prior to upload.

Related PRs

No documentation needs to be added.

Commit with the tutorial on Feasibility-Driven Trust Region Bayesian Optimization.
Title update
SCBO URL update
Title update
@meta-cla

meta-cla bot commented Oct 17, 2025

Hi @paoloascia!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@meta-cla

meta-cla bot commented Oct 17, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@meta-cla meta-cla bot added the CLA Signed Do not delete this pull request or issue due to inactivity. label Oct 17, 2025
@hvarfner
Contributor

Thanks @paoloascia for the contribution! Overall I think this looks good and about ready to be merged. I'll get back later today with a few suggestions for improvements.

@paoloascia
Author

Hi @hvarfner ! Thank you for the comment. I'm looking forward to your feedback :).

Contributor

@hvarfner hvarfner left a comment


Looks good implementation-wise, but there are some changes that would be good to see.

  1. Please make the notebook more informative: the user should get a sense of what the method does from the markdown cells and plots. Right now, they have to go through the code. Since the community is by now well informed about what TuRBO does (and probably SCBO to some extent), try to focus on what distinguishes FuRBO from those two. This may entail some extra work, such as adding an informative plot, some math, and maybe even a comparison to SCBO.
  2. Please use a linter to clean up the code. This includes unused imports, having comments in the right places and using docstrings instead of comments where appropriate.
  3. Get rid of arguments that are not actually used. max_cholesky_size and "method": "L-BFGS-B" are covered by the current defaults, so they shouldn't be needed.
  4. Warnings & try/excepts: it's odd to me that GP fitting fails - I really don't think it should with so few observations. Please have a look at this. I'm happy to help here.

"from gpytorch.likelihoods import GaussianLikelihood\n",
"from gpytorch.mlls import ExactMarginalLogLikelihood\n",
"\n",
"from scipy.stats import invgauss\n",
Contributor


Unused imports

"from gpytorch.mlls import ExactMarginalLogLikelihood\n",
"\n",
"from scipy.stats import invgauss\n",
"from scipy.stats import ecdf\n",
Contributor


Unused imports

"metadata": {},
"outputs": [],
"source": [
"class norm_():\n",
Contributor


It would be good to see the utilities we have in botorch used here. The class Normalize(d=dim, bounds=bounds) should do the trick to normalize the inputs to [0, 1] for the models. Similarly, creating a different class ack that wraps Ackley can be avoided by using the same transforms.
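For readers following along, this is roughly what such an input transform computes; a torch-only sketch (the bounds below are illustrative, not the tutorial's):

```python
import torch

def normalize_inputs(X: torch.Tensor, bounds: torch.Tensor) -> torch.Tensor:
    """Map X from the problem domain to the unit cube [0, 1]^d.

    `bounds` has shape 2 x d: row 0 holds the lower bounds, row 1 the upper.
    This mirrors what botorch's Normalize input transform does internally.
    """
    return (X - bounds[0]) / (bounds[1] - bounds[0])

# Illustrative 2D domain [-5, 10]^2 (hypothetical, not the tutorial's bounds)
bounds = torch.tensor([[-5.0, -5.0], [10.0, 10.0]])
X = torch.tensor([[-5.0, 10.0], [2.5, 2.5]])
print(normalize_inputs(X, bounds))  # corners map to 0 and 1, the midpoint to 0.5
```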

"from botorch.models.model_list_gp_regression import ModelListGP\n",
"from torch.quasirandom import SobolEngine\n",
"\n",
"class Furbo_state():\n",
Contributor


Please use pythonic class naming (i.e. FurboState). Ideally, use type hints too, so that users understand what is going on.
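For illustration, a PEP 8-style sketch of such a state class with a docstring and type hints (the field names and defaults here are hypothetical, not FuRBO's actual ones):

```python
from dataclasses import dataclass

@dataclass
class FurboState:
    """Tracks the state of an optimization run (fields are illustrative).

    Attributes:
        dim: Dimensionality of the search space.
        batch_size: Number of candidates evaluated per iteration.
        tr_radius: Current trust-region radius in the normalized domain.
        best_value: Best feasible objective value observed so far.
    """
    dim: int
    batch_size: int
    tr_radius: float = 0.8
    best_value: float = float("-inf")

state = FurboState(dim=10, batch_size=4)
print(state)  # FurboState(dim=10, batch_size=4, tr_radius=0.8, best_value=-inf)
```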

"class Furbo_state():\n",
" '''Class to track optimization status with restart'''\n",
" # Initialization of the status\n",
" def __init__(self, \n",
Contributor


Thanks for commenting on what each variable does. However, the conventional way to do this would be via a docstring. This is also cleaner, and should be a quick fix.

"import numpy as np\n",
"from matplotlib import rc\n",
"\n",
"def ensure_tensor(x, tkwargs, dim=0):\n",
Contributor


Isn't torch.cat(x).to(**tkwargs) enough here?
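For context, the suggested one-liner concatenates a list of tensors and moves the result to the target dtype/device in one step; a small sketch with illustrative values:

```python
import torch

tkwargs = {"dtype": torch.double, "device": "cpu"}

# A list of 1-D tensors, e.g. one per constraint evaluation
parts = [torch.tensor([1.0, 2.0]), torch.tensor([3.0])]

x = torch.cat(parts).to(**tkwargs)
print(x)  # tensor([1., 2., 3.], dtype=torch.float64)
```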

" samples = multivariate_circular(x_candidate, radius, n_samples, lb=lb, ub=ub, **tkwargs)\n",
" \n",
" # Evaluate samples on the models of the objective -> yy Tensor\n",
" state.Y_model.eval()\n",
Contributor


Calling posterior() puts the model in eval mode, so these calls can be removed!

" samples_yy = posterior.mean.squeeze()\n",
" \n",
" # Evaluate samples on the models of the constraints -> yy Tensor\n",
" state.C_model.eval()\n",
Contributor


and here

"metadata": {},
"outputs": [],
"source": [
"def multivariate_circular(centre, # Centre of the multivariate distribution\n",
Contributor


This to me seems like the secret sauce for the method, right? If so, it could benefit from some additional explanation in the markdown cells!
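As a rough illustration only (this is a guess at the idea from the function's name and signature, not the tutorial's actual implementation), such a sampler might draw points from an isotropic Gaussian around an incumbent and clamp them to the box bounds:

```python
import torch

def multivariate_circular_sketch(centre: torch.Tensor, radius: float,
                                 n_samples: int, lb: float = 0.0,
                                 ub: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch: sample points around `centre` with spread `radius`,
    clamped to the box [lb, ub]^d. Not FuRBO's actual implementation."""
    d = centre.shape[-1]
    draws = centre + radius * torch.randn(n_samples, d)
    return draws.clamp(lb, ub)

torch.manual_seed(0)
samples = multivariate_circular_sketch(torch.full((5,), 0.5), radius=0.1, n_samples=16)
print(samples.shape)  # torch.Size([16, 5])
```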

"outputs": [],
"source": [
"def stopping_criterion(state, n_iteration):\n",
" '''Function to evaluate if the maximum number of allowed iterations is reached.'''\n",
Contributor


return state.it_counter <= n_iteration
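That is, the whole function body can collapse to the comparison itself; a minimal sketch (the attribute name comes from the snippet, the stub class is illustrative):

```python
class State:
    """Minimal stand-in for the tutorial's optimization state."""
    def __init__(self, it_counter: int):
        self.it_counter = it_counter

def stopping_criterion(state: State, n_iteration: int) -> bool:
    """Return True while the iteration budget is not yet exhausted."""
    return state.it_counter <= n_iteration

print(stopping_criterion(State(3), n_iteration=5))  # True
print(stopping_criterion(State(6), n_iteration=5))  # False
```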

Review of the FuRBO tutorial after the changes requested.
Added graphical abstract
@paoloascia
Author

Hi @hvarfner!

Sorry for taking so long to get back to your comments. We reworked the entire tutorial to address them, and divided the notebook into two parts: (1) the FuRBO workflow, and (2) two examples comparing against SCBO.

Let us know what you think about it :).
Cheers!

@hvarfner
Contributor

hvarfner commented Dec 5, 2025

Hi @paoloascia,

This looks really nice! Love the effort of explaining the method in greater detail. I think that will come in handy for people who want to understand and use the method for themselves.

I'll go through the code one more time (I would guess tomorrow), and then we should be good to merge.

@meta-codesync

meta-codesync bot commented Dec 8, 2025

@hvarfner has imported this pull request. If you are a Meta employee, you can view this in D88643885.

@codecov

codecov bot commented Dec 8, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 99.99%. Comparing base (09502f9) to head (c7aa7b0).
⚠️ Report is 34 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3048      +/-   ##
==========================================
+ Coverage   99.98%   99.99%   +0.01%     
==========================================
  Files         216      219       +3     
  Lines       20581    20887     +306     
==========================================
+ Hits        20577    20886     +309     
+ Misses          4        1       -3     


Contributor

@hvarfner hvarfner left a comment


Hi @paoloascia ,

On the newest version of BoTorch, I get the following error when trying to run the notebook:

ValueError: Expected `X` to be within the bounds of the test problem.
Traceback
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[36], line 13
     10 X_ini = SobolEngine(dimension=fun.dim, scramble=True, seed=1).draw(n=n_init).to(**tkwargs)
     12 # Run optimization loop    
---> 13 X_all, Y_all, C_all = furbo_optimize(fun,
     14                                      eval_objective, 
     15                                      [eval_c1, eval_c2],
     16                                      X_ini,
     17                                      batch_size = batch_size,
     18                                      n_init = n_init,
     19                                      max_budget = max_budget,
     20                                      N_CANDIDATES = N_CANDIDATES) 

Newer botorch versions throw this error when you evaluate a point outside the domain. However, it is likely that points outside the domain were evaluated before the latest changes. I think the offset here is the culprit:

def eval_objective(x):
    """This is a helper function we use to unnormalize and evalaute a point"""
    return fun(unnormalize(x - 0.3, fun.bounds))

Please make sure that the notebook runs as intended on the latest version, and that the boundary behavior is as intended. Then, we're good to merge!
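One defensive option (a sketch only; whether the -0.3 shift is intended at all is the real question) is to clamp the shifted point back into the unit cube before unnormalizing, so the evaluation can never leave the domain. The unnormalize below is a local stand-in for botorch's helper, and the bounds are illustrative:

```python
import torch

def unnormalize(x: torch.Tensor, bounds: torch.Tensor) -> torch.Tensor:
    """Local stand-in for botorch's unnormalize: map [0, 1]^d to the domain."""
    return bounds[0] + x * (bounds[1] - bounds[0])

bounds = torch.tensor([[-5.0, -5.0], [10.0, 10.0]])
x = torch.tensor([0.1, 0.9])

# x - 0.3 can leave [0, 1]; clamping keeps the evaluated point in-bounds
shifted = (x - 0.3).clamp(0.0, 1.0)
print(unnormalize(shifted, bounds))
```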

"metadata": {},
"outputs": [],
"source": [
"class speedReducer():\n",

"\n",
"def eval_objective(x):\n",
" \"\"\"This is a helper function we use to unnormalize and evalaute a point\"\"\"\n",
" return fun(unnormalize(x - 0.3, fun.bounds))\n"
Contributor


The boundary issue likely stems from this offset term

"fun.bounds[0, :].fill_(-5)\n",
"fun.bounds[1, :].fill_(10)\n",
"\n",
"def eval_objective(x):\n",
Contributor


You can use input_transform=Normalize(d=objective.dim, bounds=objective.bounds) when creating the model. Then, the unnormalize calls and helper functions are not needed.

Author


@hvarfner Thank you for the input. However, I'm not sure this approach would work with the current definition of the trust region. To make it compatible, I would guess that the trust region should be scaled by the bounds of the problem. I can try out the changes for both algorithms, but before doing so, I'd like to ask if there are any reasons why Normalize wasn't used in the SCBO tutorial. Given the necessary scaling, I think that using Normalize could increase the overall complexity.

" self.Y = torch.cat((self.Y, Y_next), dim=0)\n",
" self.C = torch.cat((self.C, C_next), dim=0)\n",
"\n",
" # update GPR surrogates\n",
Contributor


@paoloascia Did you check this?


Labels

CLA Signed Do not delete this pull request or issue due to inactivity.
