PyBayesOpt.jl Documentation
Julia wrapper for Bayesian optimization tools available in Python.
Overview
PyBayesOpt.jl provides Julia interfaces to popular Python Bayesian optimization libraries:
- BayesianOptimization - A Python library for Bayesian optimization
- BoTorch - A Bayesian optimization library built on PyTorch for q-batch optimization
The package implements the Optim.jl interface for seamless integration with Julia's optimization ecosystem.
Quick Start
Basic BoTorch Q-Batch Optimization
using PyBayesOpt
using Optim
# Define a test function to minimize
objective(x) = (x[1] - 2)^2 + (x[2] - 1)^2
# Set up optimization parameters
params = BoTorchQBatch(
bounds = [-5.0 5.0; -5.0 5.0], # [min max] for each dimension
nbatch = 4, # batch size
ninit = 10, # initialization iterations
nopt = 20 # optimization iterations
)
# Run optimization using Optim.jl interface
result = optimize(objective, params)
# Access results
println("Best point: ", result.minimizer)
println("Best value: ", result.minimum)
println("Function evaluations: ", result.f_calls)BayesianOptimization Wrapper
# Using the BayesianOptimization wrapper
params = BayesianOptimization(
bounds = [-5.0 5.0; -5.0 5.0],
ninit = 10,
nopt = 50
)
result = optimize(objective, params)
Interactive Optimization Loop
For more control over the optimization process:
# Interactive optimization with BoTorch
state = BoTorchQBatchState(params=BoTorchQBatch(bounds=[-5.0 5.0; -5.0 5.0]))
while !finished(state)
# Get next batch of points to evaluate
pts = ask!(state)
# Evaluate function (can be parallelized)
values = [objective(pts[:, i]) for i in 1:size(pts, 2)]
# Provide results back to optimizer
tell!(state, pts, values)
# Check current best
if state.optimization_complete
best_pt, best_val = bestpoint(state)
println("Current best: $best_pt -> $best_val")
end
end
Benchmark Functions
The package includes several standard optimization benchmarks:
# Use benchmark functions
branin = BraninFunction()
result = optimize(branin, BoTorchQBatch(bounds=branin.bounds))
ackley = AckleyFunction(dim=3) # 3D Ackley function
result = optimize(ackley, BoTorchQBatch(bounds=ackley.bounds))
rosenbrock = RosenbrockFunction(dim=2)
result = optimize(rosenbrock, BoTorchQBatch(bounds=rosenbrock.bounds))
Using Optim.jl Interface
PyBayesOpt implements the Optim.jl optimize function interface, making it a drop-in replacement for other Optim.jl optimizers:
using Optim
using PyBayesOpt
# Your objective function
function objective(x)
return sum((x .- [2.0, 3.0]).^2) # minimized at [2.0, 3.0]
end
# Create method instance
method = BoTorchQBatch(
bounds = [0.0 5.0; 0.0 5.0],
nbatch = 2,
ninit = 8,
nopt = 15,
acqmethod = :qEI
)
# Use with Optim.optimize
result = Optim.optimize(objective, method)
# Standard Optim.jl result interface
println("Minimizer: ", Optim.minimizer(result))
println("Minimum: ", Optim.minimum(result))
println("F calls: ", Optim.f_calls(result))
# Additional methods from BoTorch
if isa(result, BoTorchQBatchState)
# Evaluate posterior at any point
mean, var = evalposterior(result, [2.0, 3.0])
println("Posterior at [2,3]: mean=$mean, variance=$var")
# Sample from the posterior minimum distribution
min_point, std_dev = sampleposteriormin(result, nsamples=1000)
println("Estimated minimum: $min_point ± $std_dev")
end
Advanced Usage
Acquisition Functions
BoTorch supports various acquisition functions:
method = BoTorchQBatch(
acqmethod = :qLogEI, # Options: :qEI, :qNEI, :qLogEI, :qLogNEI, :qUCB, :qPI
qUCB_beta = 2.5 # Only used for :qUCB
)
Parallel Evaluation
The q-batch approach naturally supports parallel function evaluation:
using Distributed
addprocs(4) # Add 4 worker processes
@everywhere using PyBayesOpt
function parallel_eval(objective, state)
while !finished(state)
pts = ask!(state)
# Parallel evaluation using pmap
values = pmap(i -> objective(pts[:, i]), 1:size(pts, 2))
tell!(state, pts, values)
end
return state
end
result = parallel_eval(objective, BoTorchQBatchState(params=method))
API Reference
PyBayesOpt — Module
PyBayesOpt.jl
Julia wrapper to some Bayesian optimization tools available in Python:
- BayesianOptimization aka bayes_opt or bayesian-optimization
- A q-batch Bayesian optimizer implemented using BoTorch
It uses a very partial implementation of the optimizer interface provided by Optim.jl.
Installation
Python prerequisites
The package uses PyCall.jl to access the Python code, and therefore requires a Python installation that works well with that package. See requirements.txt for the Python packages to be installed.
pip install -r requirements.txt
If you use this under Linux, you may have to set the environment variable TORCH_USE_RTLD_GLOBAL=1 in order to avoid loading problems with the MKL library:
export TORCH_USE_RTLD_GLOBAL=1
Installation via PackageNursery registry
The package can be installed with the Julia package manager in the standard way. For the time being, it is registered in the package registry https://github.com/j-fu/PackageNursery. To add the registry (needed only once) and to install the package, type ] in the Julia REPL to enter the Pkg REPL mode and run:
pkg> registry add https://github.com/j-fu/PackageNursery
pkg> add PyBayesOpt
Please be aware that adding a registry to your Julia installation requires trusting the registry maintainer to handle things correctly. In particular, the registry should not register higher versions of packages which are already registered in the Julia General registry. One can check this by visiting the above-mentioned GitHub repository URL and inspecting its contents.
Installation via repository URL
using Pkg
Pkg.add(url="https://github.com/j-fu/PyBayesOpt.jl")
Quick Start
Basic Usage with Optim.jl Interface
using PyBayesOpt
using Optim
# Define a function to minimize
f(x) = (x[1] - 2.0)^2 + (x[2] - 1.0)^2
# Create optimizer - BoTorch q-batch is recommended
method = BoTorchQBatch(
bounds = [0.0 4.0; -1.0 3.0], # [min max] for each dimension
nbatch = 4, # evaluate 4 points per iteration
ninit = 10, # 10 initialization iterations
nopt = 15 # 15 optimization iterations
)
# Optimize using standard Optim.jl interface
result = optimize(f, method)
# Access results
println("Best point: ", result.minimizer) # [2.0, 1.0]
println("Best value: ", result.minimum) # ≈ 0.0
println("Evaluations: ", result.f_calls) # 100 total evaluationsInteractive Optimization Loop
For maximum control over the optimization process:
state = BoTorchQBatchState(params=method)
while !finished(state)
# Get next batch of points to evaluate
candidates = ask!(state)
# Evaluate function (can be parallelized)
values = [f(candidates[:, i]) for i in 1:size(candidates, 2)]
# Provide results back to optimizer
tell!(state, candidates, values)
end
# Final result
best_point, best_value = bestpoint(state)
Using Benchmark Functions
# Built-in benchmark functions
branin = BraninFunction()
result = optimize(branin, BoTorchQBatch(bounds=branin.bounds))
ackley = AckleyFunction(dim=5) # 5-dimensional
result = optimize(ackley, BoTorchQBatch(bounds=ackley.bounds))
Features
Acquisition Functions
BoTorch supports multiple acquisition functions, selected via the acqmethod keyword (see the sketch after this list):
- :qEI / :qExpectedImprovement - Expected Improvement
- :qLogEI / :qLogExpectedImprovement - Log Expected Improvement (recommended)
- :qNEI / :qNoisyExpectedImprovement - Noisy Expected Improvement
- :qLogNEI / :qLogNoisyExpectedImprovement - Log Noisy Expected Improvement
- :qUCB / :qUpperConfidenceBound - Upper Confidence Bound
- :qPI / :qProbabilityOfImprovement - Probability of Improvement
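For example, to select the UCB variant (a minimal sketch; all other fields keep their defaults):
method = BoTorchQBatch(
    bounds = [-1.0 1.0; -1.0 1.0],
    acqmethod = :qUCB,  # Upper Confidence Bound
    qUCB_beta = 2.0     # exploration/exploitation trade-off; only used for :qUCB
)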
Parallel Evaluation
The q-batch approach naturally supports parallel function evaluation:
using Base.Threads
# In the optimization loop, after candidates = ask!(state):
nbatch = size(candidates, 2)
values = zeros(nbatch)
@threads for i in 1:nbatch
values[i] = expensive_function(candidates[:, i])
end
Posterior Analysis
After optimization, you can analyze the Gaussian process posterior:
# Evaluate posterior at any point
mean, variance = evalposterior(result, [1.0, 2.0])
# Sample from the posterior minimum distribution
min_point, std_dev = sampleposteriormin(result, nsamples=1000)
println("Estimated global optimum: $min_point ± $std_dev")
Optimizers
BoTorchQBatch (Recommended)
Advanced q-batch Bayesian optimization using BoTorch:
BoTorchQBatch(
bounds = [-1.0 1.0; -1.0 1.0], # Optimization bounds
nbatch = 4, # Batch size
ninit = 10, # Initialization iterations
nopt = 20, # Optimization iterations
acqmethod = :qLogEI, # Acquisition function
seed = 1234, # Random seed
verbose = true, # Print progress
acq_nrestarts = 20, # Acquisition optimization restarts
acq_nsamples = 512, # Raw samples for acquisition optimization
qUCB_beta = 2.0 # Beta parameter for qUCB
)
BayesianOptimization
Classical Bayesian optimization wrapper:
BayesianOptimization(
bounds = [-1.0 1.0; -1.0 1.0], # Optimization bounds
ninit = 10, # Initial random samples
nopt = 50, # Optimization iterations
verbose = 0, # Verbosity level
seed = 1 # Random seed
)
Examples
See the examples/ directory for:
- quick_start.jl - Basic usage examples
- optim_interface_example.jl - Comprehensive Optim.jl interface demo
AI usage statement
GitHub Copilot with Claude Sonnet 4 and GPT-5 was used to design an initial Python version of the BoTorch-based algorithm, and to brush up the documentation and testing infrastructure.
PyBayesOpt.BoTorchQBatch — Type
struct BoTorchQBatch
Struct describing the setup for BoTorch-based q-batch Bayesian optimization.
Fields:
- bounds::Matrix{Float64} = [-1 1]': ndim x 2 matrix of evaluation bounds
- nbatch::Int = 1: batch size for evaluations of the black-box model
- ninit::Int = 10: number of initialization iterations, resulting in nbatch*ninit evaluations
- nopt::Int = 10: number of optimization iterations, resulting in nbatch*nopt evaluations
- acqmethod::Symbol = :qLogEI: acquisition method. Valid methods (see the BoTorch documentation):
  - :qEI, :qExpectedImprovement
  - :qNEI, :qNoisyExpectedImprovement
  - :qLogEI, :qLogExpectedImprovement
  - :qLogNEI, :qLogNoisyExpectedImprovement
  - :qUCB, :qUpperConfidenceBound
  - :qPI, :qProbabilityOfImprovement
- seed::Int = 1234: random seed
- verbose::Bool = true: verbosity
- acq_nrestarts::Int = 20: num_restarts parameter in optimize_acqf
- acq_nsamples::Int = 512: raw_samples parameter in optimize_acqf
- qUCB_beta::Float64 = 2.0: beta parameter for the qUCB acquisition method
PyBayesOpt.BoTorchQBatchState — Type
struct BoTorchQBatchState
State for BoTorchQBatch, also used as the result struct.
An instance of this state can be used either as the method parameter for the optimize function, or in a user-implemented loop as shown below:
state = BoTorchQBatchState(; params)
while !finished(state)
pts = ask!(state)
values = zeros(state.params.nbatch)
Threads.@threads for i in 1:size(pts, 2)
values[i] = func(pts[:, i])
end
tell!(state, pts, values)
end
Fields
- X_ini::Union{Nothing, Matrix{Float64}} = nothing: initialization points
- X_obs::Union{Nothing, PyObject} = nothing: training points
- Y_obs::Union{Nothing, PyObject} = nothing: training values
- gpmodel::Union{Nothing, PyObject} = nothing: Gaussian process model
- evaluations_used::Int = 0: number of evaluations done
- initialization_complete::Bool = false: flag indicating initialization state
- optimization_complete::Bool = false: flag indicating optimization state
- init_iterations_done::Int = 0: initialization iterations performed
- optim_iterations_done::Int = 0: optimization iterations performed
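The bookkeeping fields can be inspected at any time, e.g. to report progress (an illustrative sketch using the documented fields):
state = BoTorchQBatchState(params = BoTorchQBatch(bounds = [-1.0 1.0; -1.0 1.0]))
# ... run ask!/tell! iterations ...
println("evaluations used: ", state.evaluations_used)
println("initialization complete: ", state.initialization_complete)
println("optimization complete: ", state.optimization_complete)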
PyBayesOpt.BayesianOptimization — Type
struct BayesianOptimization
Optimizer wrapping the Python BayesianOptimization library.
Fields:
- bounds::Matrix{Float64}
- ninit::Int = 10
- nopt::Int = 100
- verbose::Int = 0
- seed::Int = 1
Optim.optimize — Function
Optim.optimize(func, params::BoTorchQBatch)
Minimize black-box function func using the BoTorchQBatch Bayesian optimization method.
This provides an Optim.jl-compatible interface so you can call optimize(func, method) or Optim.optimize(func, method) directly. The optimization proceeds in two phases:
- Initialization: params.ninit batches of random (Sobol) points are evaluated.
- Optimization: params.nopt iterations of model-based batch acquisition optimization.
Multi-threading: If Threads.nthreads() > 1, function evaluations within a batch are threaded.
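For example, a quick check that threaded batch evaluation is active (the --threads flag and the JULIA_NUM_THREADS variable are standard Julia):
using Base.Threads
# Start Julia with e.g. `julia --threads 4` (or set JULIA_NUM_THREADS=4)
# so that the nbatch evaluations of each batch run in parallel.
println("Running with ", nthreads(), " threads")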
Arguments
- func::Function: Objective to minimize (should accept a vector of reals and return a scalar).
- params::BoTorchQBatch: Configuration object.
Returns
BoTorchQBatchState: Final state, which behaves like an Optim.MultivariateOptimizationResults object (supports minimum, minimizer, f_calls, etc.).
Optim.optimize(f, params::BayesianOptimization)
Minimize black-box function f using the Python BayesianOptimization library via the BayesianOptimization parameter struct.
Workflow
- Construct the bounds dictionary.
- Wrap the objective as a maximization target by negating it (the Python library maximizes); see the sketch below.
- Run random initialization (ninit) followed by model-guided iterations (nopt).
- Convert the best (max) record back to a minimization result.
Returns a BayesianOptimizationResult with minimum, minimizer, and bookkeeping fields.
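The negation step rests on the identity min f = -max(-f); a minimal illustration (not part of the package API):
f(x) = (x[1] - 2.0)^2   # objective to be minimized
g(x) = -f(x)            # negated objective handed to the maximizing Python library
xstar = [2.0]           # maximizer of g and minimizer of f
@assert f(xstar) == -g(xstar)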
Base.summary — Function
Optim.summary(state::BoTorchQBatchState)
Print a concise human-readable summary of a BoTorchQBatchState, including phase, current best point, best value, and (once optimization is complete) posterior statistics at the incumbent.
Intended mainly for verbose / debugging output; called automatically when params.verbose=true.
PyBayesOpt.initializing — Function
initializing(qbatch_state)
Tell whether the optimization is in the initialization phase.
PyBayesOpt.optimizing — Function
optimizing(qbatch_state)
Tell whether the optimization is in the optimization loop.
PyBayesOpt.finished — Function
finished(qbatch_state)
Tell whether the optimization is finished.
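The three predicates reflect the optimizer's lifecycle; schematically (an illustrative sketch):
state = BoTorchQBatchState(params = BoTorchQBatch(bounds = [-1.0 1.0; -1.0 1.0]))
initializing(state)  # true while random (Sobol) batches are still being collected
optimizing(state)    # true during the model-based acquisition phase
finished(state)      # true once all optimization iterations are done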
PyBayesOpt.ask! — Function
ask!(qbatch_state)
Ask for a new batch of points to be evaluated. Returns a dim x batchsize matrix. Depending on the phase, this may generate initial candidates or optimize the acquisition function.
PyBayesOpt.tell! — Function
tell!(qbatch_state, candidates, values)
Provide newly evaluated candidate points to the optimizer, updating the initialization or training set as appropriate.
PyBayesOpt.bestpoint — Function
bestpoint(qbatch_state)
Return the best point and function value from the observation set.
PyBayesOpt.evalposterior — Function
evalposterior(qbatch_state, point)
Evaluate the posterior at a given point. Returns the posterior mean and variance.
PyBayesOpt.sampleposteriormin — Function
sampleposteriormin(qbatch_state; nsamples)
Sample the posterior minimum using MaxPosteriorSampling.
Returns the estimated minimum point and the estimated standard deviation of its coordinates. Use evalposterior to obtain the function value at that point.
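A typical combination of the two calls (assuming state is a finished BoTorchQBatchState):
min_point, std_dev = sampleposteriormin(state, nsamples = 1000)
mean, var = evalposterior(state, min_point)
println("Estimated minimum at $min_point ± $std_dev, value ≈ $mean")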
PyBayesOpt.BayesianOptimizationResult — Type
struct BayesianOptimizationResult <: Optim.OptimizationResults
Result struct for BayesianOptimization, compatible with the Optim.jl interface.
Fields
- params::BayesianOptimization: The optimization parameters used
- f_calls::Int: Number of function evaluations performed
- minimizer::Vector{Float64}: The best point found
- minimum::Float64: The best function value found
PyBayesOpt.AbstractBenchmarkFunction — Type
abstract type AbstractBenchmarkFunction
Abstract base type for benchmark optimization functions.
All benchmark functions should implement:
- Function call syntax f(x), where x is a vector
- Fields bounds, optimal_value, and optionally optimal_point
PyBayesOpt.SimpleFunction — Type
SimpleFunction <: AbstractBenchmarkFunction
A simple quadratic test function for optimization benchmarking.
Function: f(x) = x[1]² + (x[2] - 1)² - 1
Fields
- bounds::Matrix{Float64}: Optimization bounds [[-10 10]; [-10 10]]
- optimal_value::Float64: Known optimal value -1.0
- optimal_point::Vector{Float64}: Known optimal point [0.0, 1.0]
Example
simple_func = SimpleFunction()
result = optimize(simple_func, BoTorchQBatch(bounds=simple_func.bounds))
PyBayesOpt.BraninFunction — Type
BraninFunction <: AbstractBenchmarkFunction
The Branin function - a common benchmark for global optimization.
Function: f(x₁,x₂) = a(x₂ - bx₁² + cx₁ - r)² + s(1-t)cos(x₁) + s where a=1, b=5.1/(4π²), c=5/π, r=6, s=10, t=1/(8π)
Fields
- bounds::Matrix{Float64}: Optimization bounds [[-5 10]; [0 15]]
- optimal_value::Float64: Global minimum value 0.397887
The function has 3 global minima at approximately:
(-π, 12.275), (π, 2.275), (9.42478, 2.475)
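For reference, the formula translates directly to Julia (a standalone sketch, not the package's internal implementation):
function branin(x)
    a, b, c = 1.0, 5.1 / (4π^2), 5 / π
    r, s, t = 6.0, 10.0, 1 / (8π)
    return a * (x[2] - b * x[1]^2 + c * x[1] - r)^2 + s * (1 - t) * cos(x[1]) + s
end
branin([-π, 12.275])  # ≈ 0.397887, one of the three global minima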
Example
branin_func = BraninFunction()
result = optimize(branin_func, BoTorchQBatch(bounds=branin_func.bounds))
PyBayesOpt.AckleyFunction — Type
AckleyFunction <: AbstractBenchmarkFunction
The Ackley function - a widely used benchmark function for global optimization.
Function: f(x) = -a·exp(-b·√(1/d·∑xᵢ²)) - exp(1/d·∑cos(c·xᵢ)) + a + e where typically a=20, b=0.2, c=2π
Fields
- dim::Int: Problem dimension (default: 2)
- bounds::Matrix{Float64}: Optimization bounds (default: [-32.768, 32.768] for each dimension)
- optimal_value::Float64: Global minimum value 0.0
- optimal_point::Vector{Float64}: Global minimum at origin
The function has a global minimum at the origin with many local minima.
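The formula above in plain Julia (a standalone sketch, not the package's internal implementation; ℯ is Euler's number):
function ackley(x; a = 20.0, b = 0.2, c = 2π)
    d = length(x)
    return -a * exp(-b * sqrt(sum(abs2, x) / d)) - exp(sum(cos, c .* x) / d) + a + ℯ
end
ackley(zeros(3))  # ≈ 0.0, the global minimum at the origin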
Example
ackley_func = AckleyFunction(dim=5) # 5-dimensional Ackley function
result = optimize(ackley_func, BoTorchQBatch(bounds=ackley_func.bounds))
PyBayesOpt.RosenbrockFunction — Type
RosenbrockFunction <: AbstractBenchmarkFunction
The Rosenbrock function (also known as "Rosenbrock's valley" or "banana function").
Function: f(x) = ∑[100(xᵢ₊₁ - xᵢ²)² + (1 - xᵢ)²] for i=1 to n-1
Fields
- dim::Int: Problem dimension (default: 2)
- bounds::Matrix{Float64}: Optimization bounds (default: [-5, 10] for each dimension)
- optimal_value::Float64: Global minimum value 0.0
- optimal_point::Vector{Float64}: Global minimum at ones(dim)
The function has a global minimum in a narrow, curved valley, making it challenging for optimization algorithms.
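The sum in the formula maps directly to Julia (a standalone sketch, not the package's internal implementation):
function rosenbrock(x)
    return sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1)
end
rosenbrock(ones(3))  # == 0.0, the global minimum at ones(dim)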
Example
rosenbrock_func = RosenbrockFunction(dim=3) # 3-dimensional Rosenbrock function
result = optimize(rosenbrock_func, BoTorchQBatch(bounds=rosenbrock_func.bounds))