Hyperparameter Tuning with Non-linear Optimization
Source: R/TunerBatchNLoptr.R
mlr_tuners_nloptr.Rd
Subclass for non-linear optimization (NLopt). Calls nloptr::nloptr from package nloptr.
Source
Johnson, S G (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.
Details
The termination conditions stopval, maxtime and maxeval of nloptr::nloptr() are deactivated and replaced by the bbotk::Terminator subclasses. The x and function value tolerance termination conditions (xtol_rel = 10^-4, xtol_abs = rep(0.0, length(x0)), ftol_rel = 0.0 and ftol_abs = 0.0) are still available and implemented with their package defaults. To deactivate these conditions, set them to -1.
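For example, a minimal sketch (assuming the nloptr package is installed) that disables all four tolerance conditions so that only the chosen Terminator stops the tuning:

library(mlr3tuning)

# disable the nloptr tolerance conditions; termination is then controlled
# solely by the Terminator of the tuning instance
tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = -1,
  xtol_abs = -1,
  ftol_rel = -1,
  ftol_abs = -1
)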
Logging
All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
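For instance, to reduce the tuning log to warnings:

# lower the threshold of the bbotk logger; info messages are then suppressed
lgr::get_logger("bbotk")$set_threshold("warn")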
Optimizer
This Tuner is based on bbotk::OptimizerBatchNLoptr which can be applied on any black box optimization problem. See also the documentation of bbotk.
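A short sketch, assuming the optimizer is registered under the key "nloptr" in the bbotk optimizer dictionary:

library(bbotk)

# the underlying optimizer can be constructed directly and applied to any
# black box optimization problem defined in bbotk
optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")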
Parameters
algorithm
character(1)
eval_g_ineq
function()
xtol_rel
numeric(1)
xtol_abs
numeric(1)
ftol_rel
numeric(1)
ftol_abs
numeric(1)
start_values
character(1)
Create random start values or use the center of the search space? In the latter case, it is the center of the parameters before a trafo is applied (a usage sketch follows below).
For the meaning of the control parameters, see nloptr::nloptr() and nloptr::nloptr.print.options().
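As a usage sketch, these control parameters are passed directly to the tuner constructor; for example, starting the search at the center of the untransformed search space:

# start the search at the center of the search space (before trafo)
tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  start_values = "center"
)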
Resources
There are several sections about hyperparameter optimization in the mlr3book.
Getting started with hyperparameter optimization.
An overview of all tuners can be found on our website.
Tune a support vector machine on the Sonar data set.
Learn about tuning spaces.
Estimate the model performance with nested resampling.
Learn about multi-objective optimization.
Simultaneously optimize hyperparameters and use early stopping with XGBoost.
Automate the tuning.
The gallery features a collection of case studies and demos about optimization.
Learn more advanced methods with the Practical Tuning Series.
Learn about hotstarting models.
Run the default hyperparameter configuration of learners as a baseline.
Use the Hyperband optimizer with different budget parameters.
The cheatsheet summarizes the most important functions of mlr3tuning.
Progress Bars
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable it with progressr::handlers("progress").
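A sketch of enabling a progress bar around a tuning call, assuming the progressr and progress packages are installed:

library(progressr)

# use the progress package as backend and wrap the tuning call
handlers("progress")
with_progress(
  instance <- tune(
    tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"),
    task = tsk("penguins"),
    learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
    resampling = rsmp("holdout"),
    measure = msr("classif.ce")
  )
)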
Super classes
mlr3tuning::Tuner
-> mlr3tuning::TunerBatch
-> mlr3tuning::TunerBatchFromOptimizerBatch
-> TunerBatchNLoptr
Examples
# Hyperparameter Optimization
# \donttest{
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)
# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"),
task = tsk("penguins"),
learner = learner,
resampling = rsmp("holdout"),
measure = msr("classif.ce")
)
# best performing hyperparameter configuration
instance$result
#> cp learner_param_vals x_domain classif.ce
#> <num> <list> <list> <num>
#> 1: -5.081957 <list[2]> <list[1]> 0.07826087
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#> cp classif.ce runtime_learners timestamp warnings errors
#> <num> <num> <num> <POSc> <int> <int>
#> 1: -5.081957 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 2: -5.081957 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 3: -5.081957 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 4: -3.355018 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 5: -6.808896 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 6: -5.064688 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 7: -5.099226 0.07826087 0.005 2024-11-22 11:43:53 0 0
#> 8: -5.080230 0.07826087 0.005 2024-11-22 11:43:54 0 0
#> 9: -5.083684 0.07826087 0.005 2024-11-22 11:43:54 0 0
#> 10: -5.081957 0.07826087 0.006 2024-11-22 11:43:54 0 0
#> x_domain batch_nr resample_result
#> <list> <int> <list>
#> 1: <list[1]> 1 <ResampleResult>
#> 2: <list[1]> 2 <ResampleResult>
#> 3: <list[1]> 3 <ResampleResult>
#> 4: <list[1]> 4 <ResampleResult>
#> 5: <list[1]> 5 <ResampleResult>
#> 6: <list[1]> 6 <ResampleResult>
#> 7: <list[1]> 7 <ResampleResult>
#> 8: <list[1]> 8 <ResampleResult>
#> 9: <list[1]> 9 <ResampleResult>
#> 10: <list[1]> 10 <ResampleResult>
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))
# }
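The tuned learner can then be used for prediction; a brief sketch on the same task:

# predict with the tuned learner and score the predictions
predictions = learner$predict(tsk("penguins"))
predictions$score(msr("classif.ce"))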