
Hyperparameter Tuning with Covariance Matrix Adaptation Evolution Strategy
Source: R/TunerCmaes.R (mlr_tuners_cmaes.Rd)
Subclass for Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Calls adagio::pureCMAES() from package adagio.
Control Parameters
start_values
character(1)
Create random start values or use the center of the search space? In the latter case, the center of the parameters before a trafo is applied is used. See the snippet after this list for an example.
For the meaning of the remaining control parameters, see adagio::pureCMAES().
Note that we have removed all control parameters that refer to the termination of the algorithm; the same behavior can be achieved with a Terminator.
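For example, a minimal sketch of setting this control parameter via the tnr() sugar function:

# construct the tuner with start values at the center of the search space
tuner = tnr("cmaes", start_values = "center")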
Progress Bars
$optimize()
supports progress bars via the package progressr
combined with a Terminator. Simply wrap the tuning call in progressr::with_progress() to enable them. We recommend using the package progress as the backend; enable it with progressr::handlers("progress").
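For example, a minimal sketch, assuming the learner and tuning setup from the Examples section below:

library(progressr)
# use the progress package as the progress bar backend
handlers("progress")
# wrap the tuning call to display a progress bar driven by the Terminator
with_progress({
  instance = tune(
    tuner = tnr("cmaes"),
    task = tsk("penguins"),
    learner = learner,
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    term_evals = 10
  )
})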
Logging
All Tuners use a logger (as implemented in lgr) from package
bbotk.
Use lgr::get_logger("bbotk")
to access and control the logger.
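For example, to reduce the verbosity of the tuning log to warnings only:

lgr::get_logger("bbotk")$set_threshold("warn")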
Optimizer
This Tuner is based on bbotk::OptimizerCmaes which can be applied on any black box optimization problem. See also the documentation of bbotk.
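For example, a minimal sketch constructing the underlying optimizer directly via the bbotk sugar function opt():

# the same CMA-ES optimizer, usable on any bbotk OptimInstance
optimizer = bbotk::opt("cmaes")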
Resources
There are several sections about hyperparameter optimization in the mlr3book.
Learn more about tuners.
The gallery features a collection of case studies and demos about optimization.
Use the Hyperband optimizer with different budget parameters.
Super classes
mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerCmaes
Examples
# Hyperparameter Optimization
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE),
  minsplit = to_tune(p_dbl(2, 128, trafo = as.integer)),
  minbucket = to_tune(p_dbl(1, 64, trafo = as.integer))
)

# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("cmaes"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)
# best performing hyperparameter configuration
instance$result
#> cp minsplit minbucket learner_param_vals x_domain classif.ce
#> 1: -6.755619 2 1.623453 <list[4]> <list[3]> 0.0173913
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#> cp minsplit minbucket classif.ce x_domain_cp x_domain_minsplit
#> 1: -7.871627 66.71059 11.843669 0.03478261 0.0003814133 66
#> 2: -6.755619 2.00000 1.623453 0.01739130 0.0011643195 2
#> 3: -2.302585 128.00000 1.000000 0.03478261 0.1000000000 128
#> 4: -3.421345 128.00000 16.450970 0.03478261 0.0326684756 128
#> 5: -9.210340 78.97977 1.000000 0.02608696 0.0001000000 78
#> 6: -9.210340 39.27754 1.000000 0.01739130 0.0001000000 39
#> 7: -5.299497 2.00000 23.805251 0.03478261 0.0049941030 2
#> 8: -6.788993 84.96175 1.000000 0.02608696 0.0011261022 84
#> 9: -6.601750 52.27690 1.000000 0.02608696 0.0013579899 52
#> 10: -9.210340 2.00000 1.777742 0.01739130 0.0001000000 2
#> x_domain_minbucket runtime_learners timestamp batch_nr warnings
#> 1: 11 0.006 2023-11-28 14:29:33 1 0
#> 2: 1 0.007 2023-11-28 14:29:34 2 0
#> 3: 1 0.007 2023-11-28 14:29:34 3 0
#> 4: 16 0.007 2023-11-28 14:29:34 4 0
#> 5: 1 0.006 2023-11-28 14:29:34 5 0
#> 6: 1 0.005 2023-11-28 14:29:34 6 0
#> 7: 23 0.007 2023-11-28 14:29:34 7 0
#> 8: 1 0.006 2023-11-28 14:29:34 8 0
#> 9: 1 0.005 2023-11-28 14:29:34 9 0
#> 10: 1 0.007 2023-11-28 14:29:34 10 0
#> errors resample_result
#> 1: 0 <ResampleResult[21]>
#> 2: 0 <ResampleResult[21]>
#> 3: 0 <ResampleResult[21]>
#> 4: 0 <ResampleResult[21]>
#> 5: 0 <ResampleResult[21]>
#> 6: 0 <ResampleResult[21]>
#> 7: 0 <ResampleResult[21]>
#> 8: 0 <ResampleResult[21]>
#> 9: 0 <ResampleResult[21]>
#> 10: 0 <ResampleResult[21]>
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))
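The trained learner can then make predictions; a minimal sketch, predicting on the training task for illustration only:

# predict and score on the same task (for illustration only)
predictions = learner$predict(tsk("penguins"))
predictions$score(msr("classif.acc"))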