Hyperparameter Tuning with Covariance Matrix Adaptation Evolution Strategy
Source: R/TunerBatchCmaes.R
mlr_tuners_cmaes.Rd
Subclass for Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Calls adagio::pureCMAES() from package adagio.
Control Parameters
start_values
character(1)
Create random start values or base them on the center of the search space? In the latter case, it is the center of the parameters before a trafo is applied.
For the meaning of the control parameters, see adagio::pureCMAES().
Note that we have removed all control parameters that refer to the termination of the algorithm, since our Terminators allow obtaining the same behavior.
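As a minimal sketch, the tuner can be constructed with start values placed at the center of the search space (the documented alternative is "random"); the tnr() shorthand is from mlr3tuning:
library(mlr3tuning)
# place start values at the center of the search space
# (before any trafo is applied)
tuner = tnr("cmaes", start_values = "center")
tuner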
Progress Bars
$optimize() supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable it with progressr::handlers("progress").
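A minimal sketch, assuming the learner and tuning setup from the Examples section below:
library(progressr)
# use the progress package as the backend for progress bars
handlers("progress")
# wrap the tuning call so the terminator drives a progress bar
with_progress({
  instance = tune(
    tuner = tnr("cmaes"),
    task = tsk("penguins"),
    learner = learner,
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    term_evals = 10
  )
})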
Logging
All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
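For instance, to silence informational output during tuning (set_threshold() is part of the lgr API):
# fetch the logger shared by bbotk and mlr3tuning
logger = lgr::get_logger("bbotk")
# only print warnings and errors
logger$set_threshold("warn")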
Optimizer
This Tuner is based on bbotk::OptimizerBatchCmaes, which can be applied to any black box optimization problem. See also the documentation of bbotk.
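A sketch of constructing the underlying optimizer directly, assuming bbotk's opt() shorthand with its "cmaes" key:
library(bbotk)
# the tuner wraps this batch optimizer, which also works on
# plain black box optimization instances
optimizer = opt("cmaes")
optimizer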
Resources
There are several sections about hyperparameter optimization in the mlr3book. The gallery features a collection of case studies and demos about optimization, for example on using the Hyperband optimizer with different budget parameters.
Super classes
mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchCmaes
Examples
# Hyperparameter Optimization
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE),
  minsplit = to_tune(p_dbl(2, 128, trafo = as.integer)),
  minbucket = to_tune(p_dbl(1, 64, trafo = as.integer))
)
# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("cmaes"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)
# best performing hyperparameter configuration
instance$result
#> cp minbucket minsplit learner_param_vals x_domain classif.ce
#> <num> <num> <num> <list> <list> <num>
#> 1: -4.044809 1 2 <list[4]> <list[3]> 0.06086957
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#> cp minbucket minsplit classif.ce x_domain_cp x_domain_minbucket
#> <num> <num> <num> <num> <num> <int>
#> 1: -2.302585 1.000000 2.00000 0.07826087 0.100000000 1
#> 2: -4.873326 1.000000 2.00000 0.06956522 0.007647886 1
#> 3: -2.302585 33.716295 128.00000 0.07826087 0.100000000 33
#> 4: -2.302585 64.000000 128.00000 0.16521739 0.100000000 64
#> 5: -2.302585 7.299205 69.73445 0.07826087 0.100000000 7
#> 6: -2.302585 26.503901 33.07759 0.07826087 0.100000000 26
#> 7: -2.302585 1.000000 80.52637 0.07826087 0.100000000 1
#> 8: -5.708454 18.462738 2.00000 0.07826087 0.003317798 18
#> 9: -4.044809 1.000000 2.00000 0.06086957 0.017513057 1
#> 10: -4.204426 1.000000 2.00000 0.06086957 0.014929359 1
#> x_domain_minsplit runtime_learners timestamp batch_nr warnings
#> <int> <num> <POSc> <int> <int>
#> 1: 2 0.010 2024-06-30 09:41:22 1 0
#> 2: 2 0.010 2024-06-30 09:41:22 2 0
#> 3: 128 0.010 2024-06-30 09:41:22 3 0
#> 4: 128 0.010 2024-06-30 09:41:22 4 0
#> 5: 69 0.010 2024-06-30 09:41:22 5 0
#> 6: 33 0.010 2024-06-30 09:41:22 6 0
#> 7: 80 0.009 2024-06-30 09:41:22 7 0
#> 8: 2 0.013 2024-06-30 09:41:22 8 0
#> 9: 2 0.009 2024-06-30 09:41:22 9 0
#> 10: 2 0.009 2024-06-30 09:41:22 10 0
#> errors resample_result
#> <int> <list>
#> 1: 0 <ResampleResult>
#> 2: 0 <ResampleResult>
#> 3: 0 <ResampleResult>
#> 4: 0 <ResampleResult>
#> 5: 0 <ResampleResult>
#> 6: 0 <ResampleResult>
#> 7: 0 <ResampleResult>
#> 8: 0 <ResampleResult>
#> 9: 0 <ResampleResult>
#> 10: 0 <ResampleResult>
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))