Subclass for Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Calls adagio::pureCMAES() from package adagio.

Source

Hansen N (2016). “The CMA Evolution Strategy: A Tutorial.” arXiv:1604.00772.

Dictionary

This Tuner can be instantiated via the dictionary mlr_tuners or with the associated sugar function tnr():

TunerCmaes$new()
mlr_tuners$get("cmaes")
tnr("cmaes")

Parameters

sigma

numeric(1)
Initial step size.

start_values

character(1)
Create random start values, or start from the center of the search space? In the latter case, the center of the parameters before a trafo is applied is used.

For the meaning of the control parameters, see adagio::pureCMAES(). Note that we have removed all control parameters that refer to the termination of the algorithm; the same behavior can be obtained with a Terminator.
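
For illustration, both parameters can be set when constructing the tuner via tnr(); the concrete values below are arbitrary, not recommendations:

# construct the tuner with custom control parameters
tuner = tnr("cmaes", sigma = 0.5, start_values = "center")

# inspect the resulting settings
tuner$param_set$values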

Progress Bars

$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the call in progressr::with_progress() to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
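
A minimal sketch, assuming a tuner and a tuning instance (e.g. a TuningInstanceSingleCrit) have already been constructed:

library(progressr)

# use package progress as backend for the progress bar
handlers("progress")

# wrap the optimization call to display a progress bar
with_progress(tuner$optimize(instance))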

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
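
For example, to suppress all log messages from the tuning process below the "warn" level:

# access the logger from package bbotk
logger = lgr::get_logger("bbotk")

# only log messages with level "warn" or higher
logger$set_threshold("warn")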

Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerCmaes

Methods

Public methods

Method new()

Creates a new instance of this R6 class.

Usage

TunerCmaes$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

TunerCmaes$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
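
A minimal usage sketch, assuming a tuner created via TunerCmaes$new():

tuner = TunerCmaes$new()

# create an independent copy of the tuner
tuner2 = tuner$clone(deep = TRUE)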

Examples

library(data.table)

# retrieve task
task = tsk("pima")

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE),
  minsplit = to_tune(p_dbl(2, 128, trafo = as.integer)),
  minbucket = to_tune(p_dbl(1, 64, trafo = as.integer))
)

# hyperparameter tuning on the pima indians diabetes data set
instance = tune(
  method = "cmaes",
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10)

# best performing hyperparameter configuration
instance$result
#>          cp minsplit minbucket learner_param_vals  x_domain classif.ce
#> 1: -9.21034        2  46.62247          <list[4]> <list[3]>  0.2226562
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#>            cp  minsplit minbucket classif.ce  x_domain_cp x_domain_minsplit
#>  1: -9.210340 128.00000  30.73779  0.2929688 0.0001000000               128
#>  2: -7.047217 111.00733  38.21590  0.2343750 0.0008698260               111
#>  3: -8.779414 128.00000  57.35737  0.2929688 0.0001538682               128
#>  4: -3.511509 108.14531  64.00000  0.2929688 0.0298518308               108
#>  5: -9.210340   2.00000  46.62247  0.2226562 0.0001000000                 2
#>  6: -9.210340   2.00000  48.16649  0.2226562 0.0001000000                 2
#>  7: -9.210340  28.75251  41.18789  0.2343750 0.0001000000                28
#>  8: -9.210340  38.86257  12.30461  0.2578125 0.0001000000                38
#>  9: -9.210340   2.00000  55.84191  0.2265625 0.0001000000                 2
#> 10: -9.210340  56.06075  43.25456  0.2343750 0.0001000000                56
#>     x_domain_minbucket runtime_learners           timestamp batch_nr
#>  1:                 30            0.011 2021-09-16 04:23:14        1
#>  2:                 38            0.011 2021-09-16 04:23:14        2
#>  3:                 57            0.012 2021-09-16 04:23:14        3
#>  4:                 64            0.012 2021-09-16 04:23:14        4
#>  5:                 46            0.014 2021-09-16 04:23:15        5
#>  6:                 48            0.013 2021-09-16 04:23:15        6
#>  7:                 41            0.010 2021-09-16 04:23:15        7
#>  8:                 12            0.011 2021-09-16 04:23:15        8
#>  9:                 55            0.010 2021-09-16 04:23:15        9
#> 10:                 43            0.012 2021-09-16 04:23:15       10
#>          resample_result
#>  1: <ResampleResult[20]>
#>  2: <ResampleResult[20]>
#>  3: <ResampleResult[20]>
#>  4: <ResampleResult[20]>
#>  5: <ResampleResult[20]>
#>  6: <ResampleResult[20]>
#>  7: <ResampleResult[20]>
#>  8: <ResampleResult[20]>
#>  9: <ResampleResult[20]>
#> 10: <ResampleResult[20]>
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)