
Subclass for generalized simulated annealing tuning, calling GenSA::GenSA() from package GenSA.

In contrast to the GenSA::GenSA() defaults, we set smooth = FALSE as a default.

Source

Tsallis C, Stariolo DA (1996). “Generalized simulated annealing.” Physica A: Statistical Mechanics and its Applications, 233(1-2), 395-406. doi:10.1016/s0378-4371(96)00271-3.

Xiang Y, Gubian S, Suomela B, Hoeng J (2013). “Generalized Simulated Annealing for Global Optimization: The GenSA Package.” The R Journal, 5(1), 13. doi:10.32614/rj-2013-002.

Dictionary

This Tuner can be instantiated via the dictionary mlr_tuners or with the associated sugar function tnr():

TunerGenSA$new()
mlr_tuners$get("gensa")
tnr("gensa")

Parallelization

In order to support general termination criteria and parallelization, we evaluate points in batches of size batch_size. Larger batches allow more parallelization, while smaller batches imply more fine-grained checking of termination criteria. A batch consists of batch_size times resampling$iters jobs. For example, if you set a batch size of 10 points and use 5-fold cross-validation, you can utilize up to 50 cores.

Parallelization is supported via package future (see mlr3::benchmark()'s section on parallelization for more details).
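For example, selecting a future plan before tuning is all that is needed; the plan and worker count below are illustrative.

library(future)

# Evaluate the resampling iterations of each batch on 4 local worker
# processes; any other future plan works as well
plan("multisession", workers = 4)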

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
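For example, to show only warnings and errors during tuning, using lgr's standard threshold mechanism:

# Reduce the verbosity of the tuning log
lgr::get_logger("bbotk")$set_threshold("warn")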

Optimizer

This Tuner is based on bbotk::OptimizerGenSA, which can be applied to any black box optimization problem. See also the documentation of bbotk.
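As a hedged sketch of what this means in practice, the optimizer can be run on a plain objective function through bbotk alone. The toy objective and evaluation budget below are illustrative, and class names may differ slightly across bbotk versions.

library(bbotk)
library(paradox)

# Toy single-criterion objective: minimize (x - 2)^2 over [-10, 10]
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = (xs$x - 2)^2),
  domain = ps(x = p_dbl(lower = -10, upper = 10)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# Stop after 20 evaluations; the budget is illustrative
instance = OptimInstanceSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 20)
)

# Run generalized simulated annealing on the black box problem
opt("gensa")$optimize(instance)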

Parameters

smooth: logical(1)
temperature: numeric(1)
acceptance.param: numeric(1)
verbose: logical(1)
trace.mat: logical(1)

For the meaning of the control parameters, see GenSA::GenSA(). Note that we have removed all control parameters that refer to the termination of the algorithm, as the same behavior can be obtained with our Terminators.
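For example, control parameters can be set at construction time via the sugar function; the values below are illustrative.

# Construct the tuner with explicit control parameters
tuner = tnr("gensa", smooth = FALSE, verbose = TRUE)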

Progress Bars

$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function call in progressr::with_progress() to enable them. We recommend using the package progress as the backend; enable it with progressr::handlers("progress"), as sketched below.
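A minimal sketch of this pattern, assuming a tuner and a tuning instance have already been created:

# Enable the progress backend, then wrap the optimization call
progressr::handlers("progress")
progressr::with_progress(tuner$optimize(instance))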

Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerGenSA

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

TunerGenSA$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

TunerGenSA$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# retrieve task
task = tsk("pima")

# load learner and set search space
learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE))

# hyperparameter tuning on the Pima Indians Diabetes data set
instance = tune(
  method = "gensa",
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)
#> Warning: one-dimensional optimization by Nelder-Mead is unreliable:
#> use "Brent" or optimize() directly

# best performing hyperparameter configuration
instance$result
#>           cp learner_param_vals  x_domain classif.ce
#> 1: -4.339531          <list[2]> <list[1]>  0.2617188

# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#>            cp classif.ce x_domain_cp runtime_learners           timestamp
#>  1: -2.491578  0.2773438 0.082779263            0.013 2022-08-25 11:28:17
#>  2: -6.529005  0.3007812 0.001460458            0.015 2022-08-25 11:28:17
#>  3: -4.339531  0.2617188 0.013042650            0.015 2022-08-25 11:28:17
#>  4: -4.339531  0.2617188 0.013042650            0.013 2022-08-25 11:28:17
#>  5: -4.339531  0.2617188 0.013042650            0.015 2022-08-25 11:28:18
#>  6: -4.339531  0.2617188 0.013042650            0.014 2022-08-25 11:28:18
#>  7: -3.905577  0.2656250 0.020129327            0.015 2022-08-25 11:28:18
#>  8: -4.773484  0.2617188 0.008450890            0.013 2022-08-25 11:28:18
#>  9: -4.556507  0.2617188 0.010498666            0.015 2022-08-25 11:28:18
#> 10: -4.122554  0.2656250 0.016203079            0.013 2022-08-25 11:28:18
#>     batch_nr warnings errors      resample_result
#>  1:        1        0      0 <ResampleResult[21]>
#>  2:        2        0      0 <ResampleResult[21]>
#>  3:        3        0      0 <ResampleResult[21]>
#>  4:        4        0      0 <ResampleResult[21]>
#>  5:        5        0      0 <ResampleResult[21]>
#>  6:        6        0      0 <ResampleResult[21]>
#>  7:        7        0      0 <ResampleResult[21]>
#>  8:        8        0      0 <ResampleResult[21]>
#>  9:        9        0      0 <ResampleResult[21]>
#> 10:       10        0      0 <ResampleResult[21]>

# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)