Function to tune a mlr3::Learner. The function internally creates a TuningInstanceSingleCrit or TuningInstanceMultiCrit which describes the tuning problem. It executes the tuning with the Tuner (method) and returns the result with the tuning instance ($result). The ArchiveTuning ($archive) stores all evaluated hyperparameter configurations and performance scores.

Usage

tune(
  method,
  task,
  learner,
  resampling,
  measures = NULL,
  term_evals = NULL,
  term_time = NULL,
  terminator = NULL,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  allow_hotstart = FALSE,
  keep_hotstart_stack = FALSE,
  evaluate_default = FALSE,
  ...
)

Arguments

method

(character(1) | Tuner)
Key to retrieve the tuner from the mlr_tuners dictionary, or a Tuner object.

task

(mlr3::Task)
Task to operate on.

learner

(mlr3::Learner)
Learner to tune.

resampling

(mlr3::Resampling)
Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners may change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.

measures

(mlr3::Measure or list of mlr3::Measure)
A single measure creates a TuningInstanceSingleCrit and multiple measures a TuningInstanceMultiCrit. If NULL, the default measure is used.

term_evals

(integer(1))
Number of allowed evaluations.

term_time

(integer(1))
Maximum allowed time in seconds.

terminator

(Terminator)
Stop criterion of the tuning process.

search_space

(paradox::ParamSet)
Hyperparameter search space. If NULL (default), the search space is constructed from the TuneToken of the learner's parameter set (learner$param_set).

store_benchmark_result

(logical(1))
If TRUE (default), store resample result of evaluated hyperparameter configurations in archive as mlr3::BenchmarkResult.

store_models

(logical(1))
If TRUE, fitted models are stored in the benchmark result (archive$benchmark_result). If store_benchmark_result = FALSE, models are only stored temporarily and not accessible after the tuning. This combination is needed for measures that require a model.

check_values

(logical(1))
If TRUE, hyperparameter values are checked before evaluation and performance scores after. If FALSE (default), values are unchecked but computational overhead is reduced.

allow_hotstart

(logical(1))
Allow hotstarting of learners with previously fitted models. See also mlr3::HotstartStack. The learner must support hotstarting. Sets store_models = TRUE.

keep_hotstart_stack

(logical(1))
If TRUE, mlr3::HotstartStack is kept in $objective$hotstart_stack after tuning.

evaluate_default

(logical(1))
If TRUE, learner is evaluated with hyperparameters set to their default values at the start of the optimization.

...

(named list())
Named arguments to be set as parameters of the tuner.

Details

The mlr3::Task, mlr3::Learner, mlr3::Resampling, mlr3::Measure and Terminator are used to construct a TuningInstanceSingleCrit. If multiple performance Measures are supplied, a TuningInstanceMultiCrit is created. The parameters term_evals and term_time are shortcuts to create a Terminator. If both parameters are passed, a TerminatorCombo is constructed. For other Terminators, pass one with terminator. If no termination criterion is needed, set term_evals, term_time and terminator to NULL. The search space is created from paradox::TuneToken or is supplied via search_space.
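
As a sketch of the shortcut semantics (the budget values below are illustrative), passing both term_evals and term_time corresponds to an explicit combo terminator such as:

# term_evals = 25 together with term_time = 30 corresponds to:
terminator = trm("combo", terminators = list(
  trm("evals", n_evals = 25),  # stop after 25 evaluations ...
  trm("run_time", secs = 30)   # ... or after 30 seconds, whichever comes first
))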

Analysis

For analyzing the tuning results, it is recommended to pass the ArchiveTuning to as.data.table(). The returned data table is joined with the benchmark result which adds the mlr3::ResampleResult for each hyperparameter evaluation.

The archive provides various getters (e.g. $learners()) to ease access. All getters extract by position (i) or unique hash (uhash). For a complete list of all getters see the methods section.

The benchmark result ($benchmark_result) allows scoring the hyperparameter configurations again on a different measure. Alternatively, measures can be supplied to as.data.table().
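
A minimal sketch of these accessors, continuing the instance from the Examples section below (the additional classif.acc measure is illustrative):

# tabulate all evaluations, joined with the benchmark result
as.data.table(instance$archive)

# rescore the evaluated configurations with an additional measure
as.data.table(instance$archive, measures = msrs("classif.acc"))

# retrieve the learners of the first evaluation by position
instance$archive$learners(i = 1)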

The mlr3viz package provides visualizations for tuning results.
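
For instance, a minimal sketch (assuming mlr3viz is installed; autoplot() visualizes the archive of a tuning instance):

library(mlr3viz)

# default visualization of the tuning archive
autoplot(instance)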

Examples

# get learner and define search space
learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE))

# construct tuning instance
instance = ti(
  task = tsk("pima"),
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("run_time", secs = 10)
)

# get tuner
tuner = tnr("random_search", batch_size = 10)

# tune classification tree on pima data set
tuner$optimize(instance)
#>           cp learner_param_vals  x_domain classif.ce
#> 1: -5.109751          <list[2]> <list[1]>  0.2304688

# get result
instance$result
#>           cp learner_param_vals  x_domain classif.ce
#> 1: -5.109751          <list[2]> <list[1]>  0.2304688
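
For comparison, a sketch of the same search run through the tune() shortcut documented on this page; tuner settings such as batch_size are passed via ... (results vary between runs, so no output is shown):

# equivalent call via the tune() shortcut
instance = tune(
  method = "random_search",
  task = tsk("pima"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("run_time", secs = 10),
  batch_size = 10
)

# get result
instance$result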