The AutoTuner is an mlr3::Learner that wraps another mlr3::Learner and performs the following steps during $train():

  1. The hyperparameters of the wrapped (inner) learner are tuned on the training data via resampling. The tuning is specified by providing a Tuner, a bbotk::Terminator, a search space as a paradox::ParamSet, an mlr3::Resampling and an mlr3::Measure.

  2. The best-found hyperparameter configuration is set as the hyperparameters of the wrapped (inner) learner.

  3. A final model is fit on the complete training data using the now parametrized wrapped learner.

During $predict() the AutoTuner just calls the predict method of the wrapped (inner) learner.
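
For illustration, a minimal sketch of the resulting train/predict flow; the construction mirrors the Examples section below, and the AutoTuner is used like any other mlr3::Learner:

library(mlr3)
library(mlr3tuning)
library(paradox)
at = AutoTuner$new(
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ParamSet$new(
    params = list(ParamDbl$new("cp", lower = 0.001, upper = 0.1))),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("grid_search")
)
task = tsk("iris")
at$train(task)    # steps 1-3: tune, set the best configuration, refit
at$predict(task)  # delegates to the tuned and refitted inner learner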

Note that this approach allows nested resampling to be performed by passing an AutoTuner object to mlr3::resample() or mlr3::benchmark(). To access the inner resampling results, set store_tuning_instance = TRUE and execute mlr3::resample() or mlr3::benchmark() with store_models = TRUE (see examples).

Super class

mlr3::Learner -> AutoTuner

Public fields

instance_args

(list())
All construction arguments, used to create the TuningInstanceSingleCrit.

tuner

(Tuner)
Tuning algorithm to run, as specified during construction.

Active bindings

archive

(ArchiveTuning)
Archive of the TuningInstanceSingleCrit.

learner

(mlr3::Learner)
Trained learner.

tuning_instance

(TuningInstanceSingleCrit)
Internally created tuning instance with all intermediate results.

tuning_result

(named list())
Shortcut to the result of the TuningInstanceSingleCrit.

param_set

(paradox::ParamSet)
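
A short sketch of how these bindings might be inspected; this assumes an AutoTuner at that has been trained as in the Examples section below:

# assumes `at` is a trained AutoTuner (see Examples)
at$tuning_result           # named list with the best configuration found
at$learner                 # inner learner, refitted with the tuned values
at$tuning_instance         # TuningInstanceSingleCrit, if it was stored
as.data.table(at$archive)  # one row per evaluated configuration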

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage

AutoTuner$new(
  learner,
  resampling,
  measure,
  search_space,
  terminator,
  tuner,
  store_tuning_instance = TRUE,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE
)

Arguments

learner

(mlr3::Learner)
Learner to tune, see TuningInstanceSingleCrit.

resampling

(mlr3::Resampling)
Resampling strategy during tuning, see TuningInstanceSingleCrit. This mlr3::Resampling is meant to be the inner resampling, operating on the training set of an arbitrary outer resampling. For this reason it is not feasible to pass an instantiated mlr3::Resampling here.

measure

(mlr3::Measure)
Performance measure to optimize.

search_space

(paradox::ParamSet)
Hyperparameter search space, see TuningInstanceSingleCrit.

terminator

(bbotk::Terminator)
When to stop tuning, see TuningInstanceSingleCrit.

tuner

(Tuner)
Tuning algorithm to run.

store_tuning_instance

(logical(1))
If TRUE (default), stores the internally created TuningInstanceSingleCrit with all intermediate results in slot $tuning_instance.

store_benchmark_result

(logical(1))
Store benchmark result in archive?

store_models

(logical(1))
Store models in benchmark result?

check_values

(logical(1))
Should the parameters be checked for validity before evaluation, and the results be checked for validity afterwards?
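
A sketch of how the storage flags can be combined: disabling them reduces the memory footprint of the trained AutoTuner, at the cost of discarding the intermediate tuning information (argument names as in the Usage block above):

library(mlr3)
library(mlr3tuning)
library(paradox)
# memory-lean variant: keep only the final tuned learner
at = AutoTuner$new(
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ParamSet$new(
    params = list(ParamDbl$new("cp", lower = 0.001, upper = 0.1))),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("grid_search"),
  store_tuning_instance = FALSE,  # do not keep the tuning instance
  store_benchmark_result = FALSE, # do not keep resample results in the archive
  store_models = FALSE            # do not keep models fit during tuning
)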


Method clone()

The objects of this class are cloneable with this method.

Usage

AutoTuner$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

library(mlr3)
library(paradox)
task = tsk("iris")
learner = lrn("classif.rpart")
resampling = rsmp("holdout")
measure = msr("classif.ce")
search_space = ParamSet$new(
  params = list(ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
terminator = trm("evals", n_evals = 5)
tuner = tnr("grid_search")
at = AutoTuner$new(
  learner,
  resampling,
  measure,
  search_space,
  terminator,
  tuner,
  store_tuning_instance = TRUE)
at$train(task)
at$model
#> $learner
#> <LearnerClassifRpart:classif.rpart>
#> * Model: rpart
#> * Parameters: xval=0, cp=0.056
#> * Packages: rpart
#> * Predict Type: response
#> * Feature types: logical, integer, numeric, factor, ordered
#> * Properties: importance, missings, multiclass, selected_features,
#>   twoclass, weights
#> 
#> $tuning_instance
#> <TuningInstanceSingleCrit>
#> * State: Optimized
#> * Objective: <ObjectiveTuning:classif.rpart_on_iris>
#> * Search Space:
#> <ParamSet>
#>    id    class lower upper levels        default value
#> 1: cp ParamDbl 0.001   0.1        <NoDefault[3]>      
#> * Terminator: <TerminatorEvals>
#> * Terminated: TRUE
#> * Result:
#>       cp learner_param_vals  x_domain classif.ce
#> 1: 0.056          <list[2]> <list[1]>       0.06
#> * Archive:
#> <ArchiveTuning>
#>       cp classif.ce                                uhash  x_domain
#> 1: 0.056       0.06 74624df9-3a9c-42c8-a04c-89c31bdcdb85 <list[1]>
#> 2: 0.045       0.06 1e806354-6b1f-42ff-bd15-1ab5bca41c9b <list[1]>
#> 3: 0.023       0.06 3c879480-31a2-4dd7-b809-d0adc7496710 <list[1]>
#> 4: 0.012       0.06 827a0758-da12-4858-a88d-1eb17cf390b9 <list[1]>
#> 5: 0.089       0.06 ac953e90-4fce-4dfe-9561-c5f2d4bd9901 <list[1]>
#>              timestamp batch_nr
#> 1: 2020-09-28 04:30:37        1
#> 2: 2020-09-28 04:30:38        2
#> 3: 2020-09-28 04:30:38        3
#> 4: 2020-09-28 04:30:38        4
#> 5: 2020-09-28 04:30:38        5
at$learner
#> <LearnerClassifRpart:classif.rpart>
#> * Model: rpart
#> * Parameters: xval=0, cp=0.056
#> * Packages: rpart
#> * Predict Type: response
#> * Feature types: logical, integer, numeric, factor, ordered
#> * Properties: importance, missings, multiclass, selected_features,
#>   twoclass, weights
# Nested resampling
at = AutoTuner$new(learner, resampling, measure, search_space,
  terminator, tuner, store_tuning_instance = TRUE)
resampling_outer = rsmp("cv", folds = 2)
rr = resample(task, at, resampling_outer, store_models = TRUE)

# Aggregate performance of outer results
rr$aggregate()
#> classif.ce 
#> 0.05333333
# Retrieve inner tuning results.
as.data.table(rr)$learner[[1]]$tuning_result
#>       cp learner_param_vals  x_domain classif.ce
#> 1: 0.056          <list[2]> <list[1]>       0.08