The TuningInstanceSingleCrit specifies a tuning problem for Tuners. The function ti() creates a TuningInstanceSingleCrit and the function tune() creates an instance internally.

Details

The instance contains an ObjectiveTuning object that encodes the black box objective function a Tuner has to optimize. The instance allows the basic operation of querying the objective at design points ($eval_batch()). This operation is usually done by the Tuner. Evaluations of hyperparameter configurations are performed in batches by calling mlr3::benchmark() internally. The evaluated hyperparameter configurations are stored in the Archive ($archive). Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on. The tuner is also expected to store its final result, consisting of a selected hyperparameter configuration and associated estimated performance values, by calling the method instance$assign_result().
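
The following minimal sketch (assuming the penguins task and the rpart learner, as in the examples below) shows $eval_batch() being called manually; usually a Tuner performs these calls:

# construct an instance with a small evaluation budget
instance = ti(
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10)
)

# evaluate two configurations; values are given on the log-scaled search space
instance$eval_batch(data.table::data.table(cp = log(c(0.001, 0.01))))

# the evaluated configurations and their scores are appended to the archive
instance$archive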

Default Measures

If no measure is passed, the default measure is used. The default measure depends on the task type.

Task           Default Measure    Package
"classif"      "classif.ce"       mlr3
"regr"         "regr.mse"         mlr3
"surv"         "surv.cindex"      mlr3proba
"dens"         "dens.logloss"     mlr3proba
"classif_st"   "classif.ce"       mlr3spatial
"regr_st"      "regr.mse"         mlr3spatial
"clust"        "clust.dunn"       mlr3cluster

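As a minimal sketch, constructing an instance for a classification task without a measure falls back to "classif.ce":

# no measure given: the default "classif.ce" is used for the classification task
instance = ti(
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1)),
  resampling = rsmp("holdout"),
  terminator = trm("evals", n_evals = 5)
)

# the objective stores the measure that is optimized
instance$objective$measures
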
Resources

There are several sections about hyperparameter optimization in the mlr3book.

The gallery features a collection of case studies and demos about optimization.

Extension Packages

mlr3tuning is extended by the following packages.

  • mlr3tuningspaces is a collection of search spaces from scientific articles for commonly used learners.

  • mlr3hyperband adds the Hyperband and Successive Halving algorithms.

  • mlr3mbo adds Bayesian optimization methods.

Analysis

To analyze the tuning results, it is recommended to pass the ArchiveTuning to as.data.table(). The returned data table is joined with the benchmark result, which adds the mlr3::ResampleResult for each hyperparameter evaluation.

The archive provides various getters (e.g. $learners()) to ease access. All getters extract by position (i) or unique hash (uhash). For a complete list of all getters, see the methods section.
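
For example, the following sketch (assuming the tuned instance from the examples below) accesses single evaluations through the getters:

# learners of the first evaluated configuration
instance$archive$learners(i = 1)

# resample result of the first evaluated configuration
instance$archive$resample_result(i = 1)

# best configuration found so far
instance$archive$best()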

The benchmark result ($benchmark_result) allows scoring the hyperparameter configurations again on a different measure. Alternatively, measures can be supplied to as.data.table().
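
A short sketch of both options, assuming the tuned instance from the examples below:

# add further performance measures when converting the archive
as.data.table(instance$archive, measures = msrs(c("classif.acc", "classif.bacc")))

# or score the stored benchmark result directly
instance$archive$benchmark_result$score(msr("classif.acc"))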

The mlr3viz package provides visualizations for tuning results.
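
For example, assuming mlr3viz is installed, the performance over the batches can be plotted:

library(mlr3viz)

# best performance of each batch
autoplot(instance, type = "performance")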

Super classes

bbotk::OptimInstance -> bbotk::OptimInstanceSingleCrit -> TuningInstanceSingleCrit

Active bindings

result_learner_param_vals

(list())
Param values for the optimal learner call.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

TuningInstanceSingleCrit$new(
  task,
  learner,
  resampling,
  measure = NULL,
  terminator,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  allow_hotstart = FALSE,
  keep_hotstart_stack = FALSE,
  evaluate_default = FALSE,
  callbacks = list()
)

Arguments

task

(mlr3::Task)
Task to operate on.

learner

(mlr3::Learner)
Learner to tune.

resampling

(mlr3::Resampling)
Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners may change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed during construction.

measure

(mlr3::Measure)
Measure to optimize. If NULL, the default measure is used.

terminator

(Terminator)
Stop criterion of the tuning process.

search_space

(paradox::ParamSet)
Hyperparameter search space. If NULL (default), the search space is constructed from the TuneToken of the learner's parameter set (learner$param_set). An explicitly defined search space is shown in the sketch after this argument list.

store_benchmark_result

(logical(1))
If TRUE (default), the resample results of the evaluated hyperparameter configurations are stored in the archive as a mlr3::BenchmarkResult.

store_models

(logical(1))
If TRUE, fitted models are stored in the benchmark result (archive$benchmark_result). If store_benchmark_result = FALSE, models are only stored temporarily and not accessible after the tuning. This combination is needed for measures that require a model.

check_values

(logical(1))
If TRUE, hyperparameter values are checked before evaluation and performance scores after. If FALSE (default), values are not checked, which reduces computational overhead.

allow_hotstart

(logical(1))
Allows hotstarting of learners with previously fitted models; see also mlr3::HotstartStack. The learner must support hotstarting. Setting this option to TRUE also sets store_models = TRUE.

keep_hotstart_stack

(logical(1))
If TRUE, mlr3::HotstartStack is kept in $objective$hotstart_stack after tuning.

evaluate_default

(logical(1))
If TRUE, the learner is evaluated with its hyperparameters set to their default values at the start of the optimization.

callbacks

(list of CallbackTuning)
List of callbacks.
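
A minimal construction sketch, assuming the search space is defined explicitly via paradox::ps() instead of via TuneToken, and the default configuration of the learner (cp = 0.01 for rpart) is evaluated first:

library(paradox)

# explicit search space for the complexity parameter of rpart
search_space = ps(
  cp = p_dbl(lower = 1e-04, upper = 0.1)
)

instance = TuningInstanceSingleCrit$new(
  task = tsk("penguins"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10),
  search_space = search_space,
  evaluate_default = TRUE,
  store_models = TRUE
)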


Method assign_result()

The Tuner object writes the best found point and estimated performance value here. For internal use.

Usage

TuningInstanceSingleCrit$assign_result(xdt, y, learner_param_vals = NULL)

Arguments

xdt

(data.table::data.table())
Hyperparameter values as data.table::data.table(). Each row is one configuration. Contains values in the search space. Can contain additional columns for extra information.

y

(numeric(1))
Optimal outcome.

learner_param_vals

(List of named list()s)
Fixed parameter values of the learner that are not part of the search space.


Method clone()

The objects of this class are cloneable with this method.

Usage

TuningInstanceSingleCrit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Hyperparameter optimization on the Palmer Penguins data set
task = tsk("penguins")

# Load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)

# Construct tuning instance
instance = ti(
  task = task,
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 4)
)

# Choose optimization algorithm
tuner = tnr("random_search", batch_size = 2)

# Run tuning
tuner$optimize(instance)
#>           cp learner_param_vals  x_domain classif.ce
#>        <num>             <list>    <list>      <num>
#> 1: -7.564052          <list[2]> <list[1]>  0.0726926

# Set optimal hyperparameter configuration to learner
learner$param_set$values = instance$result_learner_param_vals

# Train the learner on the full data set
learner$train(task)

# Inspect all evaluated configurations
as.data.table(instance$archive)
#>           cp classif.ce  x_domain_cp runtime_learners           timestamp
#>        <num>      <num>        <num>            <num>              <POSc>
#> 1: -7.564052  0.0726926 0.0005187690            0.014 2024-03-06 08:51:13
#> 2: -7.420301  0.0726926 0.0005989688            0.018 2024-03-06 08:51:13
#> 3: -4.928764  0.0726926 0.0072354403            0.016 2024-03-06 08:51:13
#> 4: -7.684273  0.0726926 0.0004600050            0.015 2024-03-06 08:51:13
#>    batch_nr warnings errors  resample_result
#>       <int>    <int>  <int>           <list>
#> 1:        1        0      0 <ResampleResult>
#> 2:        1        0      0 <ResampleResult>
#> 3:        2        0      0 <ResampleResult>
#> 4:        2        0      0 <ResampleResult>