Specifies a general single-criterion tuning scenario, including the objective function and an archive for Tuners to act upon. This class stores an ObjectiveTuning object that encodes the black box objective function which a Tuner has to optimize. It allows the basic operations of querying the objective at design points ($eval_batch()), storing the evaluations in the internal ArchiveTuning, and accessing the final result ($result).

Evaluations of hyperparameter configurations are performed in batches by calling mlr3::benchmark() internally. Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.
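
For illustration, a minimal sketch of guarding a batch evaluation against an exhausted budget; the condition class terminated_error is assumed here from bbotk:

# sketch: once the terminator reports no remaining budget, eval_batch()
# raises an exception instead of evaluating further points
# (terminated_error is assumed to be the condition class used by bbotk)
tryCatch(
  instance$eval_batch(design),
  terminated_error = function(cond) message("Budget exhausted, stopping.")
)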

The tuner is also supposed to store its final result, consisting of a selected hyperparameter configuration and associated estimated performance values, by calling the method instance$assign_result.
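
In normal use this happens behind the scenes: a Tuner runs the optimization loop and writes the result, which can then be read from the instance. A minimal sketch, assuming an instance like the one constructed in the Examples section below:

# sketch: the tuner proposes and evaluates configurations, then calls
# instance$assign_result() internally once the terminator fires
tuner = tnr("random_search", batch_size = 2)
tuner$optimize(instance)

# selected hyperparameter configuration and its estimated performance
instance$result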

Super classes

bbotk::OptimInstance -> bbotk::OptimInstanceSingleCrit -> TuningInstanceSingleCrit

Active bindings

result_learner_param_vals

(list())
Param values for the optimal learner call.
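
For example, the tuned values can be transferred to a fresh learner for a final fit on the full task; a sketch assuming a finished tuning run on tsk("iris"):

# sketch: apply the optimal hyperparameters and train the final model
learner = lrn("classif.rpart")
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("iris"))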

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

This defines the resampled performance of a learner on a task, a feasibility region for the parameters the tuner is supposed to optimize, and a termination criterion.

Usage

TuningInstanceSingleCrit$new(
  task,
  learner,
  resampling,
  measure,
  terminator,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  allow_hotstart = FALSE
)

Arguments

task

(mlr3::Task)
Task to operate on.

learner

(mlr3::Learner)
Learner to tune.

resampling

(mlr3::Resampling)
Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners may change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.

measure

(mlr3::Measure)
Measure to optimize.

terminator

(Terminator)
Stop criterion of the tuning process.

search_space

(paradox::ParamSet)
Hyperparameter search space. If NULL (default), the search space is constructed from the TuneToken of the learner's parameter set (learner$param_set); see the sketch after this argument list.

store_benchmark_result

(logical(1))
If TRUE (default), store the resample results of the evaluated hyperparameter configurations in the archive as an mlr3::BenchmarkResult.

store_models

(logical(1))
If TRUE, fitted models are stored in the benchmark result (archive$benchmark_result). If store_benchmark_result = FALSE, models are only stored temporarily and are not accessible after tuning. This combination is needed for measures that require a model.

check_values

(logical(1))
If TRUE, hyperparameter values are checked before evaluation and performance scores after. If FALSE (default), values are unchecked, which reduces computational overhead.

allow_hotstart

(logical(1))
Allows hotstarting of learners with previously fitted models. See also mlr3::HotstartStack. The learner must support hotstarting. Setting this to TRUE also sets store_models = TRUE.
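
As an alternative to passing search_space explicitly, the space can be encoded directly in the learner via TuneToken, as mentioned above; a minimal sketch:

# sketch: tag hyperparameters with to_tune() so that the search space is
# constructed automatically when search_space = NULL
learner = lrn("classif.rpart",
  cp = to_tune(1e-3, 0.1),
  minsplit = to_tune(1, 10)
)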


Method assign_result()

The Tuner object writes the best found point and estimated performance value here. For internal use.

Usage

TuningInstanceSingleCrit$assign_result(xdt, y, learner_param_vals = NULL)

Arguments

xdt

(data.table::data.table())
Hyperparameter values as data.table::data.table(). Each row is one configuration. Contains values in the search space. Can contain additional columns for extra information.

y

(numeric(1))
Optimal outcome.

learner_param_vals

(List of named list()s)
Fixed parameter values of the learner that are not part of the search space.


Method clone()

The objects of this class are cloneable with this method.

Usage

TuningInstanceSingleCrit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

library(mlr3tuning)
library(data.table)

# define search space
search_space = ps(
  cp = p_dbl(lower = 0.001, upper = 0.1),
  minsplit = p_int(lower = 1, upper = 10)
)

# initialize instance
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)

# generate design
design = data.table(cp = c(0.05, 0.01), minsplit = c(5, 3))

# eval design
instance$eval_batch(design)

# show archive
instance$archive
#> <ArchiveTuning>
#>      cp minsplit classif.ce runtime_learners           timestamp batch_nr
#> 1: 0.05        5       0.08            0.012 2022-01-23 04:26:29        1
#> 2: 0.01        3       0.06            0.010 2022-01-23 04:26:29        1
#>         resample_result
#> 1: <ResampleResult[22]>
#> 2: <ResampleResult[22]>

### error handling

# get a learner which breaks with 50% probability
# set encapsulation + fallback
learner = lrn("classif.debug", error_train = 0.5)
learner$encapsulate = c(train = "evaluate", predict = "evaluate")
learner$fallback = lrn("classif.featureless")

# define search space
search_space = ps(
  x = p_dbl(lower = 0, upper = 1)
)

instance = TuningInstanceSingleCrit$new(
  task = tsk("wine"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)

instance$eval_batch(data.table(x = 1:5 / 5))
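
# despite the induced failures, all five configurations receive a score:
# encapsulation catches the training errors and the fallback learner
# provides predictions for the failed resampling iterations
instance$archive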