Specifies a general multi-criteria tuning scenario, including objective
function and archive for Tuners to act upon. This class stores an
ObjectiveTuning object that encodes the black box objective function which
a Tuner has to optimize. It allows the basic operations of querying the
objective at design points ($eval_batch()), storing the evaluations in the
internal Archive, and accessing the final result ($result).
Evaluations of hyperparameter configurations are performed in batches by
calling mlr3::benchmark()
internally. Before a batch is evaluated, the
bbotk::Terminator is queried for the remaining budget. If the available
budget is exhausted, an exception is raised, and no further evaluations can
be performed from this point on.
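A minimal sketch of this behaviour, assuming mlr3tuning and data.table are attached; the second batch is hypothetical and only illustrates that an exhausted terminator stops further evaluations:
# minimal sketch: a budget of a single evaluation
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  search_space = ps(cp = p_dbl(lower = 0.001, upper = 0.1)),
  terminator = trm("evals", n_evals = 1)
)
instance$eval_batch(data.table(cp = 0.05))      # consumes the whole budget
try(instance$eval_batch(data.table(cp = 0.01))) # budget exhausted, raises an error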
The tuner is also supposed to store its final result, consisting of a
selected hyperparameter configuration and associated estimated performance
values, by calling the method instance$assign_result().
Super classes
bbotk::OptimInstance -> bbotk::OptimInstanceMultiCrit -> TuningInstanceMultiCrit
Active bindings
result_learner_param_vals
(list())
List of param values for the optimal learner call.
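For multi-criteria tuning, this binding holds one list of parameter values per result configuration. A hedged sketch, assuming an instance that has already been tuned (see the Examples below):
# apply the first result configuration to a fresh learner
learner = lrn("classif.rpart")
learner$param_set$values = instance$result_learner_param_vals[[1]]
learner$train(tsk("iris"))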
Methods
Method new()
Creates a new instance of this R6 class.
This defines the resampled performance of a learner on a task, a feasibility region for the parameters the tuner is supposed to optimize, and a termination criterion.
Usage
TuningInstanceMultiCrit$new(
  task,
  learner,
  resampling,
  measures,
  terminator,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  allow_hotstart = FALSE,
  keep_hotstart_stack = FALSE
)
Arguments
task
(mlr3::Task)
Task to operate on.
learner
(mlr3::Learner)
Learner to tune.
resampling
(mlr3::Resampling)
Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.
measures
(list of mlr3::Measure)
Measures to optimize.
terminator
(Terminator)
Stop criterion of the tuning process.
search_space
(paradox::ParamSet)
Hyperparameter search space. If NULL (default), the search space is constructed from the TuneToken of the learner's parameter set (learner$param_set); see the sketch after this list.
store_benchmark_result
(logical(1))
If TRUE (default), store the resample results of evaluated hyperparameter configurations in the archive as a mlr3::BenchmarkResult.
store_models
(logical(1))
If TRUE, fitted models are stored in the benchmark result (archive$benchmark_result). If store_benchmark_result = FALSE, models are only stored temporarily and are not accessible after the tuning. This combination is needed for measures that require a model.
check_values
(logical(1))
If TRUE, hyperparameter values are checked before evaluation and performance scores after. If FALSE (default), values are not checked, which reduces the computational overhead.
allow_hotstart
(logical(1))
Allow hotstarting of learners with previously fitted models. See also mlr3::HotstartStack. The learner must support hotstarting. Sets store_models = TRUE.
keep_hotstart_stack
(logical(1))
If TRUE, the mlr3::HotstartStack is kept in $objective$hotstart_stack after tuning.
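As referenced above, the search space can alternatively be declared directly on the learner via paradox::to_tune() tokens, leaving search_space at NULL. A minimal sketch, assuming mlr3tuning is attached:
# search space constructed from TuneToken in the learner's parameter set
learner = lrn("classif.rpart",
  cp = to_tune(0.001, 0.1),
  minsplit = to_tune(1, 10)
)
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  terminator = trm("evals", n_evals = 5)
)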
Method assign_result()
The Tuner object writes the best found points and estimated performance values here. For internal use.
Arguments
xdt
(data.table::data.table())
Hyperparameter values as data.table::data.table(). Each row is one configuration. Contains values in the search space. Can contain additional columns for extra information.
ydt
(data.table::data.table())
Optimal outcomes, e.g. the Pareto front.
learner_param_vals
(List of named list()s)
Fixed parameter values of the learner that are not part of the search space.
Examples
library(data.table)
# define search space
search_space = ps(
  cp = p_dbl(lower = 0.001, upper = 0.1),
  minsplit = p_int(lower = 1, upper = 10)
)
# initialize instance
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)
# generate design
design = data.table(cp = c(0.05, 0.01), minsplit = c(5, 3))
# eval design
instance$eval_batch(design)
# show archive
instance$archive
#> <ArchiveTuning>
#> cp minsplit classif.ce classif.acc runtime_learners timestamp
#> 1: 0.05 5 0.04 0.96 0.009 2022-05-03 05:02:11.64
#> 2: 0.01 3 0.02 0.98 0.008 2022-05-03 05:02:11.64
#> batch_nr warnings errors resample_result
#> 1: 1 0 0 <ResampleResult[22]>
#> 2: 1 0 0 <ResampleResult[22]>
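In practice, the instance is usually handed to a Tuner instead of being evaluated by hand. A hedged sketch of that workflow, continuing from the instance above:
# let a tuner spend the remaining budget, then inspect the result
tuner = tnr("random_search", batch_size = 2)
tuner$optimize(instance)
instance$result                     # Pareto-optimal configurations and their scores
instance$result_learner_param_vals  # corresponding learner parameter values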