Specifies a general multi-criteria tuning scenario, including objective
function and archive for Tuners to act upon. This class stores an
ObjectiveTuning object that encodes the black box objective function which
a Tuner has to optimize. It allows the basic operations of querying the
objective at design points ($eval_batch()), storing the evaluations in the
Archive ($archive), and accessing the final result ($result).
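A minimal sketch of this workflow, assuming the rpart learner is installed; random search is just one possible Tuner:

library(mlr3tuning)  # also attaches mlr3 and paradox

# construct a multi-criteria tuning instance
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  search_space = ps(cp = p_dbl(lower = 0.001, upper = 0.1)),
  terminator = trm("evals", n_evals = 10)
)

# a Tuner queries the objective via $eval_batch() internally
tuner = tnr("random_search")
tuner$optimize(instance)

instance$archive  # all evaluated configurations
instance$result   # final result: the Pareto-optimal configurations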
Evaluations of hyperparameter configurations are performed in batches by
calling mlr3::benchmark() internally. Before a batch is evaluated, the
bbotk::Terminator is queried for the remaining budget. If the available
budget is exhausted, an exception is raised, and no further evaluations can
be performed from this point on.
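For example (a minimal sketch): with a budget of a single evaluation, the second batch raises the exception:

library(mlr3tuning)

instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  search_space = ps(cp = p_dbl(lower = 0.001, upper = 0.1)),
  terminator = trm("evals", n_evals = 1)
)

instance$eval_batch(data.table::data.table(cp = 0.05))       # consumes the whole budget
try(instance$eval_batch(data.table::data.table(cp = 0.01)))  # raises an exception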
The tuner is also supposed to store its final result, consisting of a
selected hyperparameter configuration and associated estimated performance
values, by calling the method $assign_result().
result_learner_param_vals: List of param values for the optimal learner call.
Creates a new instance of this R6 class.
This defines the resampled performance of a learner on a task, a feasibility region for the parameters the tuner is supposed to optimize, and a termination criterion.
TuningInstanceMultiCrit$new(
  task,
  learner,
  resampling,
  measures,
  terminator,
  search_space = NULL,
  store_models = FALSE,
  check_values = FALSE,
  store_benchmark_result = TRUE
)
task: Task to operate on.
resampling: Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners may change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.
search_space: Hyperparameter search space. If NULL (default), the search space is constructed from the TuneToken objects in the ParamSet of the learner (see the sketch after this argument list).
store_models: If FALSE (default), the fitted models are not stored in the
BenchmarkResult. If store_benchmark_result = FALSE, the models are
only stored temporarily and not accessible after the tuning. This combination
might be useful for measures that require a model.
check_values: Should the parameters be checked for validity before each evaluation, and the results afterwards?
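As an alternative to passing search_space explicitly, the parameters to tune can be marked directly in the learner via to_tune(), leaving search_space = NULL; a minimal sketch:

library(mlr3tuning)

learner = lrn("classif.rpart")
learner$param_set$values$cp = to_tune(0.001, 0.1)
learner$param_set$values$minsplit = to_tune(1, 10)

# search_space = NULL: the space is constructed from the TuneToken above
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  terminator = trm("evals", n_evals = 5)
)
instance$search_space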
The Tuner object writes the best found points and estimated performance values here. For internal use.
TuningInstanceMultiCrit$assign_result(xdt, ydt, learner_param_vals = NULL)
xdt: x values as data.table. Each row is one point. Contains the values in
the search space of the TuningInstanceMultiCrit object. Can contain
additional columns for extra information.
ydt: Optimal outcomes, e.g. the Pareto front.
learner_param_vals: Fixed parameter values of the learner that are neither part of the search space nor the learner's defaults.
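A minimal sketch of how a tuner might record its result, assuming an instance like the one in the examples below, whose measures are classif.ce and classif.acc:

# xdt: Pareto-optimal points in the search space; ydt: their outcomes
instance$assign_result(
  xdt = data.table::data.table(cp = c(0.05, 0.01), minsplit = c(5, 3)),
  ydt = data.table::data.table(
    classif.ce = c(0.1, 0.1),
    classif.acc = c(0.9, 0.9)
  )
)
instance$result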
The objects of this class are cloneable with this method.
TuningInstanceMultiCrit$clone(deep = FALSE)
deep: Whether to make a deep clone.
library(data.table)
library(mlr3tuning)

# define search space
search_space = ps(
  cp = p_dbl(lower = 0.001, upper = 0.1),
  minsplit = p_int(lower = 1, upper = 10)
)

# initialize instance
instance = TuningInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)

# generate design
design = data.table(cp = c(0.05, 0.01), minsplit = c(5, 3))

# evaluate design
instance$eval_batch(design)

# show archive
instance$archive
#> <ArchiveTuning>
#>      cp minsplit classif.ce classif.acc runtime_learners           timestamp
#> 1: 0.05        5        0.1         0.9            0.009 2021-09-16 04:23:07
#> 2: 0.01        3        0.1         0.9            0.011 2021-09-16 04:23:07
#>    batch_nr resample_result
#> 1:        1 <ResampleResult>
#> 2:        1 <ResampleResult>