Class for Multi-Criteria Tuning
Source: R/TuningInstanceBatchMulticrit.R, TuningInstanceBatchMultiCrit.Rd
The TuningInstanceBatchMultiCrit specifies a tuning problem for a Tuner.
The function ti() creates a TuningInstanceBatchMultiCrit, and the function tune() creates an instance internally.
Details
The instance contains an ObjectiveTuningBatch object that encodes the black box objective function a Tuner has to optimize.
The instance allows the basic operations of querying the objective at design points ($eval_batch()).
This operation is usually done by the Tuner.
Evaluations of hyperparameter configurations are performed in batches by calling mlr3::benchmark()
internally.
The evaluated hyperparameter configurations are stored in the ArchiveBatchTuning ($archive).
Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget.
If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.
The tuner is also supposed to store its final result, consisting of a selected hyperparameter configuration and associated estimated performance values, by calling the method instance$assign_result.
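For illustration, a minimal sketch of evaluating design points manually via $eval_batch(); normally a Tuner performs this step. The task, learner, and search space below are only assumptions for the sketch.

library(mlr3tuning)
# construct a small multi-criteria tuning instance
instance = ti(
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1)),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "time_train")),
  terminator = trm("evals", n_evals = 10)
)
# evaluate two design points; column names and values must match the search space
instance$eval_batch(data.table::data.table(cp = c(0.01, 0.05)))
# the evaluated configurations are appended to the archive
instance$archive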
Resources
There are several sections about hyperparameter optimization in the mlr3book.
Getting started with hyperparameter optimization.
An overview of all tuners can be found on our website.
Tune a support vector machine on the Sonar data set.
Learn about tuning spaces.
Estimate the model performance with nested resampling.
Learn about multi-objective optimization.
Simultaneously optimize hyperparameters and use early stopping with XGBoost.
Automate the tuning.
The gallery features a collection of case studies and demos about optimization.
Learn more advanced methods with the Practical Tuning Series.
Learn about hotstarting models.
Run the default hyperparameter configuration of learners as a baseline.
Use the Hyperband optimizer with different budget parameters.
The cheatsheet summarizes the most important functions of mlr3tuning.
Analysis
For analyzing the tuning results, it is recommended to pass the ArchiveBatchTuning to as.data.table().
The returned data table is joined with the benchmark result which adds the mlr3::ResampleResult for each hyperparameter evaluation.
The archive provides various getters (e.g. $learners()) to ease access.
All getters extract by position (i) or unique hash (uhash).
For a complete list of all getters see the methods section.
The benchmark result ($benchmark_result) allows scoring the hyperparameter configurations again on a different measure.
Alternatively, measures can be supplied to as.data.table().
The mlr3viz package provides visualizations for tuning results.
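A minimal sketch of these analysis steps, assuming an instance that has already been tuned as in the example below; msr("classif.acc") is only an illustrative measure.

# archive as a table, scoring an additional measure on the stored resample results
tab = as.data.table(instance$archive, measures = msrs("classif.acc"))
# getters extract by position (i) or unique hash (uhash)
instance$archive$learners(i = 1)
# score the benchmark result on a different measure
instance$archive$benchmark_result$score(msr("classif.acc"))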
Super classes
bbotk::OptimInstance
-> bbotk::OptimInstanceBatch
-> bbotk::OptimInstanceBatchMultiCrit
-> TuningInstanceBatchMultiCrit
Public fields
internal_search_space
(paradox::ParamSet)
The search space containing those parameters that are internally optimized by the mlr3::Learner.
Active bindings
result_learner_param_vals
(list())
List of param values for the optimal learner call.
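A minimal sketch, assuming the instance has already been tuned: pick one configuration from the Pareto front and apply its parameter values to a fresh learner for final training.

learner = lrn("classif.rpart")
# parameter values of the first result configuration
learner$param_set$values = instance$result_learner_param_vals[[1]]
learner$train(tsk("penguins"))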
Methods
Method new()
Creates a new instance of this R6 class.
Usage
TuningInstanceBatchMultiCrit$new(
task,
learner,
resampling,
measures,
terminator,
search_space = NULL,
internal_search_space = NULL,
store_benchmark_result = TRUE,
store_models = FALSE,
check_values = FALSE,
callbacks = NULL
)
Arguments
task
(mlr3::Task)
Task to operate on.
learner
(mlr3::Learner)
Learner to tune.
resampling
(mlr3::Resampling)
Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.
measures
(list of mlr3::Measure)
Measures to optimize.
terminator
(bbotk::Terminator)
Stop criterion of the tuning process.
search_space
(paradox::ParamSet)
Hyperparameter search space. If NULL (default), the search space is constructed from the paradox::TuneToken of the learner's parameter set (learner$param_set).
internal_search_space
(paradox::ParamSet or NULL)
The internal search space.
store_benchmark_result
(logical(1))
If TRUE (default), store resample result of evaluated hyperparameter configurations in archive as mlr3::BenchmarkResult.
store_models
(logical(1))
If TRUE, fitted models are stored in the benchmark result (archive$benchmark_result). If store_benchmark_result = FALSE, models are only stored temporarily and not accessible after the tuning. This combination is needed for measures that require a model.
check_values
(logical(1))
If TRUE, hyperparameter values are checked before evaluation and performance scores after. If FALSE (default), values are unchecked but computational overhead is reduced.
callbacks
(list of mlr3misc::Callback)
List of callbacks.
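A minimal sketch of constructing the instance directly with $new() instead of ti(); the arguments mirror the example below.

instance = TuningInstanceBatchMultiCrit$new(
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1)),
  resampling = rsmp("cv", folds = 3),
  measures = msrs(c("classif.ce", "time_train")),
  terminator = trm("evals", n_evals = 4)
)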
Method assign_result()
The Tuner object writes the best found points and estimated performance values here. For internal use.
Usage
TuningInstanceBatchMultiCrit$assign_result(
xdt,
ydt,
learner_param_vals = NULL,
extra = NULL,
xydt = NULL,
...
)
Arguments
xdt
(data.table::data.table())
Hyperparameter values as data.table::data.table(). Each row is one configuration. Contains values in the search space. Can contain additional columns for extra information.
ydt
(data.table::data.table())
Optimal outcomes, e.g. the Pareto front.
learner_param_vals
(List of named list()s)
Fixed parameter values of the learner that are not part of the search space.
extra
(data.table::data.table())
Additional information.
xydt
(data.table::data.table())
Point, outcome, and additional information (deprecated).
...
(any)
Ignored.
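A hypothetical sketch of how a custom Tuner might write its result; the selected rows are illustrative only, as a real tuner would determine the non-dominated configurations itself.

data = as.data.table(instance$archive)
# x columns (search space) and y columns (measures) of the chosen configurations
xdt = data[1:2, instance$archive$cols_x, with = FALSE]
ydt = data[1:2, instance$archive$cols_y, with = FALSE]
instance$assign_result(xdt = xdt, ydt = ydt)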
Examples
library(mlr3tuning)
# Hyperparameter optimization on the Palmer Penguins data set
task = tsk("penguins")
# Load learner and set search space
learner = lrn("classif.rpart",
cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)
# Construct tuning instance
instance = ti(
task = task,
learner = learner,
resampling = rsmp("cv", folds = 3),
measures = msrs(c("classif.ce", "time_train")),
terminator = trm("evals", n_evals = 4)
)
# Choose optimization algorithm
tuner = tnr("random_search", batch_size = 2)
# Run tuning
tuner$optimize(instance)
#> cp learner_param_vals x_domain classif.ce time_train
#> <num> <list> <list> <num> <num>
#> 1: -3.259804 <list[2]> <list[1]> 0.09583016 0.003
#> 2: -3.759791 <list[2]> <list[1]> 0.09583016 0.003
#> 3: -2.565382 <list[2]> <list[1]> 0.09583016 0.003
#> 4: -3.080830 <list[2]> <list[1]> 0.09583016 0.003
# Optimal hyperparameter configurations
instance$result
#> cp learner_param_vals x_domain classif.ce time_train
#> <num> <list> <list> <num> <num>
#> 1: -3.259804 <list[2]> <list[1]> 0.09583016 0.003
#> 2: -3.759791 <list[2]> <list[1]> 0.09583016 0.003
#> 3: -2.565382 <list[2]> <list[1]> 0.09583016 0.003
#> 4: -3.080830 <list[2]> <list[1]> 0.09583016 0.003
# Inspect all evaluated configurations
as.data.table(instance$archive)
#> cp classif.ce time_train runtime_learners timestamp
#> <num> <num> <num> <num> <POSc>
#> 1: -3.259804 0.09583016 0.003 0.016 2024-11-08 15:15:07
#> 2: -3.759791 0.09583016 0.003 0.015 2024-11-08 15:15:07
#> 3: -2.565382 0.09583016 0.003 0.015 2024-11-08 15:15:07
#> 4: -3.080830 0.09583016 0.003 0.015 2024-11-08 15:15:07
#> warnings errors x_domain batch_nr resample_result
#> <int> <int> <list> <int> <list>
#> 1: 0 0 <list[1]> 1 <ResampleResult>
#> 2: 0 0 <list[1]> 1 <ResampleResult>
#> 3: 0 0 <list[1]> 2 <ResampleResult>
#> 4: 0 0 <list[1]> 2 <ResampleResult>