Subclass for grid search tuning.
The grid is constructed as a Cartesian product over discretized values per parameter, see paradox::generate_design_grid(). If the learner supports hotstarting, the grid is sorted by the hotstart parameter (see also mlr3::HotstartStack). If not, the points of the grid are evaluated in a random order.
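The discretized grid can be inspected directly with paradox::generate_design_grid(). A minimal sketch, with a search space and resolution chosen purely for illustration:
library(paradox)
# hypothetical search space with two parameters; resolution 3 yields 3 x 3 = 9 grid points
search_space = ps(
  cp = p_dbl(lower = 1e-04, upper = 1e-1),
  minsplit = p_int(lower = 1, upper = 10)
)
generate_design_grid(search_space, resolution = 3)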
Dictionary
This Tuner can be instantiated via the dictionary mlr_tuners or with the associated sugar function tnr():
TunerGridSearch$new()
mlr_tuners$get("grid_search")
tnr("grid_search")
Parameters
resolution
integer(1)
Resolution of the grid, see paradox::generate_design_grid().
param_resolutions
named integer()
Resolution per parameter, named by parameter ID, see paradox::generate_design_grid().
batch_size
integer(1)
Maximum number of points to try in a batch.
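For illustration, a minimal sketch of setting these control parameters via the sugar function; the values below are arbitrary:
tuner = tnr("grid_search", resolution = 5, batch_size = 10)
# alternatively, set a resolution per parameter, named by parameter ID
tuner = tnr("grid_search", param_resolutions = c(cp = 10), batch_size = 10)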
Progress Bars
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as the backend; enable it with progressr::handlers("progress").
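A minimal sketch, assuming a tuning instance instance has already been constructed:
library(progressr)
handlers("progress")  # use the progress package as backend
with_progress(
  tnr("grid_search")$optimize(instance)
)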
Parallelization
In order to support general termination criteria and parallelization, we evaluate points in batches of size batch_size. Larger batches allow for more parallelization, while smaller batches imply a more fine-grained checking of termination criteria. A batch consists of batch_size times resampling$iters jobs. For example, if you set a batch size of 10 points and use 5-fold cross-validation, you can utilize up to 50 cores.
Parallelization is supported via the package future (see the section on parallelization in mlr3::benchmark() for more details).
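A minimal sketch of combining a parallel future backend with this batch mechanism; the worker count, resampling, and termination settings are chosen purely for illustration, and batch_size is assumed to be forwarded to the tuner by tune():
library(future)
plan("multisession", workers = 4)  # evaluate resampling iterations in parallel
instance = tune(
  method = "grid_search",
  task = tsk("pima"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("cv", folds = 5),
  measure = msr("classif.ce"),
  term_evals = 20,
  batch_size = 10
)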
Logging
All Tuners use a logger (as implemented in lgr) from the package bbotk.
Use lgr::get_logger("bbotk") to access and control the logger.
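For example, to reduce the logger's verbosity during tuning (the threshold value is chosen for illustration):
logger = lgr::get_logger("bbotk")
logger$set_threshold("warn")  # only show warnings and errors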
Optimizer
This Tuner is based on bbotk::OptimizerGridSearch which can be applied on any black box optimization problem. See also the documentation of bbotk.
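A minimal sketch of applying the underlying optimizer to a plain black box function via bbotk; the objective, search space, and settings are purely illustrative:
library(bbotk)
library(paradox)
# toy quadratic objective with minimum at x = 2
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = (xs$x - 2)^2),
  domain = ps(x = p_dbl(lower = -5, upper = 5)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)
instance = OptimInstanceSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 20)
)
opt("grid_search", resolution = 10)$optimize(instance)
instance$result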
See also
Package mlr3hyperband for hyperband tuning.
Other Tuner: mlr_tuners_cmaes, mlr_tuners_design_points, mlr_tuners_gensa, mlr_tuners_irace, mlr_tuners_nloptr, mlr_tuners_random_search, mlr_tuners
Super class
mlr3tuning::Tuner -> TunerGridSearch
Examples
# retrieve task
task = tsk("pima")
# load learner and set search space
learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE))
# hyperparameter tuning on the Pima Indians Diabetes data set
instance = tune(
  method = "grid_search",
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)
# best performing hyperparameter configuration
instance$result
#> cp learner_param_vals x_domain classif.ce
#> 1: -4.60517 <list[2]> <list[1]> 0.2695312
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#> cp classif.ce x_domain_cp runtime_learners timestamp
#> 1: -6.907755 0.2968750 0.0010000000 0.014 2022-05-24 04:26:40
#> 2: -7.675284 0.2968750 0.0004641589 0.015 2022-05-24 04:26:40
#> 3: -2.302585 0.2890625 0.1000000000 0.015 2022-05-24 04:26:40
#> 4: -4.605170 0.2695312 0.0100000000 0.016 2022-05-24 04:26:40
#> 5: -6.140227 0.2968750 0.0021544347 0.015 2022-05-24 04:26:40
#> 6: -9.210340 0.2968750 0.0001000000 0.015 2022-05-24 04:26:40
#> 7: -5.372699 0.2968750 0.0046415888 0.017 2022-05-24 04:26:41
#> 8: -3.837642 0.2773438 0.0215443469 0.032 2022-05-24 04:26:41
#> 9: -8.442812 0.2968750 0.0002154435 0.015 2022-05-24 04:26:41
#> 10: -3.070113 0.2890625 0.0464158883 0.014 2022-05-24 04:26:41
#> batch_nr warnings errors resample_result
#> 1: 1 0 0 <ResampleResult[22]>
#> 2: 2 0 0 <ResampleResult[22]>
#> 3: 3 0 0 <ResampleResult[22]>
#> 4: 4 0 0 <ResampleResult[22]>
#> 5: 5 0 0 <ResampleResult[22]>
#> 6: 6 0 0 <ResampleResult[22]>
#> 7: 7 0 0 <ResampleResult[22]>
#> 8: 8 0 0 <ResampleResult[22]>
#> 9: 9 0 0 <ResampleResult[22]>
#> 10: 10 0 0 <ResampleResult[22]>
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)