
Subclass for random search tuning.

Source

Bergstra J, Bengio Y (2012). “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research, 13(10), 281–305. https://jmlr.csail.mit.edu/papers/v13/bergstra12a.html.

Details

The random points are sampled by paradox::generate_design_random().
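For illustration, a minimal sketch of how random configurations are drawn from a search space with paradox (the toy search space below is a made-up example):

library(paradox)

# define a toy search space and draw 5 random configurations from it
search_space = ps(cp = p_dbl(lower = 1e-04, upper = 1e-1))
design = generate_design_random(search_space, n = 5)
design$data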

Dictionary

This Tuner can be instantiated with the associated sugar function tnr():

tnr("random_search")

Parallelization

In order to support general termination criteria and parallelization, points are evaluated in batches of size batch_size. Larger batches allow more parallelization, while smaller batches imply a more fine-grained checking of termination criteria. A batch consists of batch_size times resampling$iters jobs. For example, if you set a batch size of 10 points and do a 5-fold cross-validation, you can utilize up to 50 cores.

Parallelization is supported via package future (see mlr3::benchmark()'s section on parallelization for more details).
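A sketch of such a setup, assuming the future package is installed (the task, learner, and budget below are illustrative): with a batch size of 10 points and 5-fold cross-validation, each batch yields 50 resampling jobs that can run in parallel.

library(mlr3tuning)

# start parallel workers via future
future::plan("multisession")

instance = tune(
  tuner = tnr("random_search", batch_size = 10),
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
  resampling = rsmp("cv", folds = 5),
  measure = msr("classif.ce"),
  term_evals = 20
)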

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
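For example, to reduce the verbosity of the tuning log (the threshold below is one of lgr's standard levels):

# print only warnings and errors from the bbotk logger
lgr::get_logger("bbotk")$set_threshold("warn")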

Optimizer

This Tuner is based on bbotk::OptimizerBatchRandomSearch which can be applied on any black box optimization problem. See also the documentation of bbotk.
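A minimal sketch of using the underlying optimizer directly on a plain black-box problem (the objective function and budget below are made-up examples):

library(bbotk)
library(paradox)

# black box objective: minimize (x - 2)^2 over x in [-10, 10]
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = (xs$x - 2)^2),
  domain = ps(x = p_dbl(lower = -10, upper = 10)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 20)
)

# random search on the plain optimization instance
optimizer = opt("random_search")
optimizer$optimize(instance)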

Parameters

batch_size

integer(1)
Maximum number of points to try in a batch.
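For instance, the batch size can be set at construction or changed later via the parameter set:

tuner = tnr("random_search", batch_size = 5)

# equivalent: change the value on an existing tuner
tuner$param_set$values$batch_size = 10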

Resources

There are several sections about hyperparameter optimization in the mlr3book.

  • An overview of all tuners can be found on our website.

  • Learn more about tuners.

The gallery features a collection of case studies and demos about optimization.

  • Use the Hyperband optimizer with different budget parameters.

Progress Bars

$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as the backend; enable it with progressr::handlers("progress").
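A brief sketch of enabling progress bars for a tuning run (the tuning setup below is illustrative):

library(progressr)
library(mlr3tuning)

# use the progress package as backend
handlers("progress")

with_progress({
  instance = tune(
    tuner = tnr("random_search"),
    task = tsk("penguins"),
    learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    term_evals = 10
  )
})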

Methods

Method new()

Creates a new instance of this R6 class.

Usage

TunerBatchRandomSearch$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

TunerBatchRandomSearch$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Hyperparameter Optimization

# load package
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)

# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("random_search"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)

# best performing hyperparameter configuration
instance$result
#>           cp learner_param_vals  x_domain classif.ce
#>        <num>             <list>    <list>      <num>
#> 1: -4.398173          <list[2]> <list[1]> 0.06086957

# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#>            cp classif.ce  x_domain_cp runtime_learners           timestamp
#>         <num>      <num>        <num>            <num>              <POSc>
#>  1: -4.398173 0.06086957 0.0122997877            0.005 2024-07-24 10:53:25
#>  2: -5.466948 0.06086957 0.0042241028            0.005 2024-07-24 10:53:25
#>  3: -7.609426 0.06086957 0.0004957563            0.007 2024-07-24 10:53:25
#>  4: -7.528905 0.06086957 0.0005373265            0.006 2024-07-24 10:53:25
#>  5: -5.643985 0.06086957 0.0035387370            0.006 2024-07-24 10:53:25
#>  6: -3.753571 0.06086957 0.0234339088            0.005 2024-07-24 10:53:26
#>  7: -6.287287 0.06086957 0.0018597984            0.006 2024-07-24 10:53:26
#>  8: -6.211471 0.06086957 0.0020062850            0.005 2024-07-24 10:53:26
#>  9: -2.723944 0.06086957 0.0656154784            0.025 2024-07-24 10:53:26
#> 10: -8.887886 0.06086957 0.0001380511            0.005 2024-07-24 10:53:26
#>     warnings errors batch_nr  resample_result
#>        <int>  <int>    <int>           <list>
#>  1:        0      0        1 <ResampleResult>
#>  2:        0      0        2 <ResampleResult>
#>  3:        0      0        3 <ResampleResult>
#>  4:        0      0        4 <ResampleResult>
#>  5:        0      0        5 <ResampleResult>
#>  6:        0      0        6 <ResampleResult>
#>  7:        0      0        7 <ResampleResult>
#>  8:        0      0        8 <ResampleResult>
#>  9:        0      0        9 <ResampleResult>
#> 10:        0      0       10 <ResampleResult>

# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))