Subclass for iterated racing. Calls irace::irace() from package irace.

Source

López-Ibáñez M, Dubois-Lacoste J, Cáceres LP, Birattari M, Stützle T (2016). “The irace package: Iterated racing for automatic algorithm configuration.” Operations Research Perspectives, 3, 43-58. doi:10.1016/j.orp.2016.09.002.

Dictionary

This Tuner can be instantiated with the associated sugar function tnr():

tnr("irace")

Control Parameters

n_instances

integer(1)
Number of resampling instances.

For the meaning of all other parameters, see irace::defaultScenario(). Note that all control parameters that refer to the termination of the algorithm have been removed. Use bbotk::TerminatorEvals instead; other terminators do not work with TunerBatchIrace.
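
Control parameters can be passed directly to the sugar function. A minimal sketch (the value 10 is only an illustration):

# construct the tuner with a custom number of resampling instances
tuner = tnr("irace", n_instances = 10)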

Archive

The ArchiveBatchTuning holds the following additional columns:

  • "race" (integer(1))
    Race iteration.

  • "step" (integer(1))
    Step number of race.

  • "instance" (integer(1))
    Identifies resampling instances across races and steps.

  • "configuration" (integer(1))
    Identifies configurations across races and steps.
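
After tuning, these columns can be inspected alongside the evaluated configurations, for example (a sketch assuming a tuned instance as created in the Examples section):

# show race, step, instance and configuration ids for all evaluations
as.data.table(instance$archive)[, .(race, step, instance, configuration, classif.ce)]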

Result

The tuning result (instance$result) is the best-performing elite of the final race. The reported performance is the average performance estimated on all used instances.
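
Both the result and the corresponding learner parameter values can be read from the instance, as also shown in the Examples section below:

# best configuration and its average performance across the used instances
instance$result

# parameter values on the learner's scale, ready for a final model fit
instance$result_learner_param_vals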

Progress Bars

$optimize() supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as the backend, which can be enabled with progressr::handlers("progress").
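
A minimal sketch, assuming a constructed tuner and tuning instance:

# use the progress package as backend and show a bar during tuning
progressr::handlers("progress")
progressr::with_progress(
  tuner$optimize(instance)
)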

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
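
For example, the verbosity of the tuning process can be reduced by raising the logger threshold (a sketch):

# suppress informational logging output from bbotk
lgr::get_logger("bbotk")$set_threshold("warn")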

Optimizer

This Tuner is based on bbotk::OptimizerBatchIrace, which can be applied to any black box optimization problem. See also the documentation of bbotk.

Resources

There are several sections about hyperparameter optimization in the mlr3book.

  • An overview of all tuners can be found on our website.

  • Learn more about tuners.

The gallery features a collection of case studies and demos about optimization.

  • Use the Hyperband optimizer with different budget parameters.

Methods

Method new()

Creates a new instance of this R6 class.

Usage

TunerBatchIrace$new()

Method optimize()

Performs the tuning on a TuningInstanceBatchSingleCrit until termination. The single evaluations and the final results will be written into the ArchiveBatchTuning that resides in the TuningInstanceBatchSingleCrit. The final result is returned.

Usage

TunerBatchIrace$optimize(inst)

Arguments

inst

(TuningInstanceBatchSingleCrit).

Method clone()

The objects of this class are cloneable with this method.

Usage

TunerBatchIrace$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# retrieve task
task = tsk("pima")

# load learner and set search space
learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE))
# \donttest{
# hyperparameter tuning on the Pima Indians diabetes data set
instance = tune(
  tuner = tnr("irace"),
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 42
)
#> # 2024-06-30 09:41:29 UTC: Initialization
#> # Elitist race
#> # Elitist new instances: 1
#> # Elitist limit: 2
#> # nbIterations: 2
#> # minNbSurvival: 2
#> # nbParameters: 1
#> # seed: 1983921168
#> # confidence level: 0.95
#> # budget: 42
#> # mu: 5
#> # deterministic: FALSE
#> 
#> # 2024-06-30 09:41:29 UTC: Iteration 1 of 2
#> # experimentsUsedSoFar: 0
#> # remainingBudget: 42
#> # currentBudget: 21
#> # nbConfigurations: 3
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          1|          3|          1|    0.2343750000|          3|00:00:00|   NA|  NA|    NA|
#> |x|          2|          3|          2|    0.2558593750|          6|00:00:00|+0.00|0.50|0.3333|
#> |x|          3|          3|          2|    0.2617187500|          9|00:00:00|-0.25|0.17|0.3956|
#> |x|          4|          3|          2|    0.2558593750|         12|00:00:00|+0.06|0.30|0.3339|
#> |=|          5|          3|          1|    0.2515625000|         15|00:00:00|+0.20|0.36|0.3798|
#> |=|          6|          3|          1|    0.2513020833|         18|00:00:00|+0.32|0.43|0.3298|
#> |=|          7|          3|          1|    0.2539062500|         21|00:00:00|+0.05|0.18|0.5337|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:           1    mean value:     0.2539062500
#> Description of the best-so-far configuration:
#>   .ID.                cp .PARENT.
#> 1    1 -3.78704670002288       NA
#> 
#> # 2024-06-30 09:41:30 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>                  cp
#> 1 -3.78704670002288
#> 2 -4.13310515855931
#> # 2024-06-30 09:41:30 UTC: Iteration 2 of 2
#> # experimentsUsedSoFar: 21
#> # remainingBudget: 21
#> # currentBudget: 21
#> # nbConfigurations: 4
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          8|          4|          1|    0.2421875000|          4|00:00:00|   NA|  NA|    NA|
#> |x|          2|          4|          2|    0.2734375000|          6|00:00:00|-0.94|0.03|1.4777|
#> |x|          7|          4|          2|    0.2721354167|          8|00:00:00|-0.47|0.02|0.7426|
#> |x|          5|          4|          1|    0.2568359375|         10|00:00:00|-0.23|0.08|0.7062|
#> |=|          1|          4|          1|    0.2523437500|         12|00:00:00|-0.08|0.14|0.6793|
#> |=|          4|          4|          1|    0.2500000000|         14|00:00:00|+0.05|0.21|0.6232|
#> |=|          3|          4|          1|    0.2527901786|         16|00:00:00|+0.15|0.27|0.5742|
#> |-|          6|          3|          1|    0.2524414062|         18|00:00:00|+0.11|0.23|0.3690|
#> |=|          9|          3|          1|    0.2513020833|         21|00:00:00|+0.10|0.20|0.3333|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:           1    mean value:     0.2513020833
#> Description of the best-so-far configuration:
#>   .ID.                cp .PARENT.
#> 1    1 -3.78704670002288       NA
#> 
#> # 2024-06-30 09:41:31 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>                  cp
#> 1 -3.78704670002288
#> 4 -3.80645732681053
#> # 2024-06-30 09:41:31 UTC: Stopped because budget is exhausted
#> # Iteration: 3
#> # nbIterations: 2
#> # experimentsUsedSoFar: 42
#> # timeUsed: 0
#> # remainingBudget: 0
#> # currentBudget: 21
#> # number of elites: 2
#> # nbConfigurations: 4
#> # Total CPU user time: 1.868, CPU sys time: 0.04, Wall-clock time: 1.908

# best performing hyperparameter configuration
instance$result
#>           cp configuration learner_param_vals  x_domain classif.ce
#>        <num>         <num>             <list>    <list>      <num>
#> 1: -3.787047             1          <list[2]> <list[1]>  0.2513021

# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#>            cp classif.ce x_domain_cp runtime_learners           timestamp
#>         <num>      <num>       <num>            <num>              <POSc>
#>  1: -3.787047  0.2343750 0.022662432            0.013 2024-06-30 09:41:29
#>  2: -4.133105  0.2343750 0.016033016            0.013 2024-06-30 09:41:29
#>  3: -6.647055  0.2343750 0.001297838            0.014 2024-06-30 09:41:29
#>  4: -3.787047  0.2929688 0.022662432            0.013 2024-06-30 09:41:29
#>  5: -4.133105  0.2773438 0.016033016            0.012 2024-06-30 09:41:29
#>  6: -6.647055  0.2890625 0.001297838            0.014 2024-06-30 09:41:29
#>  7: -3.787047  0.2695312 0.022662432            0.013 2024-06-30 09:41:30
#>  8: -4.133105  0.2734375 0.016033016            0.013 2024-06-30 09:41:30
#>  9: -6.647055  0.3281250 0.001297838            0.013 2024-06-30 09:41:30
#> 10: -3.787047  0.2382812 0.022662432            0.011 2024-06-30 09:41:30
#> 11: -4.133105  0.2382812 0.016033016            0.013 2024-06-30 09:41:30
#> 12: -6.647055  0.2656250 0.001297838            0.013 2024-06-30 09:41:30
#> 13: -3.787047  0.2226562 0.022662432            0.013 2024-06-30 09:41:30
#> 14: -4.133105  0.2539062 0.016033016            0.013 2024-06-30 09:41:30
#> 15: -6.647055  0.2617188 0.001297838            0.013 2024-06-30 09:41:30
#> 16: -3.787047  0.2500000 0.022662432            0.014 2024-06-30 09:41:30
#> 17: -4.133105  0.2578125 0.016033016            0.014 2024-06-30 09:41:30
#> 18: -6.647055  0.2773438 0.001297838            0.014 2024-06-30 09:41:30
#> 19: -3.787047  0.2695312 0.022662432            0.013 2024-06-30 09:41:30
#> 20: -4.133105  0.2695312 0.016033016            0.014 2024-06-30 09:41:30
#> 21: -6.647055  0.2500000 0.001297838            0.012 2024-06-30 09:41:30
#> 22: -3.787047  0.2421875 0.022662432            0.013 2024-06-30 09:41:30
#> 23: -4.133105  0.2695312 0.016033016            0.013 2024-06-30 09:41:30
#> 24: -3.806457  0.2421875 0.022226782            0.013 2024-06-30 09:41:30
#> 25: -4.445972  0.2812500 0.011725708            0.013 2024-06-30 09:41:30
#> 26: -3.806457  0.2929688 0.022226782            0.013 2024-06-30 09:41:30
#> 27: -4.445972  0.2773438 0.011725708            0.013 2024-06-30 09:41:30
#> 28: -3.806457  0.2695312 0.022226782            0.033 2024-06-30 09:41:30
#> 29: -4.445972  0.2695312 0.011725708            0.012 2024-06-30 09:41:30
#> 30: -3.806457  0.2226562 0.022226782            0.013 2024-06-30 09:41:30
#> 31: -4.445972  0.2304688 0.011725708            0.011 2024-06-30 09:41:30
#> 32: -3.806457  0.2343750 0.022226782            0.012 2024-06-30 09:41:31
#> 33: -4.445972  0.2421875 0.011725708            0.012 2024-06-30 09:41:31
#> 34: -3.806457  0.2382812 0.022226782            0.013 2024-06-30 09:41:31
#> 35: -4.445972  0.2578125 0.011725708            0.013 2024-06-30 09:41:31
#> 36: -3.806457  0.2695312 0.022226782            0.013 2024-06-30 09:41:31
#> 37: -4.445972  0.2734375 0.011725708            0.013 2024-06-30 09:41:31
#> 38: -3.806457  0.2500000 0.022226782            0.013 2024-06-30 09:41:31
#> 39: -4.445972  0.2578125 0.011725708            0.014 2024-06-30 09:41:31
#> 40: -3.787047  0.2421875 0.022662432            0.011 2024-06-30 09:41:31
#> 41: -4.133105  0.2421875 0.016033016            0.013 2024-06-30 09:41:31
#> 42: -3.806457  0.2421875 0.022226782            0.013 2024-06-30 09:41:31
#>            cp classif.ce x_domain_cp runtime_learners           timestamp
#>     batch_nr  race  step instance configuration warnings errors
#>        <int> <num> <int>    <int>         <num>    <int>  <int>
#>  1:        1     1     1       10             1        0      0
#>  2:        1     1     1       10             2        0      0
#>  3:        1     1     1       10             3        0      0
#>  4:        2     1     1        3             1        0      0
#>  5:        2     1     1        3             2        0      0
#>  6:        2     1     1        3             3        0      0
#>  7:        3     1     1        6             1        0      0
#>  8:        3     1     1        6             2        0      0
#>  9:        3     1     1        6             3        0      0
#> 10:        4     1     1        8             1        0      0
#> 11:        4     1     1        8             2        0      0
#> 12:        4     1     1        8             3        0      0
#> 13:        5     1     1        7             1        0      0
#> 14:        5     1     1        7             2        0      0
#> 15:        5     1     1        7             3        0      0
#> 16:        6     1     1        1             1        0      0
#> 17:        6     1     1        1             2        0      0
#> 18:        6     1     1        1             3        0      0
#> 19:        7     1     1        9             1        0      0
#> 20:        7     1     1        9             2        0      0
#> 21:        7     1     1        9             3        0      0
#> 22:        8     2     1        5             1        0      0
#> 23:        8     2     1        5             2        0      0
#> 24:        8     2     1        5             4        0      0
#> 25:        8     2     1        5             5        0      0
#> 26:        9     2     1        3             4        0      0
#> 27:        9     2     1        3             5        0      0
#> 28:       10     2     1        9             4        0      0
#> 29:       10     2     1        9             5        0      0
#> 30:       11     2     1        7             4        0      0
#> 31:       11     2     1        7             5        0      0
#> 32:       12     2     1       10             4        0      0
#> 33:       12     2     1       10             5        0      0
#> 34:       13     2     1        8             4        0      0
#> 35:       13     2     1        8             5        0      0
#> 36:       14     2     1        6             4        0      0
#> 37:       14     2     1        6             5        0      0
#> 38:       15     2     1        1             4        0      0
#> 39:       15     2     1        1             5        0      0
#> 40:       16     2     1        4             1        0      0
#> 41:       16     2     1        4             2        0      0
#> 42:       16     2     1        4             4        0      0
#>     batch_nr  race  step instance configuration warnings errors
#>      resample_result
#>               <list>
#>  1: <ResampleResult>
#>  2: <ResampleResult>
#>  3: <ResampleResult>
#>  4: <ResampleResult>
#>  5: <ResampleResult>
#>  6: <ResampleResult>
#>  7: <ResampleResult>
#>  8: <ResampleResult>
#>  9: <ResampleResult>
#> 10: <ResampleResult>
#> 11: <ResampleResult>
#> 12: <ResampleResult>
#> 13: <ResampleResult>
#> 14: <ResampleResult>
#> 15: <ResampleResult>
#> 16: <ResampleResult>
#> 17: <ResampleResult>
#> 18: <ResampleResult>
#> 19: <ResampleResult>
#> 20: <ResampleResult>
#> 21: <ResampleResult>
#> 22: <ResampleResult>
#> 23: <ResampleResult>
#> 24: <ResampleResult>
#> 25: <ResampleResult>
#> 26: <ResampleResult>
#> 27: <ResampleResult>
#> 28: <ResampleResult>
#> 29: <ResampleResult>
#> 30: <ResampleResult>
#> 31: <ResampleResult>
#> 32: <ResampleResult>
#> 33: <ResampleResult>
#> 34: <ResampleResult>
#> 35: <ResampleResult>
#> 36: <ResampleResult>
#> 37: <ResampleResult>
#> 38: <ResampleResult>
#> 39: <ResampleResult>
#> 40: <ResampleResult>
#> 41: <ResampleResult>
#> 42: <ResampleResult>
#>      resample_result

# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)
# }