Abstract Tuner class that implements the base functionality each tuner must provide. A tuner is an object that describes the tuning strategy, i.e. how to optimize the black-box function and its feasible set defined by the TuningInstance object.

A list of measures can be passed to the instance, and all of them are always evaluated. However, single-criterion tuners optimize only the first measure.

A tuner must write its result to the TuningInstance via its assign_result method at the end of tuning in order to store the selected best hyperparameter configuration and its estimated performance vector.

Format

R6::R6Class object.

Construction

tuner = Tuner$new(param_set, param_classes, properties, packages = character())
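Tuner itself is abstract; a concrete tuner calls this constructor from its own initialize() method via super$initialize(). The following is a minimal sketch of such a call; the concrete values (the batch_size setting, the accepted parameter classes, and the empty properties) are illustrative assumptions, not a prescribed interface.

library(R6)
library(paradox)

TunerMySweep = R6Class("TunerMySweep",
  inherit = Tuner,
  public = list(
    initialize = function() {
      super$initialize(
        # settings of the tuner itself, e.g. a batch size (assumed here)
        param_set = ParamSet$new(list(
          ParamInt$new("batch_size", lower = 1L, default = 10L)
        )),
        # parameter classes the tuner can handle in the search space
        param_classes = c("ParamDbl", "ParamInt"),
        # declared capabilities of the tuner (assumed empty here)
        properties = character(),
        # packages that must be loaded for tuning
        packages = character()
      )
    }
  )
)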

Fields

Methods

Private Methods

  • tune_internal(instance) -> NULL
    Abstract base method. Implement to specify the tuning behavior of your subclass. See the Technical Details section.

  • assign_result(instance) -> NULL
    Abstract base method. Implement to specify how the final configuration is selected. See the Technical Details section.

Technical Details and Subclasses

A subclass is implemented in the following way (a minimal sketch is given after this list):

  • Inherit from Tuner

  • Specify the private abstract method $tune_internal() and use it to call into your optimizer.

  • You need to call instance$eval_batch() to evaluate design points.

  • The batch evaluation is requested at the TuningInstance object instance, so each batch is possibly executed in parallel via mlr3::benchmark(), and all evaluations are stored inside instance$bmr.

  • Before and after each batch evaluation, the Terminator is checked, and if it is positive, an exception of class "terminated_error" is raised. In the latter case the current batch of evaluations is still stored in instance, but the numeric scores are not sent back to the handling optimizer, as it has lost execution control.

  • After such an exception has been caught, we select the best configuration from instance$bmr and return it.

  • Note that therefore more points than specified by the Terminator may be evaluated, as the Terminator is only checked before and after a batch evaluation, not in between evaluations within a batch. How many more depends on the batch size.

  • Overwrite the private super-method assign_result if you want to decide yourself how the final configuration and its estimated performance are selected and stored in the instance. The default behavior is: we pick the best resample result with respect to the first measure, then assign its configuration and aggregated performance to the instance.
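Putting these steps together, a minimal subclass could look as sketched below, loosely modeled on a random search. generate_design_random() comes from paradox; the batch_size setting and the empty properties are illustrative assumptions, and TunerRandomSearch in mlr3tuning is the actual reference implementation.

library(R6)
library(paradox)

TunerMyRandomSearch = R6Class("TunerMyRandomSearch",
  inherit = Tuner,
  public = list(
    initialize = function() {
      super$initialize(
        param_set = ParamSet$new(list(
          ParamInt$new("batch_size", lower = 1L, default = 1L)
        )),
        param_classes = c("ParamLgl", "ParamInt", "ParamDbl", "ParamFct"),
        properties = character()
      )
    }
  ),
  private = list(
    tune_internal = function(instance) {
      batch_size = self$param_set$values$batch_size
      if (is.null(batch_size)) batch_size = 1L
      repeat {
        # sample a random batch from the search space of the instance and
        # evaluate it; eval_batch() stores the results in instance$bmr and
        # raises a "terminated_error" once the Terminator is positive,
        # which is caught by Tuner$tune(). A larger batch_size therefore
        # means more evaluations may exceed the Terminator budget.
        design = generate_design_random(instance$param_set, batch_size)
        instance$eval_batch(design$data)
      }
    }
    # assign_result() is not overwritten, so the default of Tuner applies:
    # the best resample result w.r.t. the first measure is assigned to the
    # instance together with its aggregated performance.
  )
)

Such a tuner can then be used directly, e.g. via TunerMyRandomSearch$new()$tune(instance), without registering it in the tnr() dictionary.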

See also

Examples

library(mlr3)
library(mlr3tuning)
library(paradox)

param_set = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1)
))

terminator = term("evals", n_evals = 3)
instance = TuningInstance$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  param_set = param_set,
  terminator = terminator
)

tt = tnr("random_search") # swap this line to use a different Tuner
tt$tune(instance) # modifies the instance by reference
instance$result # returns best configuration and best performance
#> $tune_x
#> $tune_x$cp
#> [1] 0.02685743
#>
#>
#> $params
#> $params$xval
#> [1] 0
#>
#> $params$cp
#> [1] 0.02685743
#>
#>
#> $perf
#> classif.ce
#>       0.04
#>
instance$archive() # data.table / benchmark result with the full path of all evaluations
#>    nr batch_nr  resample_result task_id    learner_id resampling_id iters
#> 1:  1        1 <ResampleResult>    iris classif.rpart       holdout     1
#> 2:  2        2 <ResampleResult>    iris classif.rpart       holdout     1
#> 3:  3        3 <ResampleResult>    iris classif.rpart       holdout     1
#>    params tune_x warnings errors classif.ce
#> 1: <list> <list>        0      0       0.04
#> 2: <list> <list>        0      0       0.04
#> 3: <list> <list>        0      0       0.04