Tuner class that implements the base functionality each tuner must
provide. A tuner is an object that describes the tuning strategy, i.e. how to
optimize the black-box function and its feasible set defined by the
TuningInstanceSingleCrit / TuningInstanceMultiCrit object.
A tuner must write its result into the TuningInstanceSingleCrit /
TuningInstanceMultiCrit using the
assign_result method of the
bbotk::OptimInstance at the end of the tuning run in order to store the
best hyperparameter configuration and its estimated performance vector.
Abstract base method. Implement it to specify the tuning strategy of your subclass. See the technical details section.
Abstract base method. Implement it to specify how the final configuration is selected. See the technical details section.
A subclass is implemented in the following way:
Inherit from Tuner.
Specify the private abstract method
$.tune() and use it to call into your optimizer.
Within $.tune(), you need to call
instance$eval_batch() to evaluate design points.
The batch evaluation is requested at the TuningInstanceSingleCrit /
TuningInstanceMultiCrit instance, so each batch is possibly
executed in parallel via
mlr3::benchmark(), and all evaluations are stored in its ArchiveTuning.
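The steps above can be sketched as follows. This is a minimal, hypothetical subclass: the name TunerRandomSketch, its batch_size control parameter, and the chosen param_classes / properties are assumptions for illustration, not part of mlr3tuning. It draws random configurations with paradox::generate_design_random() and relies on eval_batch() raising "terminated_error" once the Terminator fires:

```r
library(R6)
library(paradox)
library(mlr3tuning)

# Hypothetical minimal tuner: inherits from Tuner and implements the
# private $.tune() method, which repeatedly asks the instance to
# evaluate batches of randomly sampled configurations.
TunerRandomSketch = R6Class("TunerRandomSketch",
  inherit = Tuner,
  public = list(
    initialize = function() {
      super$initialize(
        param_set = ParamSet$new(list(
          ParamInt$new("batch_size", lower = 1L, default = 1L)
        )),
        param_classes = c("ParamDbl", "ParamInt", "ParamFct", "ParamLgl"),
        properties = c("dependencies", "single-crit", "multi-crit")
      )
    }
  ),
  private = list(
    .tune = function(instance) {
      batch_size = self$param_set$values$batch_size
      if (is.null(batch_size)) batch_size = 1L
      # The loop is left via the "terminated_error" raised inside
      # eval_batch() once the Terminator is positive.
      repeat {
        design = generate_design_random(instance$search_space, batch_size)
        instance$eval_batch(design$data)
      }
    }
  )
)
```

The built-in random search tuner follows essentially this pattern; real tuners additionally use the archive to propose informed candidate points.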
Before the batch evaluation, the bbotk::Terminator is checked, and if it is
positive, an exception of class
"terminated_error" is generated. In this
case the current batch of evaluations is still stored in the archive,
but the numeric scores are not sent back to the handling optimizer, as it has
lost execution control.
After such an exception has been caught, we select the best configuration from
instance$archive and return it.
Note that therefore more points than specified by the bbotk::Terminator may be evaluated, as the Terminator is only checked before a batch evaluation, and not in between the evaluations within a batch. How many additional points are evaluated depends on the batch size.
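For example (a sketch, assuming random search with a batch size of 4 and an evaluation budget of 5; the task and search space are illustrative):

```r
library(mlr3)
library(mlr3tuning)
library(paradox)

# The Terminator allows 5 evaluations, but each batch contains 4 points.
# The check happens only before a batch: after batch 1, only 4 < 5
# evaluations are done, so batch 2 is started as well, leaving 8
# evaluated points in the archive -- 3 more than n_evals.
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ParamSet$new(list(
    ParamDbl$new("cp", lower = 0.001, upper = 0.1)
  )),
  terminator = trm("evals", n_evals = 5)
)
tuner = tnr("random_search", batch_size = 4)
tuner$optimize(instance)
nrow(as.data.table(instance$archive)) # 8
```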
Overwrite the private super-method
.assign_result() if you want to decide
yourself how the final configuration in the instance and its
estimated performance are determined. The default behavior is: we pick the best
resample experiment, regarding the given measure, then assign its
configuration and aggregated performance to the instance.
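A sketch of such an override (TunerSmallestCp is hypothetical; it reuses the $.tune() of the built-in random search and only replaces result selection, assuming the search space contains a parameter "cp" and that the archive exposes its objective columns via $cols_y):

```r
library(R6)
library(data.table)
library(mlr3tuning)

# Hypothetical: instead of the best score, assign the configuration with
# the smallest "cp" value among all evaluated points (illustration only).
TunerSmallestCp = R6Class("TunerSmallestCp",
  inherit = TunerRandomSearch,
  private = list(
    .assign_result = function(inst) {
      tab = as.data.table(inst$archive)
      best = tab[which.min(tab$cp)]
      inst$assign_result(
        xdt = best[, "cp"],
        y = unlist(best[, inst$archive$cols_y, with = FALSE])
      )
    }
  )
)
```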
Creates a new instance of this R6 class.
Tuner$new(param_set, param_classes, properties, packages = character())
Set of control parameters for the tuner.
Helper for print outputs.
Performs the tuning on a TuningInstanceSingleCrit or TuningInstanceMultiCrit until termination. The single evaluations will be written into the ArchiveTuning that resides in the TuningInstanceSingleCrit/TuningInstanceMultiCrit. The result will be written into the instance object.
The objects of this class are cloneable with this method.
Tuner$clone(deep = FALSE)
Whether to make a deep clone.
library(mlr3)
library(paradox)

search_space = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1)
))

terminator = trm("evals", n_evals = 3)

instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = terminator
)

# swap this line to use a different Tuner
tt = tnr("random_search")

# modifies the instance by reference
tt$optimize(instance)
#>            cp learner_param_vals x_domain classif.ce
#> 1: 0.02116277             <list>   <list>       0.04

# returns best configuration and best performance
instance$result
#>            cp learner_param_vals x_domain classif.ce
#> 1: 0.02116277             <list>   <list>       0.04

# allows access of data.table / benchmark result of full path of all
# evaluations
instance$archive
#> <ArchiveTuning>
#>            cp classif.ce                                uhash x_domain
#> 1: 0.02116277       0.04 091f8e06-5129-4bf6-b033-d2e3a631147b   <list>
#> 2: 0.02644131       0.04 3e7744a1-7787-4f52-b497-cd4581260d0f   <list>
#> 3: 0.06178655       0.04 ae73cd5a-5ae7-443d-8dc7-e93fbd064dbd   <list>
#>              timestamp batch_nr
#> 1: 2020-09-28 04:30:39        1
#> 2: 2020-09-28 04:30:39        2
#> 3: 2020-09-28 04:30:39        3