The TunerBatch class implements the optimization algorithm for batch tuning.
Details
TunerBatch is an abstract base class that implements the base functionality each tuner must provide. A subclass is implemented in the following way:
- Inherit from Tuner.
- Specify the private abstract method $.optimize() and use it to call into your optimizer (a minimal sketch follows this list).
- You need to call instance$eval_batch() to evaluate design points.
- The batch evaluation is requested at the TuningInstanceBatchSingleCrit/TuningInstanceBatchMultiCrit object instance, so each batch is possibly executed in parallel via mlr3::benchmark(), and all evaluations are stored inside of instance$archive.
- Before the batch evaluation, the bbotk::Terminator is checked, and if it is positive, an exception of class "terminated_error" is generated. In the latter case, the current batch of evaluations is still stored in instance, but the numeric scores are not sent back to the handling optimizer, as it has lost execution control.
- After such an exception is caught, we select the best configuration from instance$archive and return it.
- Note that, as a consequence, more points than specified by the bbotk::Terminator may be evaluated, since the Terminator is only checked before a batch evaluation, not between evaluations within a batch. How many more depends on the batch size.
- Overwrite the private super-method .assign_result() if you want to decide yourself how to determine the final configuration in the instance and its estimated performance (a sketch of such an override is shown under Private Methods below). The default behavior is: we pick the best resample experiment according to the given measure, then assign its configuration and aggregated performance to the instance.
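To make these steps concrete, here is a minimal sketch of such a subclass, assuming a hypothetical class TunerBatchMySearch with a made-up batch_size control parameter; it draws random design points until the instance signals termination:

library(R6)
library(paradox)
library(mlr3tuning)

TunerBatchMySearch = R6Class("TunerBatchMySearch",
  inherit = TunerBatch,
  public = list(
    initialize = function() {
      # control parameters are forwarded to TunerBatch$new()
      param_set = ps(batch_size = p_int(lower = 1L))
      param_set$values = list(batch_size = 1L)
      super$initialize(
        id = "my_search",
        param_set = param_set,
        param_classes = c("ParamLgl", "ParamInt", "ParamDbl", "ParamFct"),
        properties = c("dependencies", "single-crit", "multi-crit"),
        label = "My Search"
      )
    }
  ),
  private = list(
    .optimize = function(inst) {
      # loop forever; inst$eval_batch() throws "terminated_error"
      # once the bbotk::Terminator is positive
      repeat {
        design = generate_design_random(inst$search_space,
          self$param_set$values$batch_size)
        inst$eval_batch(design$data)
      }
    }
  )
)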
Private Methods
- .optimize(instance) -> NULL
 Abstract base method. Implement to specify the tuning of your subclass. See the Details section.
- .assign_result(instance) -> NULL
 Abstract base method. Implement to specify how the final configuration is selected. See the Details section.
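As a sketch of the override mentioned in the Details section, a custom .assign_result() for a single-criterion instance could be added to the private list of the TunerBatchMySearch sketch above; the selection shown (taking the single best archive entry, as the default does) is only a placeholder for your own rule:

# inside the private list of the R6 class:
.assign_result = function(inst) {
  # placeholder rule: take the best archive entry;
  # substitute your own selection logic here
  best = inst$archive$best()
  xdt = best[, inst$search_space$ids(), with = FALSE]
  y = unlist(best[, inst$archive$cols_y, with = FALSE])
  inst$assign_result(xdt, y)
}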
Resources
There are several sections about hyperparameter optimization in the mlr3book.
- Getting started with hyperparameter optimization. 
- An overview of all tuners can be found on our website. 
- Tune a support vector machine on the Sonar data set. 
- Learn about tuning spaces. 
- Estimate the model performance with nested resampling. 
- Learn about multi-objective optimization. 
- Simultaneously optimize hyperparameters and use early stopping with XGBoost. 
- Automate the tuning. 
The gallery features a collection of case studies and demos about optimization.
- Learn more advanced methods with the Practical Tuning Series. 
- Learn about hotstarting models. 
- Run the default hyperparameter configuration of learners as a baseline. 
- Use the Hyperband optimizer with different budget parameters. 
The cheatsheet summarizes the most important functions of mlr3tuning.
Super class
mlr3tuning::Tuner -> TunerBatch
Methods
Method new()
Creates a new instance of this R6 class.
Usage
TunerBatch$new(
  id = "tuner_batch",
  param_set,
  param_classes,
  properties,
  packages = character(),
  label = NA_character_,
  man = NA_character_
)
Arguments
- id
 (character(1))
 Identifier for the new instance.
- param_set
 (paradox::ParamSet)
 Set of control parameters.
- param_classes
 (character())
 Supported parameter classes for learner hyperparameters that the tuner can optimize, as given in the paradox::ParamSet $class field.
- properties
 (character())
 Set of properties of the tuner. Must be a subset of mlr_reflections$tuner_properties.
- packages
 (character())
 Set of required packages. Note that these packages will be loaded via requireNamespace(), and are not attached.
- label
 (character(1))
 Label for this object. Can be used in tables, plots and text output instead of the ID.
- man
 (character(1))
 String in the format [pkg]::[topic] pointing to a manual page for this object. The referenced help package can be opened via method $help().
Method optimize()
Performs the tuning on a TuningInstanceBatchSingleCrit or TuningInstanceBatchMultiCrit until termination. The single evaluations will be written into the ArchiveBatchTuning that resides in the TuningInstanceBatchSingleCrit/TuningInstanceBatchMultiCrit. The result will be written into the instance object.
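For example, assuming the hypothetical TunerBatchMySearch sketch from the Details section, a tuner is run like this (task, learner and termination settings are illustrative):

library(mlr3)
library(mlr3tuning)

instance = ti(
  task = tsk("iris"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10)
)

tuner = TunerBatchMySearch$new()
tuner$optimize(instance)
instance$result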
