TunerNLoptr
Class that implements non-linear optimization. Calls nloptr::nloptr() from package nloptr.
Source
Johnson, S G (2020). "The NLopt nonlinear-optimization package." https://github.com/stevengj/nlopt.
Details
The termination conditions stopval, maxtime and maxeval of nloptr::nloptr() are deactivated and replaced by the bbotk::Terminator subclasses. The x and function value tolerance termination conditions (xtol_rel = 10^-4, xtol_abs = rep(0.0, length(x0)), ftol_rel = 0.0 and ftol_abs = 0.0) are still available and implemented with their package defaults. To deactivate these conditions, set them to -1.
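For example, the tolerance conditions can be deactivated when the tuner is constructed, so that termination is controlled solely by the bbotk::Terminator (a minimal sketch; the algorithm choice is illustrative):
library(mlr3tuning)
# set the tolerance conditions to -1 to deactivate them
tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = -1,
  ftol_rel = -1,
  ftol_abs = -1
)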
Dictionary
This Tuner can be instantiated via the dictionary mlr_tuners or with the associated sugar function tnr():
TunerNLoptr$new()
mlr_tuners$get("nloptr")
tnr("nloptr")
Logging
All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
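For example, to silence informational output during tuning:
# log only warnings and errors from the optimization loop
lgr::get_logger("bbotk")$set_threshold("warn")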
Optimizer
This Tuner is based on bbotk::OptimizerNLoptr which can be applied on any black box optimization problem. See also the documentation of bbotk.
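The underlying optimizer can also be used on its own for arbitrary black box problems (a minimal sketch; the algorithm choice is illustrative):
library(bbotk)
# the same NLopt-based optimizer, applicable to any bbotk::OptimInstance
optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")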
Parameters
algorithm (character(1))
eval_g_ineq (function())
xtol_rel (numeric(1))
xtol_abs (numeric(1))
ftol_rel (numeric(1))
ftol_abs (numeric(1))
start_values (character(1))
Create random start values or base them on the center of the search space? In the latter case, it is the center of the parameters before a trafo is applied.
For the meaning of the control parameters, see nloptr::nloptr() and nloptr::nloptr.print.options().
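A sketch of setting these control parameters when the tuner is constructed (the values shown are purely illustrative):
# random start values and a tighter relative x tolerance
tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  start_values = "random",
  xtol_rel = 1e-6
)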
Progress Bars
$optimize()
supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function in
progressr::with_progress()
to enable them. We recommend to use package
progress as backend; enable with progressr::handlers("progress")
.
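For example (a minimal sketch; tuner and instance are assumed to be set up as in the Examples section):
library(progressr)
# use the progress package as backend
handlers("progress")
# display a progress bar while the tuner runs
with_progress(tuner$optimize(instance))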
See also
Package mlr3hyperband for hyperband tuning.
Other Tuner:
mlr_tuners_cmaes,
mlr_tuners_design_points,
mlr_tuners_gensa,
mlr_tuners_grid_search,
mlr_tuners_irace,
mlr_tuners_random_search,
mlr_tuners
Super classes
mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerNLoptr
Examples
if (FALSE) {
# retrieve task
task = tsk("pima")
# load learner and set search space
learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE))
# hyperparameter tuning on the Pima Indians diabetes data set
instance = tune(
method = "nloptr",
task = task,
learner = learner,
resampling = rsmp("holdout"),
measure = msr("classif.ce"),
algorithm = "NLOPT_LN_BOBYQA"
)
# best performing hyperparameter configuration
instance$result
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)
}