
Subclass for non-linear optimization (NLopt). Calls nloptr::nloptr() from package nloptr.

Source

Johnson, S G (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.

Details

The termination conditions stopval, maxtime and maxeval of nloptr::nloptr() are deactivated and replaced by the bbotk::Terminator subclasses. The x and function value tolerance termination conditions (xtol_rel = 10^-4, xtol_abs = rep(0.0, length(x0)), ftol_rel = 0.0 and ftol_abs = 0.0) are still available and implemented with their package defaults. To deactivate these conditions, set them to -1.
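
For instance, termination can be delegated entirely to a Terminator by deactivating the tolerance conditions (a minimal sketch; the algorithm and evaluation budget are illustrative):

tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = -1,  # deactivate x tolerance
  ftol_rel = -1   # deactivate function value tolerance
)

# stopping is then controlled by the Terminator, e.g. a fixed evaluation budget
terminator = trm("evals", n_evals = 50)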

Dictionary

This Tuner can be instantiated with the associated sugar function tnr():

tnr("nloptr")

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
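
For example, the verbosity can be reduced to warnings and errors (a minimal sketch):

lgr::get_logger("bbotk")$set_threshold("warn")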

Optimizer

This Tuner is based on bbotk::OptimizerNLoptr which can be applied on any black box optimization problem. See also the documentation of bbotk.
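
The underlying optimizer can also be constructed directly via bbotk for plain black box optimization (a minimal sketch; the algorithm choice is illustrative):

library(bbotk)

optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")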

Parameters

algorithm

character(1)

eval_g_ineq

function()

xtol_rel

numeric(1)

xtol_abs

numeric(1)

ftol_rel

numeric(1)

ftol_abs

numeric(1)

start_values

character(1)
Create random start values or start from the center of the search space? In the latter case, the center of the parameters is computed before a trafo is applied.

For the meaning of the control parameters, see nloptr::nloptr() and nloptr::nloptr.print.options().
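
Control parameters are set as hyperparameters of the tuner, for example (a minimal sketch; the values are illustrative and assume start_values accepts "center"):

tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_NELDERMEAD",
  start_values = "center",
  xtol_rel = 1e-4
)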

Resources

There are several sections about hyperparameter optimization in the mlr3book.

The gallery features a collection of case studies and demos about optimization.

  • Use the Hyperband optimizer with different budget parameters.

Progress Bars

$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
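
A minimal sketch (assuming a tuner and a tuning instance have already been created):

progressr::handlers("progress")
progressr::with_progress(
  tuner$optimize(instance)
)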

Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerNLoptr

Methods


Method new()

Creates a new instance of this R6 class.

Usage

TunerNLoptr$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

TunerNLoptr$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Hyperparameter Optimization
# \donttest{

library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)

# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce")
)

# best performing hyperparameter configuration
instance$result
#>          cp learner_param_vals  x_domain classif.ce
#>       <num>             <list>    <list>      <num>
#> 1: -4.91774          <list[2]> <list[1]> 0.06956522

# all evaluated hyperparameter configurations
as.data.table(instance$archive)
#>            cp classif.ce x_domain_cp runtime_learners           timestamp
#>         <num>      <num>       <num>            <num>              <POSc>
#>  1: -4.917740 0.06956522 0.007315645            0.005 2024-03-06 08:51:30
#>  2: -4.917740 0.06956522 0.007315645            0.006 2024-03-06 08:51:30
#>  3: -4.917740 0.06956522 0.007315645            0.006 2024-03-06 08:51:30
#>  4: -3.190801 0.07826087 0.041138897            0.007 2024-03-06 08:51:30
#>  5: -6.644679 0.06956522 0.001300926            0.006 2024-03-06 08:51:30
#>  6: -5.781209 0.06956522 0.003084982            0.006 2024-03-06 08:51:30
#>  7: -5.349475 0.06956522 0.004750646            0.006 2024-03-06 08:51:30
#>  8: -5.133607 0.06956522 0.005895256            0.006 2024-03-06 08:51:31
#>  9: -5.025674 0.06956522 0.006567161            0.006 2024-03-06 08:51:31
#> 10: -4.971707 0.06956522 0.006931307            0.006 2024-03-06 08:51:31
#> 11: -4.874567 0.06956522 0.007638404            0.006 2024-03-06 08:51:31
#> 12: -4.948990 0.06956522 0.007090567            0.024 2024-03-06 08:51:31
#> 13: -4.930341 0.06956522 0.007224042            0.006 2024-03-06 08:51:31
#> 14: -4.928246 0.06956522 0.007239192            0.006 2024-03-06 08:51:31
#> 15: -4.916013 0.06956522 0.007328290            0.005 2024-03-06 08:51:31
#> 16: -4.917913 0.06956522 0.007314382            0.005 2024-03-06 08:51:31
#> 17: -4.917740 0.06956522 0.007315645            0.005 2024-03-06 08:51:31
#>     batch_nr warnings errors  resample_result
#>        <int>    <int>  <int>           <list>
#>  1:        1        0      0 <ResampleResult>
#>  2:        2        0      0 <ResampleResult>
#>  3:        3        0      0 <ResampleResult>
#>  4:        4        0      0 <ResampleResult>
#>  5:        5        0      0 <ResampleResult>
#>  6:        6        0      0 <ResampleResult>
#>  7:        7        0      0 <ResampleResult>
#>  8:        8        0      0 <ResampleResult>
#>  9:        9        0      0 <ResampleResult>
#> 10:       10        0      0 <ResampleResult>
#> 11:       11        0      0 <ResampleResult>
#> 12:       12        0      0 <ResampleResult>
#> 13:       13        0      0 <ResampleResult>
#> 14:       14        0      0 <ResampleResult>
#> 15:       15        0      0 <ResampleResult>
#> 16:       16        0      0 <ResampleResult>
#> 17:       17        0      0 <ResampleResult>

# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))
# }