Overview
An optimizer is a Guild operation that runs a batch. A batch generates one or more trial runs, or trials. An optimizer can suggest flag values that minimize or maximize an objective.
Below is a list of supported optimizers.
gp | Sequential optimizer using Gaussian processes.
forest | Sequential optimizer using decision trees.
gbrt | Sequential optimizer using gradient boosted regression trees.
random | Batch processor using randomly selected values.
Use the default optimizer for an operation by specifying the --optimize option with guild run. The default optimizer can be defined for an operation using the optimizers attribute, as sketched below. Guild uses the gp optimizer if one is not otherwise defined for an operation.
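The following guild.yml sketch illustrates one way this might look. It assumes a hypothetical train operation and assumes that the optimizers attribute maps optimizer names to their flag settings; check the Guild File reference for the exact schema and for values appropriate to your project.

train:
  description: Train the model
  optimizers:
    gp:
      random-starts: 5
      xi: 0.1

With a configuration like this, guild run train --optimize would use gp with the flag values above.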
Specify a named optimizer with the --optimizer option to guild run. A name may be one of the optimizers below or the name of an optimizer defined for the operation.
Optimizer flags are set using --opt-flag or -Fo. Optimizer flags are specified like other flags using the format NAME=VALUE.
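For example, to run the gp optimizer and override one of its flags (the flag value here is illustrative):

guild run train --optimizer gp -Fo random-starts=5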
To run the default optimizer for train:
guild run train --optimize
To use the forest optimizer:
guild run train --optimizer forest
For more examples, see Guild File Cheatsheet.
gp
Bayesian optimizer using Gaussian processes.
Refer to skopt API documentation for details on this algorithm and its flags.
Aliases: gaussian, bayesian
gp Flags
acq-func
Function to minimize over the Gaussian prior (default is gp_hedge)
Choices:

LCB | Lower confidence bound
EI | Negative expected improvement
PI | Negative probability of improvement
gp_hedge | Probabilistically use LCB, EI, or PI at every iteration
EIps | Negative expected improvement per second
PIps | Negative probability of improvement per second
kappa
Degree to which variance in the predicted values is taken into account (default is 1.96)
noise
Level of noise associated with the objective (default is gaussian). Use gaussian if the objective returns noisy observations, otherwise specify the expected variance of the noise.
random-starts
Number of trials using random values before optimizing (default is 3)
xi
Improvement to seek over the previous best values (default is 0.05)
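For example, the gp flags above can be set with -Fo when running the default optimizer (values are illustrative):

guild run train --optimize -Fo acq-func=EI -Fo kappa=1.2 -Fo xi=0.1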
forest
Sequential optimization using decision trees. Refer to skopt API documentation for details on this algorithm and its flags.
forest Flags
kappa
Degree to which variance in the predicted values is taken into account (default is 1.96)
random-starts
Number of trials using random values before optimizing (default is 3)
xi
Improvement to seek over the previous best values (default is 0.05)
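For example, to run the forest optimizer with adjusted flags (values are illustrative):

guild run train --optimizer forest -Fo random-starts=10 -Fo kappa=1.5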
gbrt
Sequential optimization using gradient boosted regression trees.
Refer to skopt API documentation for details on this algorithm and its flags.
gbrt Flags
kappa
Degree to which variance in the predicted values is taken into account (default is 1.96)
random-starts
Number of trials using random values before optimizing (default is 3)
xi
Improvement to seek over the previous best values (default is 0.05)
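As with the other sequential optimizers, gbrt flags are set with -Fo. This example also assumes the --max-trials option to limit the number of generated trials (values are illustrative):

guild run train --optimizer gbrt -Fo xi=0.02 --max-trials 30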
random
Batch processor supporting random flag value generation.
Values are selected from the search space distribution specified for each flag value.
This optimizer does not attempt to optimize an objective.
The random optimizer does not support any flags.
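For example, a random search might sample a flag from a log-uniform search space. This sketch assumes a hypothetical lr flag on train, Guild's loguniform flag value function, and the --max-trials option to cap the number of trials (values are illustrative):

guild run train lr=loguniform[1e-5:1e-1] --optimizer random --max-trials 20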