Error while running gp optimizer for Hyperparameter optimization: Cannot find objective 'loss'


I am trying to use Guild for hyperparameter optimization with the gp optimizer. The first three runs used random initializations as expected. However, even after the third run, it continues to perform random initializations. Examining the output, I noticed the following message:

INFO: [guild] Random start for optimization (cannot find objective 'loss')

Is this expected? Or am I missing something?

I looked at the scalars using guild runs info and found that all the scalars I am logging with TensorBoard are displayed correctly.

I would appreciate any help in this matter.


By default the optimizers look for a scalar named 'loss' to minimize. If that's what you want, make sure that 'loss' shows up for the generated runs by running guild runs info. If you want to minimize (or maximize) a different scalar, specify its name with --minimize (or --maximize) for the run command.
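For example, a sketch of such a command (the operation name train and the scalar name val_loss are placeholders here; substitute your own operation and logged scalar):

  guild run train --optimizer gp --minimize val_loss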


Thanks. I used --minimize and specified the scalar that I log with TensorBoard and want to minimize as part of the hyperparameter optimization. The error disappeared as a result.

Great! You can specify this scalar in the Guild file so you don’t need to remember to set it each time with the run command.

  objective: <name of scalar you want to minimize>

If you want to maximize, use a negative sign ('-') in front of the scalar name.
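As a minimal Guild file sketch (the operation name train and the scalar name val_acc are assumptions; use your own operation and scalar), maximizing val_acc would look like:

  train:
    objective: -val_acc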


Hi @garrett, I have a few follow-up questions for this topic.
I am trying to use this scalar-specifying functionality from a Python script, because my experiment does not have a 'loss' value to minimize, and I have gone through the Python API and couldn't find an example of it being used this way.

So if I am able to specify it in the Guild File that solves one of the two issues!
The bigger issue is where this 'objective: <scalar name>' would be inserted with/alongside the command in a Python script. Does it go inside the run() call, and if so, how is it specified?

I know that for choosing an optimizer we can add the '_optimizer=' argument inside the parentheses.

Thanks in advance.

Specify the scalar that you want to either minimize or maximize using the _minimize or _maximize keyword argument to the run() function. Let’s say you want to maximize the scalar val_acc (e.g. validation accuracy). Use this:

from guild import ipy as guild

def train(lr, epochs):
  for _ in range(epochs):
    val_acc = _train_and_validate(lr)  # Example function
    print("val_acc: %f" % val_acc)

guild.run(
  train,
  lr=guild.loguniform(1e-5, 1e-1),
  epochs=10,              # Example value
  _optimizer="gp",
  _maximize="val_acc",
)