How to use Tune (Ray) with Guild

Hi!

So I’ve spoken with @garrett about using Tune with Guild before but didn’t get a chance to follow up. In their message to me on the now-deprecated Slack channel, they said:

Regarding Tune - awesome library!
Unfortunately, Guild does not (currently) make this super easy for you! But it is quite possible to integrate any hyperparameter tuning algorithm into Guild by creating an operation in your project. Guild refers to these as batch operations - or optimizers.
I’ll refer to it as an optimizer here.
To use an optimizer to repeatedly run an operation, use these semantics for the run command:
$ guild run <name of base op> --optimizer <name of optimizer op>
You can provide flags to the optimizer op using -Fo arguments.
The trick here is to create this optimizer op that uses Tune.
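To make that concrete, here is a hedged sketch of what such an optimizer op’s driver script might look like. It does plain random search rather than Tune, the op name (`train`) and flag name (`lr`) are placeholders, and the only Guild interface it assumes is the `guild run` CLI:

```python
"""Hypothetical optimizer op: random search driving `guild run`."""
import random
import subprocess

def sample_flags(rng):
    # Sample one trial's flag values (the range is illustrative).
    return {"lr": 10 ** rng.uniform(-5, -1)}

def flag_args(flags):
    # Render a flags dict as `guild run` NAME=VALUE arguments.
    return [f"{name}={value}" for name, value in sorted(flags.items())]

def run_trial(flags):
    # -y skips the confirmation prompt so trials run unattended.
    subprocess.run(["guild", "run", "train", "-y", *flag_args(flags)],
                   check=True)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(10):
        run_trial(sample_flags(rng))
```

Swapping the random sampler for Tune (or any other search library) is then a local change to how flag values are proposed.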

I was trying to figure out how to do this but it’s not clear to me. How would you start going about it? I was hoping to leverage Tune’s state-of-the-art hyperparameter tuning algorithms and also its ability to parallelize trials to speed up the whole process.

There’s been a bit of needle moving on this front. There’s an example that uses Hyperopt TPE for sequential optimization:

Using Tune or another library (e.g. the excellent Optuna lib) should be a matter of following the pattern used by this example.
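The pattern generalizes: the optimizer op asks the search library for flag values, launches a run of the base op, and reports the resulting scalar back to the library. A rough Optuna-flavored sketch (the op name, flag name, and the scalar-reading step are all assumptions; in practice you’d read the run’s scalars via Guild or a file your script controls):

```python
import json
import subprocess

def read_scalar(path):
    # Assumption: the trained op writes its final metric to a JSON
    # file, e.g. {"loss": 0.12}.
    with open(path) as f:
        return json.load(f)["loss"]

if __name__ == "__main__":
    import optuna  # assumption: Optuna is installed

    def objective(trial):
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        subprocess.run(["guild", "run", "train", "-y", f"lr={lr}"],
                       check=True)
        return read_scalar("loss.json")  # placeholder path

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)
    print(study.best_params)
```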

Is there any way to ignore certain output scalars? I’m using Tune, but I don’t want to keep anything logged by Tune itself; I just want to copy over the logs for the best-performing model. I can handle the second part, but I don’t know how to do the first.

Yes, I think so. I’ll work up an example for you. Stay tuned!

Here’s an issue doc that provides some sample output scalar configurations that exclude certain patterns.
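The general shape is something like the following guild.yml fragment — treat the exact keys and patterns as assumptions and defer to the linked doc. Listing explicit capture patterns means only the named scalars are logged, which effectively excludes everything else Tune prints:

```yaml
train:
  main: train  # placeholder module name
  output-scalars:
    loss: 'loss: (\value)'
    step: 'step (\step)'
```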

The other approach (the one I’d be tempted to take, provided I have control over the logging code) is to disable output scalars altogether and use a TF event logging library to explicitly write what you want.
