(PyTorch) Lightning has moved on to a more sophisticated CLI based on jsonargparse (Configure hyperparameters from the CLI — PyTorch Lightning 2.1.2 documentation), and I’m facing some trouble combining it with Guild.
I was able to “hack” everything together, with the one obstacle that the Lightning CLI expects commands like `python path/to/main.py fit --model.lr=3e-4`, and I see no way to emulate the `fit` subcommand in my guild.yml.
Ideally, Guild could simply import the config.yml that PyTorch Lightning expects (via `flags-import` and `flags-dest`) and transform it into the equivalent command-line arguments.
So given

```yaml
seed_everything: 42
trainer:
  accelerator: cpu
  deterministic: True
  fast_dev_run: True
  ...
```

I can run `python trainer.py fit --config config.yml`
or directly `python trainer.py fit --seed_everything 42 --trainer.accelerator cpu ...`
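The transformation itself is mechanical; here is a minimal sketch (a hypothetical helper, not part of Guild or Lightning) of how a nested config could be flattened into the dotted CLI arguments LightningCLI accepts:

```python
# Hypothetical helper: flatten a nested config dict into dotted CLI arguments,
# mirroring how LightningCLI names nested options (e.g. --trainer.accelerator).
def config_to_args(cfg, prefix=""):
    args = []
    for key, value in cfg.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # recurse, extending the dotted prefix
            args.extend(config_to_args(value, prefix=f"{name}."))
        else:
            args.append(f"--{name}={value}")
    return args

cfg = {"seed_everything": 42, "trainer": {"accelerator": "cpu", "deterministic": True}}
print(config_to_args(cfg))
# → ['--seed_everything=42', '--trainer.accelerator=cpu', '--trainer.deterministic=True']
```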
In Guild I tried the following:

```yaml
test:
  main: trainer
  flags:
    seed_everything:
      default: 42
    trainer.accelerator:
      default: cpu
    ...
```
However, the `fit` subcommand can’t be modeled this way.
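One workaround I’m considering is a thin wrapper module that hard-codes `fit` into the argument list before delegating, so Guild’s `main:` only has to supply flags. A sketch (the module name `trainer_fit.py` and the function `cli_main` are hypothetical, and argparse stands in for LightningCLI here so the snippet is self-contained):

```python
# trainer_fit.py (hypothetical): inject the "fit" subcommand Guild can't
# express, then delegate to the real CLI entry point.
import argparse
import sys


def cli_main(argv):
    # Stand-in for LightningCLI, which dispatches on the first positional
    # argument (fit/validate/test/predict).
    parser = argparse.ArgumentParser()
    sub = parser.add_subparsers(dest="subcommand", required=True)
    fit = sub.add_parser("fit")
    fit.add_argument("--seed_everything", type=int, default=0)
    return parser.parse_args(argv)


if __name__ == "__main__":
    # Guild runs this module with flags only; "fit" is prepended here.
    args = cli_main(["fit"] + sys.argv[1:])
    print(args.subcommand, args.seed_everything)
```

Then `main: trainer_fit` in guild.yml would pass only the flags, and the wrapper supplies the subcommand.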
Any ideas how to do this? I’d love to leverage guild, for example, for running multiple trials.
Cheers,
Alessandro