Experiment tracking with reinforcement learning

I was wondering if anybody is using Guild for tracking reinforcement learning experiments. It's a bit of a different use case because instead of training data you have a training environment. This environment gets updated regularly, and if you are running many quick training runs it becomes very difficult to keep track of which environments were run with which neural nets at what time.


In this case, how is an environment defined? Are you able to serialize an environment as a file artifact, and then reuse it (deserialize it) across subsequent experiments?
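As a sketch of what that serialization might look like (assuming the environment object is picklable — the helper names and the `env.pkl` file name here are illustrative, not part of Guild's API):

```python
import pickle

def save_env(env, path="env.pkl"):
    """Serialize the environment so a run can store it as a file artifact."""
    with open(path, "wb") as f:
        pickle.dump(env, f)

def load_env(path="env.pkl"):
    """Deserialize an environment saved by a previous run."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

A training script would call `save_env` at the start and end of a run, so both snapshots land in the run directory alongside the other experiment files.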

In general, you want an experiment to contain the files that were used and generated for a run. That might, for example, include the initial environment (saved as one or more files) and the environment after the experiment (saved as a different set of files). Any other experiment-related files would also be stored in that run.

Subsequent runs would then use previous environments as inputs.
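One way to wire that up is with an operation dependency in `guild.yml` — a minimal sketch, assuming the environment is saved as `env.pkl`; the operation names, module names, and file name are hypothetical:

```yaml
# Hypothetical guild.yml sketch - adjust names to your project.
prepare-env:
  main: prepare_env         # script that builds and serializes the environment

train:
  main: train               # script that loads env.pkl and trains
  requires:
    - operation: prepare-env  # resolve env.pkl from a previous prepare-env run
      select: env.pkl
```

With a dependency like this, Guild links the selected file from the earlier run into the new run's directory, so each training run records exactly which environment snapshot it started from.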

In this way, the state that you’re interested in capturing is always present in any given run.
