Conceptually I have a two-stage pipeline: the first stage generates a set of flags ("hyper-hyper parameters"), and in the second stage I want to combine those with a set of hyperparameters to optimize. The challenge is that, since they're created dynamically, I can't know ahead of time how many there are.
I can do it manually like this:

guild run train x='[1,2,3]' y='[1,2,3]' @bigbatch.csv
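where bigbatch.csv is an ordinary trials file whose header row names the flags and each following row is one trial; the flag names below are just placeholders:

a,b
0.1,10
0.2,20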
What I would like to do is have the train step use a bigbatch.csv that is generated by the upstream pipeline step.
I've attached what I think it should look like at the guild.yml level:
train:
  description: Sample training script
  flags-import: all
  requires:
    - operation: bigbatch

bigbatch:
  description: make file bigbatch.csv
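For context, the bigbatch operation is just a small script that writes the trials file. A rough sketch of what it does (flag names and values are made up for illustration):

# bigbatch.py -- sketch of the upstream step
import csv

# The first stage decides which flags exist and which values to sweep;
# hard-coded here only for illustration.
flag_names = ["x", "y"]
trials = [[1, 10], [2, 20], [3, 30]]

with open("bigbatch.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(flag_names)   # header row = flag names
    writer.writerows(trials)      # one row per trial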
This gives me a symlink to the correct file, named bigbatch.csv, in the train run directory after the train operation runs. However, when I use the "@" batch notation, bigbatch.csv is taken from my cwd instead. Is there any way to reference batch files in the guild.yml?