So I generate a bunch of results across multiple runs of the same model (I like to keep them in each run's subdirectory because it helps me with output data versioning). However, when it comes time to analyse them, I always have to write an extra bit of code to copy the files out of the selected runs, so I'm looking for a more elegant way to do this.
I also tried `guild export`, but it copies either all resources or none, and I still need to manually collect the files from the exported run subdirectories into a single directory. Furthermore, `guild export` of a pipeline only copies the source code, leaving behind the results generated in its steps.
So I used to do something like:

```shell
for runid in ID1 ID2..
do
    mkdir -p /somewhere/else/${runid}
    cp -rv $(guild open ${runid} --cmd='echo' --path=relative/output/dir)* /somewhere/else/${runid}
done
```
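In the meantime, the loop above can be wrapped in a small shell function so it doesn't have to be rewritten for each analysis. This is only a sketch of a hypothetical helper (`guild_cp` is not a real Guild command); it relies on the same `guild open --cmd='echo'` trick as above to resolve each run's output path, and it copies the *contents* of that path into a per-run subdirectory of the destination:

```shell
# Hypothetical helper, not part of Guild: collect a given relative path
# from several runs into per-run subdirectories under a destination.
guild_cp() {
    local dest=$1 path=$2 runid src
    shift 2
    for runid in "$@"; do
        # Resolve the absolute path of $path inside the run, using
        # `guild open --cmd='echo'` to print the path instead of opening it.
        src=$(guild open "$runid" --cmd='echo' --path="$path")
        mkdir -p "$dest/$runid"
        # Copy the resolved directory's contents into the per-run subdir.
        cp -rv "$src"/* "$dest/$runid"/
    done
}
```

which would be called as, e.g., `guild_cp /somewhere/else relative/output/dir ID1 ID2`. It's still the same workaround, just reusable.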
I wonder if there's a more elegant way to do this, which could involve something like `guild cp RUNID --path=relative/paths LOCATION`. Using `guild open` also feels a little bit odd in this scenario.