Parameterizing different model interfaces from guild

This issue isn’t specific to Guild, but I was hoping someone had advice on how to solve this problem.

I want to parameterize, through Guild, which model to test. The models I want to test all have different interfaces exposing different hyperparameters. Here is a minimal example:

import argparse


class ModelA:
    def __init__(self, filter_size):
        self.name = "ModelA"
        self.filter_size = filter_size


class ModelB:
    def __init__(self, num_layers):
        self.name = "ModelB"
        self.num_layers = num_layers


MODELS = {"ModelA": ModelA, "ModelB": ModelB}

if __name__ == "__main__":
    pa = argparse.ArgumentParser()
    pa.add_argument("--model")
    pa.add_argument("--filter_size")
    pa.add_argument("--num_layers")
    args = pa.parse_args()

    model = MODELS[args.model]

    # Init model - but it is unclear which arguments to pass
    model(...)

As you can see, it is not clear how to initialize the chosen model. I have thought of a few possible solutions myself:

  1. A separate script for each model. This quickly becomes infeasible with many models, so I opted not to go this way.

  2. Some magic using the Python inspect module: infer which parameters the model constructor requires and extract them from the Namespace object. This doesn’t work well with Guild flags, as unused flags will still be captured in guild compare (see the sketch after this list).

  3. Only pass the parameters required by the model you chose, so something like python test_model.py --model_name ModelA --filter_size 3, which is the solution I am leaning towards now. I am unsure how that would work with Guild, though, as I don’t think you can make flags optional.

  4. Require all models to accept *args, **kwargs in the constructor, but I think this is a bit hacky.

  5. Factory methods for each model, e.g. ModelA.from_namespace, but this is also a little hacky, I think.
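
For illustration, option 2 could look roughly like this (a sketch of the mechanics only, not a recommendation: all flags are still defined on the parser, which is exactly the guild compare problem mentioned above):

import argparse
import inspect


class ModelA:
    def __init__(self, filter_size):
        self.filter_size = filter_size


class ModelB:
    def __init__(self, num_layers):
        self.num_layers = num_layers


MODELS = {"ModelA": ModelA, "ModelB": ModelB}

if __name__ == "__main__":
    pa = argparse.ArgumentParser()
    pa.add_argument("--model")
    pa.add_argument("--filter_size", type=int)
    pa.add_argument("--num_layers", type=int)
    args = pa.parse_args()

    cls = MODELS[args.model]
    # Read the constructor signature and pull only the parameters it
    # declares out of the parsed Namespace.
    params = inspect.signature(cls.__init__).parameters
    kwargs = {
        name: getattr(args, name)
        for name in params
        if name != "self" and hasattr(args, name)
    }
    model = cls(**kwargs)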

Open to any suggestions here!

EDIT:

I just remembered seeing how pytorch-lightning handles this. I like that approach, so I might just use it. Will Guild be able to capture the flags here?
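
Roughly, each model owns its CLI flags via a static method plus a factory. A sketch of that convention, using ModelA from above (the default value is illustrative):

class ModelA:
    def __init__(self, filter_size):
        self.name = "ModelA"
        self.filter_size = filter_size

    @staticmethod
    def add_model_specific_args(parent_parser):
        # Each model contributes only its own flags to the parser.
        parent_parser.add_argument("--filter_size", type=int, default=3)
        return parent_parser

    @classmethod
    def from_args(cls, args):
        return cls(args.filter_size)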

EDIT2:

Looks like guild doesn’t handle this pattern well:

if __name__ == "__main__":
    pa = argparse.ArgumentParser()
    pa.add_argument("--model")
    pa.add_argument("--filter_size")
    pa.add_argument("--num_layers")

    pa = parse_args()

    temp_args = pa.parse_known_args()[0]

    model = MODELS[temp_args.model_name]
    pa = model.add_model_specific_args(pa)

    args = pa.parse_args()

    model = model.from_args(args)

guild throws me this warning and error:

WARNING: cannot import flags from test.py: .venv/bin/python3: Error while finding module specification for 'guild.plugins.import_argparse_flags_main' (ModuleNotFoundError: No module named 'guild.plugins'; 'guild' is not a package)

and

.venv/bin/python: Error while finding module specification for 'guild.op_main' (AttributeError: module 'guild' has no attribute '__path__')

The errors you’re getting suggest that the environment for the operation is being modified somewhere along the way, e.g. sys.path is being dynamically modified somewhere. Without an example to run, I can’t speculate as to what’s going on there.

The last code example contains pa = parse_args(), which is surprising. I’m guessing that’s a typo, as that line redefines pa.

There’s no reason the PyTorch pattern won’t work with Guild; it’s a straightforward use of Python to construct an arg parser specific to a model type.

Using separate modules, one for each model, is I think the most straightforward path here. The next simplest is to use argparse sub-commands. Guild now supports importing flags based on the args you specify in the main attr, so any sub-commands used in main will be used by Guild when importing flags.
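
To make the sub-command idea concrete, here’s a sketch using the model names from above (the flag defaults are illustrative):

# test_model.py - sketch of the argparse sub-command approach

import argparse

class ModelA:
  def __init__(self, filter_size):
    self.filter_size = filter_size

class ModelB:
  def __init__(self, num_layers):
    self.num_layers = num_layers

if __name__ == "__main__":
  p = argparse.ArgumentParser()
  sub = p.add_subparsers(dest="model", required=True)

  # Each model gets its own sub-command with only its own flags, so a
  # given run never defines flags it doesn't use.
  pa = sub.add_parser("ModelA")
  pa.add_argument("--filter_size", type=int, default=3)

  pb = sub.add_parser("ModelB")
  pb.add_argument("--num_layers", type=int, default=2)

  args = p.parse_args()
  if args.model == "ModelA":
    model = ModelA(args.filter_size)
  else:
    model = ModelB(args.num_layers)

An operation would then name the sub-command in its main attr (e.g. main: test_model ModelA), which per the above is what Guild uses when importing flags.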

Here’s a sample model module, which couples the model def and the CLI support but keeps them clearly separated. (I don’t personally care for the way the PyTorch example couples the argparse support to the model class itself.)

# model_a.py

import argparse

from cli_utils import add_shared_args

class ModelA:
  def __init__(self, a, b, c):
    pass

  def train(self):
    pass

if __name__ == "__main__":
  p = argparse.ArgumentParser()
  add_shared_args(p)
  p.add_argument("--b")
  p.add_argument("--c")
  args = p.parse_args()
  m = ModelA(args.a, args.b, args.c)
  m.train()
This scales easily enough: as you add models, just add another module. Any args shared across models can be added via a util function (sketched below).
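
For completeness, the util might be as small as this (a sketch; --a stands in for whatever flags the models actually share):

# cli_utils.py - sketch of the shared-flags helper used by model_a.py

def add_shared_args(p):
  # Flags common to all models live here; each model module adds its
  # model-specific flags on top.
  p.add_argument("--a", type=int, default=1)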

That said, the approach you’re using above, though I think complicated, should work fine with Guild. The errors you’re seeing there are, I think, unrelated to the argparse approach.

You are right - the errors I saw were unrelated to Guild - it is working now. Thank you!

The idea of using separate modules, one for each model, works fine as long as it is only the model interfaces that differ.

I also face an issue where the pre-processor, model, post-processor, and data set interfaces all differ. That quickly turns into a combinatorial explosion if I create a module for each combination (see the sketch below).
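
To illustrate the concern: with a module per combination the count multiplies, whereas one registry per component keeps it additive. A rough sketch (the Scale pre-processor is hypothetical):

import argparse

class Scale:  # hypothetical pre-processor
    def __init__(self, factor):
        self.factor = factor

class ModelA:
    def __init__(self, filter_size):
        self.filter_size = filter_size

PREPROCESSORS = {"scale": Scale}
MODELS = {"ModelA": ModelA}

if __name__ == "__main__":
    pa = argparse.ArgumentParser()
    # One choice flag per component: the registries grow as N + M
    # entries instead of N * M combination modules.
    pa.add_argument("--preprocessor", default="scale")
    pa.add_argument("--model", default="ModelA")
    pa.add_argument("--factor", type=float, default=1.0)
    pa.add_argument("--filter_size", type=int, default=3)
    args = pa.parse_args()

    pre = PREPROCESSORS[args.preprocessor](args.factor)
    model = MODELS[args.model](args.filter_size)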

I will look into argparse sub-commands - I am not familiar with them.

Either way, thank you for your input!
