Resume training from model checkpoints

Thanks for the great software! Loving the redesign, and the helpful documentation and tutorials.

Related to this previous topic (Resuming training with scVI), is resuming training now possible with scvi? If I call scvi.model.load and then call model.train() again, does training start from the last state, or does it overwrite it?

It looks like one can pass enable_checkpointing=True to the Trainer and then find the path of the last checkpoint with something like model.trainer.callbacks[-1].best_model_path. But is there then a way to pass ckpt_path so that it is used on subsequent calls to model.train()?


This is possible and will start from the loaded model weights, though the optimizer state is lost.
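To make the distinction concrete, here is a minimal sketch in plain PyTorch (not the scvi API itself, just the underlying mechanics it is built on): restoring a state_dict recovers the trained weights, but a freshly constructed optimizer starts with empty state, so Adam's moment estimates are rebuilt from scratch.

```python
import torch

# Toy module standing in for an scvi model; the point is generic.
torch.manual_seed(0)
model = torch.nn.Linear(4, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# One training step populates Adam's per-parameter momentum buffers.
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
opt.step()
assert len(opt.state) > 0  # optimizer now carries state

# "Save" and "load" only the model weights, as saving/loading a
# state_dict does.
weights = {k: v.clone() for k, v in model.state_dict().items()}
model2 = torch.nn.Linear(4, 2)
model2.load_state_dict(weights)

# The reloaded weights match the trained model exactly...
for p1, p2 in zip(model.parameters(), model2.parameters()):
    assert torch.equal(p1, p2)

# ...but a fresh optimizer has no state: training "resumes" from the
# weights, not from the full training state.
opt2 = torch.optim.Adam(model2.parameters(), lr=1e-2)
assert len(opt2.state) == 0
```

So loading and retraining continues from the learned parameters, but the first steps after resuming behave like the start of a new optimization run.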

I would have to look into this; all kwargs to train get passed to the PyTorch Lightning Trainer init. There is currently no way to pass kwargs to fit.