Is Gaussian mixture VAE implemented in scVI-tools?

Hi,

Is the Gaussian mixture VAE framework (GMVAE), the one used in e.g. scVAE (scVAE: Single-cell variational auto-encoders — scVAE 2.1.4 documentation), also implemented in scvi-tools?

Thanks!

Hey,

No, it is not implemented in the base models there.

If I'm interested in implementing it with scvi-tools, would such a contribution be welcomed?
At a high level, I am thinking of the following:

  1. Create a GMVAE model: create a new model class, probably inheriting from scvi.model.VAEC. The __init__ method would define the GMM prior parameters (means, variances, and mixture weights) as learnable torch.nn.Parameter tensors.
  2. Implement a custom KL divergence: we would need to compute the KL divergence between the encoder's Gaussian posterior and the Gaussian mixture prior. This would replace the standard KL term in the loss function.
  3. API: wrap it in a user-friendly class (e.g., scvi.model.SCVI_GMM) that inherits from the existing API.
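One wrinkle with step 2: the KL divergence between a diagonal Gaussian posterior and a Gaussian mixture prior has no closed form, so it is typically estimated by Monte Carlo with reparameterized samples. Below is a minimal PyTorch sketch of what steps 1 and 2 might look like; the names `GMMPrior` and `mc_kl` are hypothetical, not part of the scvi-tools API.

```python
# Hypothetical sketch (not the scvi-tools API): a learnable mixture-of-Gaussians
# prior plus a Monte Carlo estimate of KL(q(z|x) || p(z)).
import torch
import torch.nn as nn
from torch.distributions import Normal, Categorical, MixtureSameFamily, Independent


class GMMPrior(nn.Module):
    def __init__(self, n_components: int, n_latent: int):
        super().__init__()
        # Learnable mixture weights (as logits), component means, and log-variances.
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.means = nn.Parameter(torch.randn(n_components, n_latent))
        self.log_vars = nn.Parameter(torch.zeros(n_components, n_latent))

    def distribution(self) -> MixtureSameFamily:
        # Each component is a diagonal Gaussian over the latent space.
        components = Independent(Normal(self.means, torch.exp(0.5 * self.log_vars)), 1)
        return MixtureSameFamily(Categorical(logits=self.logits), components)


def mc_kl(qz: Independent, prior: GMMPrior, n_samples: int = 1) -> torch.Tensor:
    # KL(q || p) = E_q[log q(z) - log p(z)], estimated with reparameterized samples
    # so gradients flow to both the encoder and the prior parameters.
    z = qz.rsample((n_samples,))  # shape: (n_samples, batch, n_latent)
    return (qz.log_prob(z) - prior.distribution().log_prob(z)).mean(0)


# Usage with a stand-in posterior (in practice, the encoder would produce qz):
qz = Independent(Normal(torch.zeros(4, 10), torch.ones(4, 10)), 1)
prior = GMMPrior(n_components=3, n_latent=10)
kl = mc_kl(qz, prior, n_samples=8)  # one KL estimate per cell in the batch
```

This estimated KL term would then replace the analytic Gaussian-vs-Gaussian KL in the model's loss function.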

(Mike and I work together, and I just had this random thought about whether it would be meaningful to implement it with scvi-tools, since the current scVAE implementation following that paper is outdated and was written in TensorFlow 1.)

Hi, it is used in sysVI and also in MrVI, where it is called a mixture-of-Gaussians prior. It is quite a general implementation and can easily be plugged into other models (see sysVI).