Model complexity selection

I believe you can investigate the bias-variance trade-off by looking at the training curves. Your model_da should have a field .history that holds a dictionary of training metrics per epoch. It definitely contains the reconstruction error for the training set; I can’t remember how to make training also record the reconstruction error for a held-out test set. @adamgayoso, is this documented somewhere?
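
Something along these lines should pull the curves out; this is a minimal sketch assuming a reasonably recent scvi-tools version, where (if I remember correctly) passing train_size < 1.0 and check_val_every_n_epoch to train() makes it record validation metrics as well. Do check model_da.history.keys() for the exact metric names in your version:

```python
import matplotlib.pyplot as plt

# Re-train with a held-out split so validation metrics get recorded too;
# train_size / check_val_every_n_epoch are what I think the arguments are called.
model_da.train(max_epochs=400, train_size=0.9, check_val_every_n_epoch=1)

history = model_da.history  # dict of per-epoch metric DataFrames
print(history.keys())       # e.g. 'reconstruction_loss_train', 'reconstruction_loss_validation', ...

fig, ax = plt.subplots()
ax.plot(history["reconstruction_loss_train"], label="train")
ax.plot(history["reconstruction_loss_validation"], label="validation")
ax.set_xlabel("epoch")
ax.set_ylabel("reconstruction loss")
ax.legend()
plt.show()
```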

When you have these curves for the alternative models you can compare them. Curves with higher reconstruction error on both the training and test sets indicate more bias; curves where the test error climbs while the training error keeps improving indicate more variance, i.e. overfitting. (Caveat: I always get the concepts of bias and variance for models mixed up, I’m hoping I’m getting it right now.) See the sketch below for one way to overlay the curves.
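
To make the comparison concrete, here is a sketch of how I would overlay the validation curves of a few alternative models. It assumes an SCVI model, that adata has already been set up with SCVI.setup_anndata, and it uses n_latent as the stand-in for model complexity; swap in whatever hyperparameters you are actually varying:

```python
import scvi
import matplotlib.pyplot as plt

# Train one model per complexity setting and keep them around.
models = {}
for n_latent in (5, 10, 30):
    m = scvi.model.SCVI(adata, n_latent=n_latent)
    m.train(max_epochs=400, train_size=0.9, check_val_every_n_epoch=1)
    models[n_latent] = m

# Overlay the held-out reconstruction error per epoch.
fig, ax = plt.subplots()
for n_latent, m in models.items():
    ax.plot(m.history["reconstruction_loss_validation"], label=f"n_latent={n_latent}")
ax.set_xlabel("epoch")
ax.set_ylabel("validation reconstruction loss")
ax.legend()
plt.show()
```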

Now, how to quantify the complexity of the model? That is hard, I think… You could count all the parameters, but there are also various forms of regularization, both on the neural networks and on the latent variables (and other parameters), and I’m not sure how that factors into the definition of model complexity.
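
If you just want a rough proxy, counting the trainable weights of the underlying torch module is easy. This assumes the model exposes it as model_da.module (a torch.nn.Module), which I believe recent scvi-tools versions do, and it of course ignores regularization entirely:

```python
# Rough complexity proxy: number of trainable parameters in the torch module.
n_params = sum(p.numel() for p in model_da.module.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params:,}")
```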

/Valentine