Hi, I have an scVI environment that runs perfectly fine, but whenever I run `vae.train()` I get the error below, and I don't know how to fix it.
I'd really appreciate your help.
```
/Users/sergio/opt/miniconda3/envs/scib/lib/python3.9/site-packages/scvi/model/base/_training_mixin.py:67: UserWarning: max_epochs=115 is less than n_epochs_kl_warmup=400. The max_kl_weight will not be reached during training.
  warnings.warn(
GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/Users/sergio/opt/miniconda3/envs/scib/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:1789: UserWarning: MPS available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='mps', devices=1)`.
  rank_zero_warn(
Epoch 1/115:   0%|          | 0/115 [00:00<?, ?it/s]
```
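If I understand the first warning correctly, the KL weight is ramped up linearly from 0 over `n_epochs_kl_warmup` epochs, so with `max_epochs=115` and `n_epochs_kl_warmup=400` it would only reach about 0.29 before training stops. A quick sketch of my understanding (the linear schedule and the function name here are my assumption, not scvi's actual code):

```python
def kl_weight(epoch, n_epochs_kl_warmup=400, max_kl_weight=1.0):
    """Assumed linear KL-annealing schedule: ramp the weight from 0 to
    max_kl_weight over n_epochs_kl_warmup epochs, then hold it there."""
    return min(max_kl_weight, max_kl_weight * epoch / n_epochs_kl_warmup)

# With max_epochs=115 the weight only reaches ~0.29, which is what the
# UserWarning is pointing out.
print(kl_weight(115))
```

I'm not sure whether this warning is related to the NaNs, but I wanted to mention it in case it matters.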
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In [23], line 1
----> 1 vae.train()

File ~/opt/miniconda3/envs/scib/lib/python3.9/site-packages/scvi/model/base/_training_mixin.py:142, in UnsupervisedTrainingMixin.train(self, max_epochs, use_gpu, train_size, validation_size, batch_size, early_stopping, plan_kwargs, **trainer_kwargs)
    131 trainer_kwargs[es] = (
    132     early_stopping if es not in trainer_kwargs.keys() else trainer_kwargs[es]
    133 )
    134 runner = TrainRunner(
    135     self,
    136     training_plan=training_plan,
    (...)
    140     **trainer_kwargs,
    141 )
--> 142 return runner()

File ~/opt/miniconda3/envs/scib/lib/python3.9/site-packages/scvi/train/_trainrunner.py:81, in TrainRunner.__call__(self)
     78 if hasattr(self.data_splitter, "n_val"):
     79     self.training_plan.n_obs_validation = self.data_splitter.n_val
---> 81 self.trainer.fit(self.training_plan, self.data_splitter)
     82 self._update_history()
     84 # data splitter only gets these attrs after fit

File ~/opt/miniconda3/envs/scib/lib/python3.9/site-packages/scvi/train/_trainer.py:188, in Trainer.fit(self, *args, **kwargs)
...
        [nan, nan, nan,  ..., nan, nan, nan],
        ...,
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan]], grad_fn=<…>)
```