I am extending the scVI skeleton for my own package. I can train the model without problems, but when I run prediction my data is no longer on the same device as the pre-trained module.

I use the following data-loading code in my prediction method, similar to other prediction-like functions I have seen in the scVI package:
```python
adata = self._validate_anndata(adata)
scdl = self._make_data_loader(
    adata=adata, indices=indices, batch_size=batch_size
)
latent = []  # collect per-batch results here
for tensors in scdl:
    inference_inputs = self.module._get_inference_input(tensors)
    outputs = self.module.inference(**inference_inputs)
```
My `_validate_anndata` is a custom function, while `_make_data_loader` comes from the scVI base model class. I cannot find a way to specify the device in the data loader (to match the device of the model). How should I deal with this?
So if I understand correctly, `self.module` is on the GPU while the data coming out of the loader is on the CPU. Consider adding the following decorator to the module method that receives the data:
```python
from functools import wraps
from typing import Callable

import torch
from torch.nn import Module


def auto_move_data(fn: Callable) -> Callable:
    """Decorator for :class:`~torch.nn.Module` methods to move data to the correct device.

    Input arguments are moved automatically to the correct device.
    It has no effect if applied to a method of an object that is not an
    instance of :class:`~torch.nn.Module` and is typically applied to
    ``__call__`` or ``forward``.
    """

    @wraps(fn)
    def auto_transfer_args(self, *args, **kwargs):
        if not isinstance(self, Module):
            return fn(self, *args, **kwargs)
        # infer the target device from the module's own parameters
        device = next(self.parameters()).device
        args = tuple(a.to(device) if isinstance(a, torch.Tensor) else a for a in args)
        kwargs = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
        return fn(self, *args, **kwargs)

    return auto_transfer_args
```
and applying it as in here:

```python
@auto_move_data
def inference(self, x, batch_index, cont_covs=None, cat_covs=None, n_samples=1):
    ...
```
This essentially moves the data to the correct device (based on the device of the module's parameters) every time the decorated method is called.
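To see the principle in isolation, here is a minimal, self-contained sketch (a toy module and a simplified decorator, not the exact scvi-tools implementation) showing that tensor arguments end up on whatever device the module's parameters live on:

```python
from functools import wraps

import torch
from torch import nn


def move_args_to_module_device(fn):
    """Toy decorator: move tensor arguments to the device of the module's parameters."""
    @wraps(fn)
    def wrapper(self, *args, **kwargs):
        device = next(self.parameters()).device
        args = tuple(a.to(device) if isinstance(a, torch.Tensor) else a for a in args)
        kwargs = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
        return fn(self, *args, **kwargs)
    return wrapper


class ToyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    @move_args_to_module_device
    def inference(self, x):
        # x has already been moved to self.linear's device by the decorator
        return self.linear(x)


module = ToyEncoder()  # on CPU here; the same code works if moved to GPU
out = module.inference(torch.randn(3, 4))
print(tuple(out.shape))  # (3, 2)
```

If `module` were moved to a GPU with `module.cuda()`, the same call would still work, because the decorator transfers the batch before the forward computation runs.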
Thank you. Adding the decorator to the function that caused the issue has resolved it.