Parameters in training model for integrating datasets with scVI in R

Thanks for developing this tool!!

I was trying to use scVI to integrate my datasets. Following the tutorial (Integrating datasets with scVI in R — scvi-tools), I was able to run the script successfully without changing any of the model/training parameters. My next step was to change some of those parameters, but I ran into errors. Could you please help me resolve these issues? Thanks!

============

model <- scvi$model$SCVI(adata_5_in_1, n_latent = 5)

Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:

  • (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
  • (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)

reticulate::py_last_error()

── Python Exception Message ──────────────────────────────────────────────────────────────────────────────────────
Traceback (most recent call last):
File "/home/f06b22037/SSD2/utility/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/model/_scvi.py", line 162, in __init__
self.module = self._module_cls(
^^^^^^^^^^^^^^^^^
File "/home/f06b22037/SSD2/utility/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/module/_vae.py", line 235, in __init__
self.z_encoder = Encoder(
^^^^^^^^
File "/home/f06b22037/SSD2/utility/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/nn/_base_components.py", line 260, in __init__
self.mean_encoder = nn.Linear(n_hidden, n_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/f06b22037/SSD2/utility/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/nn/modules/linear.py", line 106, in __init__
torch.empty((out_features, in_features), **factory_kwargs)
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:

  • (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
  • (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)

── R Traceback ───────────────────────────────────────────────────────────────────────────────────────────────────

  1. └─scvi$model$SCVI(adata_5_in_1, n_latent = 5)
  2. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
    See reticulate::py_last_error()$r_trace$full_call for more details.

Hi, I assume reticulate has trouble converting 5 to a Python integer. Can you try:

n_latent_value <- as.integer(5)
model <- scvi$model$SCVI(adata_5_in_1, n_latent = n_latent_value)
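Equivalently (this is plain R behaviour rather than anything scvi-specific), the integer literal suffix L should also work:

# 5L is an R integer literal, so reticulate passes a Python int instead of a float
model <- scvi$model$SCVI(adata_5_in_1, n_latent = 5L)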

Hi,

n_latent_value <- as.integer(5)
model <- scvi$model$SCVI$create(adata_5_in_1, n_latent = n_latent_value)

I tried but still have errors.

Error in py_get_attr(x, name) :
AttributeError: type object 'SCVI' has no attribute 'create'
Run reticulate::py_last_error() for details.

reticulate::py_last_error()

── Python Exception Message ──────────────────────────────────────────────────────────────────────────────────────
AttributeError: type object 'SCVI' has no attribute 'create'

── R Traceback ───────────────────────────────────────────────────────────────────────────────────────────────────

  1. ├─scvi$model$SCVI$create
  2. └─reticulate:::$.python.builtin.object(scvi$model$SCVI, create)
  3. └─reticulate:::py_get_attr_or_item(x, name, TRUE)
  4. └─reticulate::py_get_attr(x, name)
    

See reticulate::py_last_error()$r_trace$full_call for more details.


reticulate::py_last_error()$r_trace$full_call
[[1]]
scvi$model$SCVI$create

[[2]]
$.python.builtin.object(scvi$model$SCVI, create)

[[3]]
py_get_attr_or_item(x, name, TRUE)

[[4]]
py_get_attr(x, name)

See edit above. Sorry for the typo.

It works!! Thank you!!

Could you please share some suggestions on which parameters we can adjust to improve integration performance? I am planning to try n_hidden, n_layers, and n_latent.

Thanks!!

Hi,
Practically all model and training parameters are open to tuning and can affect integration performance. The choice of batch key is, of course, also a crucial factor. You can use our autotune functionality to search for an optimal set of parameters; see our documentation on the subject.
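For example, here is a rough sketch of how a few of those parameters could be passed from R; the values are placeholders for illustration, and the integers use the conversion discussed above:

# Hypothetical parameter values, for illustration only
model <- scvi$model$SCVI(
  adata_5_in_1,
  n_hidden = 256L,
  n_layers = 2L,
  n_latent = 30L,
  gene_likelihood = "nb"
)
model$train(max_epochs = 200L)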

Hi,

I would like to use the GPU to speed up training, but I got some errors. They seem to come from PyTorch and CUDA, so I tried re-installing PyTorch, but it still failed.
Do you have any idea what is going on? Thanks!!!

========

Sys.setenv(CUDA_VISIBLE_DEVICES = "1")
torch$cuda$is_available()
[1] TRUE
scvi$model$SCVI$setup_anndata(adata_5_in_1, batch_key = "batch")
model <- scvi$model$SCVI(adata_5_in_1)
model$train(accelerator = "gpu")
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "…/aten/src/ATen/cuda/CUDAContext.cpp":49, please report a bug to PyTorch. device=1, num_gpus=

<…truncated…> 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 264, in
_lazy_call(_check_capability)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 261, in _lazy_call
_queued_calls.append((callable, traceback.format_stack()))

Run reticulate::py_last_error() for details.

── Python Exception Message ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Traceback (most recent call last):
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 332, in _lazy_init
queued_call()
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 200, in _check_capability
capability = get_device_capability(d)
^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 509, in get_device_capability
prop = get_device_properties(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 527, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at “…/aten/src/ATen/cuda/CUDAContext.cpp”:49, please report a bug to PyTorch. device=1, num_gpus=

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/model/base/_training_mixin.py”, line 161, in train
return runner()
^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/train/_trainrunner.py”, line 96, in call
self.trainer.fit(self.training_plan, self.data_splitter)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/scvi/train/_trainer.py”, line 210, in fit
super().fit(*args, **kwargs)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/trainer/trainer.py”, line 539, in fit
call._call_and_handle_interrupt(
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/trainer/call.py”, line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/trainer/trainer.py”, line 575, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/trainer/trainer.py”, line 938, in _run
self.strategy.setup_environment()
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/strategies/strategy.py”, line 129, in setup_environment
self.accelerator.setup_device(self.root_device)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/pytorch/accelerators/cuda.py”, line 46, in setup_device
_check_cuda_matmul_precision(device)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/fabric/accelerators/cuda.py”, line 161, in _check_cuda_matmul_precision
if not torch.cuda.is_available() or not _is_ampere_or_later(device):
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/lightning/fabric/accelerators/cuda.py”, line 155, in _is_ampere_or_later
major, _ = torch.cuda.get_device_capability(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 509, in get_device_capability
prop = get_device_properties(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 523, in get_device_properties
_lazy_init() # will define _get_device_properties
^^^^^^^^^^^^
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 338, in _lazy_init
raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at “…/aten/src/ATen/cuda/CUDAContext.cpp”:49, please report a bug to PyTorch. device=1, num_gpus=

CUDA call was originally invoked at:

File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/init.py”, line 39, in
from .io import read_h5ad, read_zarr
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/io.py”, line 7, in
from ._io.h5ad import read_h5ad, write_h5ad
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/_io/h5ad.py”, line 25, in
from …experimental import read_dispatched
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/experimental/init.py”, line 12, in
from .pytorch import AnnLoader
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/experimental/pytorch/init.py”, line 3, in
from ._annloader import AnnLoader
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/anndata/experimental/pytorch/_annloader.py”, line 19, in
import torch
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/init.py”, line 1954, in
_C._initExtension(_manager_path())
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 122, in _find_and_load_hook
return _run_hook(name, _hook)
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 96, in _run_hook
module = hook()
File “/home/guest001/miniconda3/envs/scvi-env/lib/R/library/reticulate/python/rpytools/loader.py”, line 120, in _hook
return find_and_load(name, import)
File “”, line 1360, in _find_and_load
File “”, line 1331, in _find_and_load_unlocked
File “”, line 935, in _load_unlocked
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 264, in
_lazy_call(_check_capability)
File “/home/guest001/miniconda3/envs/scvi-env/lib/python3.12/site-packages/torch/cuda/init.py”, line 261, in _lazy_call
_queued_calls.append((callable, traceback.format_stack()))

── R Traceback ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

  1. └─model$train(accelerator = “gpu”)
  2. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
    See reticulate::py_last_error()$r_trace$full_call for more details.

=============
Here is my GPU information:

Wed Mar 5 15:50:35 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4      |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC  |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M.  |
|                                         |                        |               MIG M.  |
|=========================================+========================+======================|
|   0  Quadro P4000                   Off |   00000000:17:00.0 Off |                  N/A  |
| 85%   85C    P0             66W / 105W  |    7461MiB /  8192MiB  |    100%      Default  |
|                                         |                        |                  N/A  |
+-----------------------------------------+------------------------+----------------------+
|   1  Quadro P1000                   Off |   00000000:73:00.0 Off |                  N/A  |
| 34%   29C    P8              N/A /  N/A |      61MiB /  4096MiB  |      0%      Default  |
|                                         |                        |                  N/A  |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                               |
|  GPU   GI   CI        PID   Type   Process name                               GPU Memory |
|        ID   ID                                                                Usage      |
|=========================================================================================|
|    0   N/A  N/A      3725      G   /usr/lib/xorg/Xorg                               4MiB |
|    0   N/A  N/A      6867      C   dorado_basecall_server                         910MiB |
|    0   N/A  N/A   1438629      C   guppy_basecaller                              6542MiB |
|    1   N/A  N/A      3725      G   /usr/lib/xorg/Xorg                              12MiB |
|    1   N/A  N/A      5471      G   /usr/bin/gnome-shell                             2MiB |
|    1   N/A  N/A      6867      C   dorado_basecall_server                          40MiB |
+-----------------------------------------------------------------------------------------+
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Jan_15_19:20:09_PST_2025
Cuda compilation tools, release 12.8, V12.8.61
Build cuda_12.8.r12.8/compiler.35404655_0

======== re-installation
pip uninstall torch torchvision torchaudio -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

It is a bit hard for me to tell what is going on from this log.
I do see some anndata-related messages.

My suggestion: create a new clean environment, install scvi-tools v1.3 on a Python 3.12 backend, and make sure you can run code (any simple code, not scvi) on the GPU given the torch + CUDA setup you have in your R env. Then move on to running scvi code, starting from a very basic dummy test.
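For the GPU check, a minimal sanity test from R could look like the sketch below; it assumes the first visible device and that torch has not yet been imported in the session, since CUDA_VISIBLE_DEVICES must be set before CUDA is initialized:

# Set the visible device BEFORE reticulate imports torch for the first time
Sys.setenv(CUDA_VISIBLE_DEVICES = "0")
library(reticulate)
torch <- import("torch")
torch$cuda$is_available()               # should print TRUE
torch$cuda$device_count()               # should print 1 with the setting above
x <- torch$ones(c(3L, 3L))$to("cuda")   # trivial tensor op to confirm the GPU works
x$sum()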

Does the same code train on the CPU? That way we will know there are no data issues.

Yes! The data can be trained on the CPU.

scvi$model$SCVI$setup_anndata(adata_5_in_1, batch_key = "batch")
model <- scvi$model$SCVI(adata_5_in_1)
model$train()

Thanks for the suggestions!!! It works!!


Hi,

I tried running the autotune method you suggested, but I couldn’t get it to work.

=== Here is my script ===

# Import the necessary Python modules

scvi <- import("scvi")
scvi_data <- import("scvi.data")  # Import scvi.data for datasets
torch <- import("torch")
anndata <- import("anndata")
tune <- import("ray.tune")
ray <- import("ray")

adata_5_in_1 <- anndata$read_h5ad("adata_5_in_1.h5ad")

py$adata_5_in_1 <- adata_5_in_1

py_run_string("
from scvi.model import SCVI
from ray import tune

SCVI.setup_anndata(adata_5_in_1, batch_key='batch')

def train_scvi(config):
    model = SCVI(
        adata_5_in_1,
        n_hidden=config['n_hidden'],
        n_layers=config['n_layers'],
        n_latent=config['n_latent'],
        gene_likelihood=config['gene_likelihood']
    )
    model.train(max_epochs=100, plan_kwargs={'lr': config['lr']})
    return {'elbo': model.get_elbo()}

search_space = {
    'n_hidden': tune.choice([64, 128, 256]),
    'n_layers': tune.choice([1, 2, 3, 4]),
    'n_latent': tune.choice([5, 10, 20, 30, 40, 50]),
    'gene_likelihood': tune.choice(['nb', 'zinb']),
    'lr': tune.loguniform(1e-4, 1e-2)
}

analysis = tune.run(
    train_scvi,
    config=search_space,
    num_samples=20,  # Number of trials
    metric='elbo',
    mode='min'
)

best_params = analysis.best_config

ray.shutdown()
")

=== Here are the errors ====

  1. └─reticulate::py_run_string("\nanalysis = tune.run(\n train_scvi,\n config=search_space,\n num_samples=5, # Reduce number of trials\n metric='elbo',\n mode='min'\n)\n")
  2. └─reticulate:::py_run_string_impl(code, local, convert)
    See reticulate::py_last_error()$r_trace$full_call for more details.

=== Then I tried some basic things, and they all worked ===

#ray stop
#ray start --head --num-cpus=50 --num-gpus=1

py_run_string("
import os
import ray

ray.shutdown()

os.environ['CUDA_VISIBLE_DEVICES'] = '1'

ray.init(address='auto')
print(ray.is_initialized())
")

=== Test training a single SCVI model ===

py_run_string("
from scvi.model import SCVI

print(type(adata_5_in_1))

SCVI.setup_anndata(adata_5_in_1, batch_key='batch')

model = SCVI(
    adata_5_in_1,
    n_hidden=128,
    n_layers=2,
    n_latent=20,
    gene_likelihood='nb'
)
model.train(max_epochs=10, plan_kwargs={'lr': 0.001})
print('Model trained successfully')
")

Hi, could you try training directly in Python? That would make the error log much simpler and avoid issues such as wrong type conversion that come with reticulate.
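For reference, a pure-Python version of your single-model test might look roughly like this (same file name and parameters as in your script, so treat it as a sketch):

# Pure-Python sketch of the single-model test, no reticulate involved
import anndata
import scvi

adata = anndata.read_h5ad("adata_5_in_1.h5ad")
scvi.model.SCVI.setup_anndata(adata, batch_key="batch")

model = scvi.model.SCVI(adata, n_hidden=128, n_layers=2, n_latent=20, gene_likelihood="nb")
model.train(max_epochs=10, plan_kwargs={"lr": 0.001})
print("Model trained successfully")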

Hey @joweihsieh,
I agree with trying to run this directly in Python, especially since most of your R script already runs Python code anyway.

In any case, with scvi-tools v1.3 the autotune code is run a bit differently now. For example, the following worked for me in an R environment (just substitute your own adata):

library(reticulate)
options(reticulate.verbose = TRUE)
py_module_available('scvi')
scvi <- import('scvi')
ray <- import('ray')

adata <- scvi$data$synthetic_iid()

py$adata <- adata

py_run_string("
from scvi.model import SCVI
from ray import tune
from scvi.data import synthetic_iid
from scvi.autotune import run_autotune

SCVI.setup_anndata(adata)

experiment = run_autotune(
  SCVI,
  adata,
  metrics=['elbo_validation'],
  mode='min',
  search_space={
    'model_params': {
      'n_hidden': tune.choice([64, 128, 256]),
      'n_layers': tune.choice([1, 2, 3, 4]),
      'n_latent': tune.choice([5, 10, 20, 30, 40, 50]),
      'gene_likelihood': tune.choice(['nb', 'zinb']),
    },
    'train_params': {
      'max_epochs': 100,
      'plan_kwargs': {'lr': tune.loguniform(1e-4, 1e-2)}
    },
  },
  num_samples=20,
  seed=0,
  scheduler='asha',
  searcher='hyperopt',
  ignore_reinit_error=True,
)
")