Not sure if scvi is using my GPU or not

I got the message below and I am not sure whether scvi is using my GPU or not.
Another thing I have in mind: is there a way to get normalised gene expression from the model into my Seurat object?

SCVI Model with the following params: 
n_hidden: 128, n_latent: 20, n_layers: 1, dropout_rate: 0.1, dispersion: gene, 
gene_likelihood: zinb, latent_distribution: normal
Training status: Not Trained
Model's adata is minified?: False
> model$train()
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

Hi, you can check whether model training is using your GPU on the following line:

GPU available: True (cuda), used: True
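
If you want to double-check programmatically from the R session, here is a minimal sketch (assuming you are calling scvi-tools through reticulate, so torch below is the Python PyTorch package that scvi trains on):

library(reticulate)
torch <- import("torch")
torch$cuda$is_available()        # TRUE if PyTorch can see a CUDA GPU
torch$cuda$get_device_name(0L)   # e.g. "Quadro P4000"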

Let me get back to you regarding getting normalized expression values into Seurat objects.

For anyone wondering, you can also run:

nvidia-smi

in the command line, which will show you whether the GPU is running anything.
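
If you prefer to stay inside the R session, the same command can be shelled out from there (a trivial sketch):

system("nvidia-smi")   # prints the same GPU table into the R console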


You can also run: gpustat


Thanks a lot for your help.

So, with model training not running:

Mon May  8 13:36:28 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05    Driver Version: 520.61.05    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P4000        On   | 00000000:2D:00.0  On |                  N/A |
| 53%   58C    P8    12W / 105W |   1319MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1150      G   /usr/lib/xorg/Xorg                334MiB |
|    0   N/A  N/A      1387      G   /usr/bin/gnome-shell               79MiB |
|    0   N/A  N/A     72614      G   ...RendererForSitePerProcess       71MiB |
|    0   N/A  N/A    135703      G   ...mviewer/tv_bin/TeamViewer        5MiB |
|    0   N/A  N/A    181236      C   ...esources/app/bin/rsession      730MiB |
|    0   N/A  N/A    227985      G   ...715383201972569677,131072       91MiB |
+-----------------------------------------------------------------------------+

With model training running:

Mon May  8 13:38:43 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05    Driver Version: 520.61.05    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P4000        On   | 00000000:2D:00.0  On |                  N/A |
| 53%   55C    P0    34W / 105W |   1357MiB /  8192MiB |     18%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1150      G   /usr/lib/xorg/Xorg                348MiB |
|    0   N/A  N/A      1387      G   /usr/bin/gnome-shell              100MiB |
|    0   N/A  N/A     72614      G   ...RendererForSitePerProcess       85MiB |
|    0   N/A  N/A    135703      G   ...mviewer/tv_bin/TeamViewer        5MiB |
|    0   N/A  N/A    181236      C   ...esources/app/bin/rsession      728MiB |
|    0   N/A  N/A    227985      G   ...715383201972569677,131072       83MiB |
+-----------------------------------------------------------------------------+

So apparently the GPU is not used. Is there any fix for that? I mean, more power is running through the GPU, but the GPU memory usage is not going up…
When using Python and scanpy it runs about 3x faster.

scvi does not use much GPU memory. It only uses the memory of the model (~15 MB) and the memory of a minibatch of 128 cells (which should also be in the low MB range).

How fast is the model training? How fast does it train if you pass use_gpu=False, i.e. .train(use_gpu=False)?
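
As a rough comparison, you can time the two runs from R (a sketch, assuming the reticulate-based setup above and that your scvi-tools version still accepts the use_gpu argument):

# time one training run on the GPU and one on the CPU
t_gpu <- system.time(model$train(use_gpu = TRUE))
t_cpu <- system.time(model$train(use_gpu = FALSE))
# note: the second call continues training the same model; for a clean
# comparison, re-create the model before each run
t_gpu["elapsed"]; t_cpu["elapsed"]   # wall-clock seconds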

I tried it without the GPU and it took much longer.
Thanks a lot, I think it is using the GPU after all.

And for normalized gene expression I used this:

Nor <- model$get_normalized_expression(adata)   # pandas DataFrame, cells x genes
Nor1 <- py_to_r(Nor)                            # convert to an R data frame
Nor1 <- t(Nor1)                                 # transpose to genes x cells for Seurat
GEXscvi[["Nor"]] <- CreateAssayObject(counts = Nor1[, colnames(GEXscvi)])
DefaultAssay(GEXscvi) <- "Nor"
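
A small variation, in case it is useful: get_normalized_expression() also takes a library_size argument to control the scaling (e.g. 1e4), and since these are normalized values rather than raw counts they can go into the data slot of the assay instead. A sketch, reusing the object names from the snippet above:

Nor <- model$get_normalized_expression(adata, library_size = 1e4)   # scale each cell to 10,000
Nor1 <- t(py_to_r(Nor))                                             # genes x cells
GEXscvi[["Nor"]] <- CreateAssayObject(data = as.matrix(Nor1[, colnames(GEXscvi)]))
DefaultAssay(GEXscvi) <- "Nor"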