I got the message below and I am not sure whether scvi is actually using my GPU or not.
Another thing I have in mind: is there a way to get the normalised gene expression from the model into my Seurat object?
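For context, this is roughly what I was hoping to do (an untested sketch; `model` is my reticulate handle to the SCVI model and `seu` is my Seurat object, with cells assumed to be in the same order):

```r
# Sketch (untested): pull normalised expression from the trained model
# and attach it to the Seurat object as a new assay.
library(Seurat)

# get_normalized_expression returns a cells x genes data frame
norm_expr <- model$get_normalized_expression(library_size = 1e4)

# Seurat assays expect genes x cells, so transpose first
norm_mat <- t(as.matrix(norm_expr))
seu[["scvi_normalized"]] <- CreateAssayObject(data = norm_mat)
```
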
SCVI Model with the following params:
n_hidden: 128, n_latent: 20, n_layers: 1, dropout_rate: 0.1, dispersion: gene,
gene_likelihood: zinb, latent_distribution: normal
Training status: Not Trained
Model's adata is minified?: False
> model$train()
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Mon May 8 13:36:28 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P4000 On | 00000000:2D:00.0 On | N/A |
| 53% 58C P8 12W / 105W | 1319MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1150 G /usr/lib/xorg/Xorg 334MiB |
| 0 N/A N/A 1387 G /usr/bin/gnome-shell 79MiB |
| 0 N/A N/A 72614 G ...RendererForSitePerProcess 71MiB |
| 0 N/A N/A 135703 G ...mviewer/tv_bin/TeamViewer 5MiB |
| 0 N/A N/A 181236 C ...esources/app/bin/rsession 730MiB |
| 0 N/A N/A 227985 G ...715383201972569677,131072 91MiB |
+-----------------------------------------------------------------------------+
And with model training running:
Mon May 8 13:38:43 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P4000 On | 00000000:2D:00.0 On | N/A |
| 53% 55C P0 34W / 105W | 1357MiB / 8192MiB | 18% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1150 G /usr/lib/xorg/Xorg 348MiB |
| 0 N/A N/A 1387 G /usr/bin/gnome-shell 100MiB |
| 0 N/A N/A 72614 G ...RendererForSitePerProcess 85MiB |
| 0 N/A N/A 135703 G ...mviewer/tv_bin/TeamViewer 5MiB |
| 0 N/A N/A 181236 C ...esources/app/bin/rsession 728MiB |
| 0 N/A N/A 227985 G ...715383201972569677,131072 83MiB |
+-----------------------------------------------------------------------------+
So apparently the GPU is barely used; is there any fix for that? Power draw goes up while training, but GPU memory usage does not. When I run the same thing in Python with scanpy, training is roughly 3x faster.
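For what it's worth, this is how I checked from R (via reticulate) that torch can see the card at all; I assume the `device` property on the model reports where it lives:

```r
# Sketch: confirm from R that the Python torch backend sees the GPU.
library(reticulate)

torch <- import("torch")
torch$cuda$is_available()          # expect TRUE if CUDA is visible
torch$cuda$get_device_name(0L)     # name of GPU 0

model$device                       # device the scvi model is placed on
```
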