Error when concatenating 2 samples

Hi all,

I have 2 AnnData objects, each with n_vars around 36,000. However, when I concat them, the new object has n_vars of only 2,000. Is there anything wrong here? Also, when I run model.train() I get:


INFO:pytorch_lightning.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 1/400:   0%|          | 0/400 [00:00<?, ?it/s]
ValueError                                Traceback (most recent call last)
<ipython-input-139-c72315b99576> in <module>
----> 1 model.train()

46 frames
/usr/local/lib/python3.7/dist-packages/torch/distributions/ in __init__(self, batch_shape, event_shape, validate_args)
     54                 if not valid.all():
     55                     raise ValueError(
---> 56                         f"Expected parameter {param} "
     57                         f"({type(value).__name__} of shape {tuple(value.shape)}) "
     58                         f"of distribution {repr(self)} "

ValueError: Expected parameter loc (Tensor of shape (128, 10)) of distribution Normal(loc: torch.Size([128, 10]), scale: torch.Size([128, 10])) to satisfy the constraint Real(), but found invalid values:
tensor([[nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan]], device='cuda:0',

Would you have any advice in this case? Thank you so much!

I would double-check what’s going on in the concatenation process (do both adatas have the same gene names?).


Thank you for your suggestion! When I try with the h5 format, data1.var has 2,000 rows and 19 columns, and data2.var has 2,000 rows and 15 columns. Is that the reason? I don’t know why n_vars dropped from 36,000 to only 2,000 with the mtx format.