Hi all,
I have two AnnData objects, each with n_vars of around 36,000. However, after I use concat(), the combined object has only about 2,000 n_vars. Is there anything wrong here?
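For context, a minimal sketch of the concatenation step looks like this (the file paths are placeholders, and I am assuming anndata.concat is called with its default arguments):

```python
import anndata as ad

# Each object has ~36,000 vars on its own
adata1 = ad.read_h5ad("sample1.h5ad")  # placeholder path
adata2 = ad.read_h5ad("sample2.h5ad")  # placeholder path

# With the default join="inner", only var names present in
# both objects are kept in the concatenated result
adata = ad.concat([adata1, adata2], label="batch")

print(adata1.n_vars, adata2.n_vars, adata.n_vars)
```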
Also, when I run model.train(), I get the following output and error:
```
INFO:pytorch_lightning.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 1/400:   0%|          | 0/400 [00:00<?, ?it/s]

ValueError                                Traceback (most recent call last)
<ipython-input-139-c72315b99576> in <module>
----> 1 model.train()

46 frames
/usr/local/lib/python3.7/dist-packages/torch/distributions/distribution.py in __init__(self, batch_shape, event_shape, validate_args)
     54             if not valid.all():
     55                 raise ValueError(
---> 56                     f"Expected parameter {param} "
     57                     f"({type(value).__name__} of shape {tuple(value.shape)}) "
     58                     f"of distribution {repr(self)} "

ValueError: Expected parameter loc (Tensor of shape (128, 10)) of distribution Normal(loc: torch.Size([128, 10]), scale: torch.Size([128, 10])) to satisfy the constraint Real(), but found invalid values:
tensor([[nan, nan, nan, ..., nan, nan, nan],
        [nan, nan, nan, ..., nan, nan, nan],
        [nan, nan, nan, ..., nan, nan, nan],
        ...,
        [nan, nan, nan, ..., nan, nan, nan],
        [nan, nan, nan, ..., nan, nan, nan],
        [nan, nan, nan, ..., nan, nan, nan]], device='cuda:0',
       grad_fn=<AddmmBackward0>)
```
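For completeness, the model is set up roughly like this before the call to train() (I am assuming scvi-tools' SCVI model with default settings; the batch key name is a placeholder from the concat step above):

```python
import scvi

# Register the concatenated AnnData with scvi-tools
# ("batch" is the label column created by concat above)
scvi.model.SCVI.setup_anndata(adata, batch_key="batch")

# Default SCVI model; its latent dimension defaults to 10,
# which matches the (128, 10) shapes in the error above
model = scvi.model.SCVI(adata)
model.train()
```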
Would you have any advice in this case? I appreciate your help, and have a great week!