Using my own datasets with SpatialData

I have followed along with the tutorials at spatialdata tutorial to load a Visium dataset and a Xenium dataset and to align the images by landmarks. Now I want to try an alignment with my own Visium slides. Is there a resource or tutorial I can use to create my own visium.zarr files from my own datasets?

So far I tried scanpy.read_visium("Visium output") to create an AnnData object and then the write_zarr() function to create the Zarr file, but spatialdata.read_zarr("my zarr file") gives me

>>> visium_sdata
SpatialData object with:
with coordinate systems:
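
For reference, the workflow I tried was roughly the following (a sketch; the paths are placeholders):

import scanpy as sc
import spatialdata as sd

adata = sc.read_visium("Visium output")        # Space Ranger output folder
adata.write_zarr("my_visium.zarr")             # write the AnnData object to Zarr
visium_sdata = sd.read_zarr("my_visium.zarr")  # loads, but the SpatialData object comes back empty (as above)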

If there are any resources for creating my own visium.zarr dataset to be used with SpatialData, I would greatly appreciate it. I am hoping someone could point me in the right direction.

Thanks,
Ryan

Hi @rmiller, thanks for reaching out.

Please use the visium() function from spatialdata-io; the one from scanpy exists to support legacy workflows based on AnnData.
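
A minimal sketch, assuming the path points to a Space Ranger output folder (paths are placeholders):

from spatialdata_io import visium
import spatialdata as sd

sdata = visium("Visium output")         # parse the Space Ranger outputs into a SpatialData object
sdata.write("my_visium.zarr")           # save as a SpatialData Zarr store
sdata = sd.read_zarr("my_visium.zarr")  # reads back a populated SpatialData object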

If you need further customization, I suggest copying and modifying the source code of the visium() function to fit your needs.

Please let me know if this covers your use case.

Thank you, Luca.

That worked for me.

Regards,
Ryan

Happy to hear it worked!

Best,
Luca

Another quick question related to this workflow: how do I use napari-spatialdata to open more than one Zarr file in the viewer, so that I can visualize images from different samples side by side? Or are they meant to be loaded one at a time when adding the landmarks?

Regards,
Ryan

You can do it by having multiple images in the same SpatialData object, or by passing a list of SpatialData objects to Interactive(). Some example workflows can be found in the docs, or here: spatialdata-sandbox/notebooks/czi_demo/xenium_visium.ipynb at main · giovp/spatialdata-sandbox · GitHub.

I don’t see the bit where you use Interactive to actually load them.

Just this bit:

visium_sdata = sd.read_zarr("spatialdata-sandbox/visium_associated_xenium_io/data.zarr")
visium_sdata

and then the description of the SpatialData object, followed by a screenshot of napari open with the images loaded, but I don't see anything in between that makes that happen.

I verified this, and it seems that GitHub has some problems rendering the full notebook. You can see the full notebook in Colab by changing the first part of the URL, in this case using this: Google Colab.

In any case, I see that I have deleted the code cells that open napari (I just kept the screenshots). To visualize the data with napari you can use Interactive([visium_sdata, xenium_sdata]).
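
Putting it together, a minimal sketch (paths are placeholders):

import spatialdata as sd
from napari_spatialdata import Interactive

visium_sdata = sd.read_zarr("my_visium.zarr")
xenium_sdata = sd.read_zarr("my_xenium.zarr")
Interactive([visium_sdata, xenium_sdata])  # opens napari with both datasets loaded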

From the command line you can do

python -m napari_spatialdata view my_data1.zarr my_data2.zarr

This should be enough for you, but if needed, please also check the docs for further arguments of the Interactive() class.

Yes, thank you. That opened both images. Can they be moved individually, or are they linked on the canvas?

Great to hear this! Within napari they can’t be moved, but you can assign a transformation to them individually using the SpatialData APIs. This notebook shows how to do it: Transformations and coordinate systems — spatialdata
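
For example, a minimal sketch (the element name, offset, and coordinate system are placeholders):

from spatialdata.transformations import Translation, set_transformation

# shift one image so the two samples sit side by side on the canvas
translation = Translation([5000.0, 0.0], axes=("x", "y"))
set_transformation(visium_sdata.images["my_image"], translation, to_coordinate_system="global")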