SpatialData: wrong shapes translation

I have a Xenium image, which is the concatenation of 4 different samples. To extract the samples, I use bounding_box_query().
For instance, I get sample number 3 like this:

crop_sample_3 = lambda x: bounding_box_query(
    x,
    min_coordinate=[15000, 70000],
    max_coordinate=[17000, 70000 + 17000],
    axes=("x", "y"),
    target_coordinate_system="global",
)

sdata_sample3 = crop_sample_3(sdata)

Then, I need to reset the coordinates (I want the point in the upper-left corner of the image to have coordinates (0, 0) instead of (15000, 70000)).
To do so, I create the following translation:

translation = Translation([0, 0], axes=("x", "y"))

Then I apply the transformation to my spatialdata object:

set_transformation(sdata_sample3.images["morphology_mip"], translation, to_coordinate_system="global")
set_transformation(sdata_sample3.labels["cell_labels"], translation, to_coordinate_system="global")
set_transformation(sdata_sample3.shapes["cell_boundaries"], translation, to_coordinate_system="global")

The transformation works fine for images and labels, but it does something really weird for shapes: the shapes are translated to different coordinates and seem to be 'downscaled' (they appear smaller when I plot them).

In the image, I applied the same translation to the image, the labels, and one shape. You can see that the shape got translated to the bottom-right corner. I tried many scaling/translation coordinates; the result is always weird.
img4

I can't put more than one image in my post, so:
Here is the cropped spatialdata:

Here is what small translation applied only to image and one label looks like:
img3
So I expect that, when translating shapes, they move the same way the image and labels moved.

Hi, thanks for reaching out. Can you please post the code that you use for plotting?

One note: the translation syntax needs a fix. Here (0, 0) will be used as the translation vector, so no translation will be applied. To move the origin to (0, 0) you will need to translate by [-15000, -70000].
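To see why [0, 0] is a no-op, here is the same arithmetic in plain NumPy (a sketch of the underlying affine math, not the SpatialData API):

```python
import numpy as np

def translation_matrix(tx: float, ty: float) -> np.ndarray:
    """3x3 homogeneous affine matrix for a 2D translation by (tx, ty)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# upper-left corner of the crop, in homogeneous coordinates
corner = np.array([15000.0, 70000.0, 1.0])

# a translation by [0, 0] is the identity: the corner does not move
assert np.allclose(translation_matrix(0, 0) @ corner, corner)

# translating by [-15000, -70000] maps the corner to the origin
moved = translation_matrix(-15000, -70000) @ corner
print(moved[:2])  # [0. 0.]
```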

Hello, thanks for your answer !

Concerning the translation syntax: I did start by translating by [-15000, -70000], but then the output coordinates had their origin (in the upper left of the image) at the point (-15000, -70000), which is why I changed it to [0, 0].

Here is all the code I used for plotting the images:

# Imports
import spatialdata as sd
import spatialdata_plot  # registers the .pl plotting accessor
from spatialdata import bounding_box_query
from spatialdata.transformations import Translation, set_transformation

# Load spatial data
sdata = sd.read_zarr("/data/tmp/aconsten/data/imm_05.zarr")

# Crop spatial data. I selected a tile of size (2000, 2000)
crop_sample_3 = lambda x: bounding_box_query(
    x,
    min_coordinate=[15000, 70000],
    max_coordinate=[15000 + 2000, 70000 + 2000],
    axes=("x", "y"),
    target_coordinate_system="global",
)

sdata_sample3 = crop_sample_3(sdata)

# plot image with shapes and labels
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(title="Image with shapes and labels")

# Define small transformation of 500 pixels
translation = Translation([15000 + 500, 70000 + 500], axes=("x", "y"))

# Plot translated image with shapes and labels
set_transformation(sdata_sample3.images["morphology_mip"], translation, to_coordinate_system="global")
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(title="Image translated with shapes and labels")

# Plot translated image and labels, with shapes
set_transformation(sdata_sample3.labels["cell_labels"], translation, to_coordinate_system="global")
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(title="Image & labels translated with shapes")

# Plot translated image, labels and shapes. This is the weird plot as shown above
set_transformation(sdata_sample3.shapes["cell_boundaries"], translation, to_coordinate_system="global")
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(title="Image & labels & shapes translated")

Here is what happens if I set the translation to [-15000, -70000] and apply it to the image: the image gets located at position (-15000, -70000), while the labels stay at position (15000, 70000).

translation = Translation([-15000, -70000], axes=("x", "y"))
set_transformation(sdata_sample3.images["morphology_mip"], translation, to_coordinate_system="global")
sdata_sample3.pl.render_images().pl.render_labels().pl.show(title="Image with [-15000, -70000] translation")

translation

Hi, apologies for the late answer, I had time to look at this again today.

transformations after cropping

Also, my previous answer was ambiguous (I didn't read your code carefully enough, apologies). Here is the explanation for what you got in your plots.

After the cropping these are the transformations present in the data:


Precisely:

  • the image and the labels objects had an identity transformation before cropping; now they have a translation that accounts for the new position of pixel (0, 0).
  • the shapes had a scale before cropping; after cropping they keep the same scale, and no translation is needed (the cropping operation didn't change their origin).
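In affine-matrix terms (a NumPy sketch of my reading of the above, not the library API), the translation attached to the cropped image maps its new pixel (0, 0) back to the crop offset in 'global':

```python
import numpy as np

def translate(tx: float, ty: float) -> np.ndarray:
    """3x3 homogeneous matrix for a 2D translation."""
    m = np.eye(3)
    m[:2, 2] = [tx, ty]
    return m

# after cropping, the image carries a translation by the crop offset,
# so its new pixel (0, 0) still lands at (15000, 70000) in 'global'
crop_offset = translate(15000, 70000)
origin = np.array([0.0, 0.0, 1.0])
print((crop_offset @ origin)[:2])  # [15000. 70000.]
```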

translating by 500

Now, to translate everything by (500, 500) units you have two options: replace the existing transformation, or append a new transformation to the already existing one.

replacing the transformation

Now, set_transformation() follows the first option, so the new transformation for images should be equal to (15000 + 500, 70000 + 500), as you are doing. This works for images in this case, but won't for shapes. Note also that in general this approach is not guaranteed to work even for images: for instance, with Visium data the images have a scale, so replacing the transformation would displace the existing image alignment.
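Here is why replacing fails when a scale is already present, with made-up numbers (plain NumPy, illustrative values only):

```python
import numpy as np

def translate(tx: float, ty: float) -> np.ndarray:
    """3x3 homogeneous matrix for a 2D translation."""
    m = np.eye(3)
    m[:2, 2] = [tx, ty]
    return m

p = np.array([100.0, 200.0, 1.0])  # a shape vertex in its intrinsic coordinates
S = np.diag([2.0, 2.0, 1.0])       # pre-existing scale on the shapes element

# replacing the scale with a translation silently drops the scaling ...
replaced = (translate(500, 500) @ p)[:2]
# ... while appending the translation after the scale preserves it
appended = (translate(500, 500) @ S @ p)[:2]

print(replaced)  # [600. 700.]
print(appended)  # [700. 900.]
```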

appending the transformation

The approach I recommend is to append a new transformation. Here is a function to do this:

from spatialdata import SpatialData
from spatialdata.transformations import (
    BaseTransformation,
    Sequence,
    get_transformation,
    set_transformation,
)


def _postpone_transformation(
    sdata: SpatialData,
    from_coordinate_system: str,
    to_coordinate_system: str,
    transformation: BaseTransformation,
):
    for element in sdata._gen_spatial_element_values():
        d = get_transformation(element, get_all=True)
        assert isinstance(d, dict)
        t = d[from_coordinate_system]
        sequence = Sequence([t, transformation])
        set_transformation(element, sequence, to_coordinate_system)

We could introduce a new public API to perform this operation.
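Note that Sequence([t, transformation]) applies t first and the appended transformation second; in matrix form the order matters, as this NumPy sketch with illustrative values shows:

```python
import numpy as np

def translate(tx: float, ty: float) -> np.ndarray:
    """3x3 homogeneous matrix for a 2D translation."""
    m = np.eye(3)
    m[:2, 2] = [tx, ty]
    return m

S = np.diag([2.0, 2.0, 1.0])  # existing transformation (a scale)
T = translate(500, 500)       # transformation being appended
p = np.array([100.0, 100.0, 1.0])

# Sequence([S, T]): existing scale first, then the translation
print((T @ S @ p)[:2])  # [700. 700.]
# swapping the order gives a different result
print((S @ T @ p)[:2])  # [1200. 1200.]
```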

code

Here is the adjusted code (please note that you will have to replace the dataset path and change x0, y0, as I was using a different dataset).

##
# https://discourse.scverse.org/t/spatialdata-wrong-shapes-translation/2276/6
import spatialdata as sd
import spatialdata_plot
import matplotlib.pyplot as plt

# Load spatial data
f = "/Users/macbook/embl/projects/basel/spatialdata-sandbox/xenium_rep1_io/data.zarr"
sdata = sd.read_zarr(f)

##
sd.get_extent(sdata)

##
# Crop spatial data. I selected a tile of size (2000, 2000)
x0 = 15000
y0 = 20000
x1 = x0 + 2000
y1 = y0 + 2000
crop_sample_3 = lambda x: sd.bounding_box_query(
    x,
    min_coordinate=[x0, y0],
    max_coordinate=[x1, y1],
    axes=("x", "y"),
    target_coordinate_system="global",
)

sdata_sample3 = crop_sample_3(sdata)

##
# plot image with shapes and labels
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(title="Image with shapes and labels")
plt.show()

##
from spatialdata.transformations import (
    Translation,
    set_transformation,
    BaseTransformation,
    Sequence,
    get_transformation,
)


def _postpone_transformation(
    sdata: sd.SpatialData, from_coordinate_system: str, to_coordinate_system: str, transformation: BaseTransformation
):
    for element in sdata._gen_spatial_element_values():
        d = get_transformation(element, get_all=True)
        assert isinstance(d, dict)
        print(d)
        t = d[from_coordinate_system]
        sequence = Sequence([t, transformation])
        set_transformation(element, sequence, to_coordinate_system)


##

# Define small transformation of 500 pixels
translation = Translation([500, 500], axes=("x", "y"))

_postpone_transformation(sdata_sample3, "global", "global", translation)

# Plot translated image with shapes and labels
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(
    title="Image translated with shapes and labels"
)
plt.show()

# Plot translated image and labels, with shapes
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(
    title="Image & labels translated with shapes"
)
plt.show()

# Plot translated image, labels and shapes. This is the weird plot as shown above
sdata_sample3.pl.render_images().pl.render_shapes().pl.render_labels().pl.show(
    title="Image & labels & shapes translated"
)
plt.show()

final note

A final note: in the above I talked about units (of the coordinate system 'global') and not pixels (even though in this case pixels and units coincide for the image and labels, because no scale is present). In general our APIs require specifying a target coordinate system, and coordinates are expressed in the units of that coordinate system.
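A toy example of the units-vs-pixels distinction, with a made-up scale factor (here the image has no scale, so units and pixels coincide):

```python
# illustrative only: suppose an element carried a scale of 2 'global' units per pixel
units_per_pixel = 2.0
shift_in_units = 500.0  # translations are specified in coordinate-system units
shift_in_pixels = shift_in_units / units_per_pixel
print(shift_in_pixels)  # 250.0 -> a 500-unit translation moves this element by 250 pixels
```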

There was also a consideration to be made about the plotting, which explains why you were getting two subplots. I have explained this here: Custom title when multiple coordinate system present and no coordinate system specified · Issue #269 · scverse/spatialdata-plot · GitHub. Please see "Solution for this case".

Hi, thank you very much for taking the time to answer! This really helps me.

Alexis
