Thank you @srivarra for reaching out and for the reproducible example. The behavior you encountered is expected; below I elaborate on the reasons, how to work around it, and the limitations.

TL;DR: if you need positional indexing you can use `fov.isel(y=1)` or `fov.isel(y=slice(1, 2))`.

## Saving and reading

In `SpatialData` we use the NGFF specification to save and load raster tensors to Zarr. The in-memory representation of this Zarr storage is similar to what `xarray.DataArray` offers, but with some differences. One of these differences is that NGFF doesn't save the z, y, x coordinates explicitly (it saves only c), and it has a more laborious (but more powerful) way to deal with non-regular grids, which also covers more complex warps (see the specification here). One note: the specification I linked is still in draft mode, and we currently don't support non-linear transformations, but a master's student, Tobias, has started working on this.

Considering the above, when we save an `xarray.DataArray` object to disk we discard the z, y, x coordinates and reconstruct them upon loading the object. By design, we reconstruct uniformly spaced coordinates.

## Centered vs non-centered coordinates

Before explaining why we use the centered coordinates you observed, here are two limitations of this approach:

- If you have coordinates that are non-linearly spaced, like 0, 1, 10, these will be lost; there is currently no way to save and load them.
- If you have two tensors with shape 1x3x3 (single-channel images) with uniformly spaced coordinates, one with 0-based coordinates 0, 1, 2 and one with centered coordinates 0.5, 1.5, 2.5, when you save and load them they will be reconstructed in only one way. As said, we currently use centered coordinates, so in both cases you will get 0.5, 1.5, 2.5.
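To illustrate the second limitation, here is a minimal numpy sketch (the helper name is hypothetical, not a SpatialData function) of how uniformly spaced centered coordinates are rebuilt from a tensor's shape alone:

```python
import numpy as np

def reconstruct_centered_coords(size: int) -> np.ndarray:
    # Hypothetical helper: rebuild uniformly spaced, centered coordinates
    # from the axis length alone, since NGFF does not store y/x coordinates.
    return np.arange(size) + 0.5

# Whether the original array had 0-based coordinates (0, 1, 2) or centered
# ones (0.5, 1.5, 2.5), after a save/load round trip both become:
print(reconstruct_centered_coords(3))  # [0.5 1.5 2.5]
```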

The reason we opted for centered coordinates relates to multiscale images, which are objects represented with `xarray.DataTree` that store multiple `xarray.DataArray` images with decreasing resolutions and aligned coordinates.

Say you have an image with resolution 1000x1000 spanning the coordinates 0 to 1000. With centered coordinates, the xarray coordinates of the image are 0.5, 1.5, …, 999.5. If you parse the image with `ImageModel.parse(image, scale_factors=[2, 2, 2, 2])` (which leads to 5 total scales, the smallest having a 16x downscale), the smallest scale will have these coordinates:

```
└── DataTree('scale4')
        Dimensions:  (c: 3, y: 62, x: 62)
        Coordinates:
          * c        (c) int64 0 1 2
          * y        (y) float64 8.056 24.17 40.28 56.4 ... 942.6 958.7 974.8 990.9
          * x        (x) float64 8.056 24.17 40.28 56.4 ... 942.6 958.7 974.8 990.9
        Data variables:
            image    (c, y, x) float64 dask.array<chunksize=(3, 62, 62), meta=np.ndarray>
```
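As a back-of-the-envelope check (the exact values depend on how the downscaled shapes are computed, so they differ slightly from the repr above), the centered coordinates of a 62-pixel axis spanning 0 to 1000 can be sketched as:

```python
import numpy as np

# 62 pixels spanning the physical range [0, 1000]: each pixel is
# ~16.13 units wide, and its coordinate sits at the pixel center.
size, extent = 62, 1000.0
step = extent / size
centers = (np.arange(size) + 0.5) * step
print(round(float(centers[0]), 2))  # ~8.06, close to the 8.056 in the repr
```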

Here, if you do slicing operations based on physical coordinates, non-centered coordinates would lead to some asymmetry between what is selected and what is not.

Also, for operations like queries (subsetting) or aggregations (e.g. summing pixels by raster masks), I think it's natural to interpret the weight of a pixel as concentrated at its center. This comes automatically if we use centered coordinates.
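A small numpy sketch of why pixel-center weights feel natural: the centroid of a mask covering an entire 3x3 grid (spanning 0 to 3) lands exactly at the geometric center only with centered coordinates:

```python
import numpy as np

mask = np.ones((3, 3))                    # mask covering the whole 3x3 grid
centered = np.arange(3) + 0.5             # centered coordinates: 0.5, 1.5, 2.5
zero_based = np.arange(3, dtype=float)    # non-centered: 0, 1, 2

def centroid_y(coords: np.ndarray) -> float:
    # Weighted mean of the y coordinate over the mask, treating each
    # pixel's weight as concentrated at its coordinate.
    weights = mask.sum(axis=1)
    return float((weights * coords).sum() / weights.sum())

print(centroid_y(centered))    # 1.5 -> exact center of the [0, 3] extent
print(centroid_y(zero_based))  # 1.0 -> shifted by half a pixel
```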

## Positional slicing

Please note that the implementation above doesn't prevent users from performing positional indexing and slicing operations, for instance `fov.isel(y=1)`, which is equivalent to `fov.sel(y=1.5)`.

Similarly, one can use `fov.sel(y=slice(2, 3))`, which is equivalent to the positional `fov.isel(y=slice(2, 3))`.
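The equivalences above can be checked on a toy array; the name `fov` and the 3x3 shape are illustrative, not taken from the original example:

```python
import numpy as np
import xarray as xr

# A toy 3x3 "fov" with centered y/x coordinates, mimicking what
# SpatialData reconstructs on load.
fov = xr.DataArray(
    np.arange(9, dtype=float).reshape(3, 3),
    dims=("y", "x"),
    coords={"y": np.arange(3) + 0.5, "x": np.arange(3) + 0.5},
)

# Positional and label-based indexing select the same row:
assert fov.isel(y=1).equals(fov.sel(y=1.5))

# Slicing: sel() uses coordinate values (end-inclusive), isel() integer
# positions; with centered coordinates these pick the same row here.
assert fov.sel(y=slice(1, 2)).equals(fov.isel(y=slice(1, 2)))
```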

Anyway, when dealing with physical coordinates, `sel` (label-based, non-positional) indexing is probably more common.

## Asking for feedback

To conclude, I want to anticipate that we will soon refactor the way coordinate transformations interact with xarray coordinates, improving the ergonomics of the library. For instance, if one applies a scale operation to the coordinates (i.e. only the coordinates change, not the shape of the tensor), we will have a function that saves this as a tensor with an NGFF Scale transformation; upon loading, the scale transformation will be converted back to xarray coordinates.

In doing so it will be crucial to have uniformly spaced coordinates, but it doesn't change much whether we use centered or non-centered coordinates. So we kindly ask for your feedback on this. In the end, the arguments above were more like reasons for us to prefer centered coordinates, but we could also go for non-centered ones if many users find that more natural.
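To make the planned behavior concrete, here is a heavily hedged numpy sketch (the function name is hypothetical; this is not the future API) of how a stored NGFF Scale transformation could be turned back into xarray-style centered coordinates on load:

```python
import numpy as np

def coords_from_scale(size: int, scale: float) -> np.ndarray:
    # Hypothetical: a stored NGFF Scale transformation with factor `scale`
    # becomes uniformly spaced, centered coordinates on load.
    return (np.arange(size) + 0.5) * scale

# A tensor axis of length 4 stored with Scale(2.0) would come back with:
print(coords_from_scale(4, 2.0))  # [1. 3. 5. 7.]
```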

The feedback we have received so far is that centered coordinates are fine to use, but that we need to improve the documentation, making it clear that we use them and showing examples (`isel`, `sel`) of how to deal with them.