Impact of scaling on clustering results and controlling minimum cluster size in Leiden clustering

Description
Hi scanpy team,
I’ve encountered two related issues with clustering that I’d like to understand better:
Issue 1: Dramatic difference in cluster numbers with/without scaling
I’m observing a significant discrepancy in the number of clusters depending on whether I apply scaling:

With sc.pp.scale(): 36 clusters
Without sc.pp.scale(): 11 clusters

Questions:

Why does scaling lead to such a dramatic increase in cluster numbers (more than 3x)?
Which result should be considered more reliable for downstream analysis?
What are the best practices for deciding whether to scale before clustering?
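To check my own understanding of why the results diverge so much, I put together a toy numpy sketch (purely synthetic data; the `knn` helper and all names are mine, not scanpy API). The idea: scaling equalizes per-gene variance, so genes that dominated Euclidean distances before scaling no longer do, the kNN graph changes, and Leiden then partitions a different graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10
# 21 toy "genes": gene 0 has 10x the standard deviation of the others,
# so it dominates raw Euclidean distances
scales = np.array([10.0] + [1.0] * 20)
X = rng.normal(0.0, scales, size=(n, 21))

def knn(X, k):
    # brute-force k-nearest-neighbor indices per cell
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return np.argsort(D, axis=1)[:, :k]

# unit-variance scaling, roughly what sc.pp.scale() does
X_scaled = (X - X.mean(0)) / X.std(0)

# fraction of each cell's neighbors shared between raw and scaled data
overlap = np.mean([
    len(np.intersect1d(a, b)) / k
    for a, b in zip(knn(X, k), knn(X_scaled, k))
])
print(f"mean kNN overlap, raw vs scaled: {overlap:.2f}")
```

On this toy data the overlap is well below 1, i.e. most neighbor edges change after scaling. If that intuition is right, the different cluster counts follow from the graphs being different, but I'd still like guidance on which graph is the more trustworthy one.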

Issue 2: Many small clusters with very few cells
When using sc.tl.leiden(), I'm getting many very small clusters (some with only a few dozen cells, and some with just a handful).
Questions:

Is there a parameter to control the minimum cluster size or to merge very small clusters automatically?
What’s the recommended approach to handle these small clusters?
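As far as I can tell, sc.tl.leiden() itself has no minimum-cluster-size parameter, so for now I merge tiny clusters post hoc along these lines. This is my own helper (name, threshold, and toy demo are mine); in practice `labels` would be `adata.obs["leiden"]` and `X_pca` would be `adata.obsm["X_pca"]`. Is there a better built-in alternative?

```python
import numpy as np
import pandas as pd

def merge_small_clusters(labels, X_pca, min_size=50):
    """Reassign cells in clusters smaller than min_size to the large
    cluster with the nearest centroid in PCA space."""
    labels = pd.Series(labels).astype(str)
    sizes = labels.value_counts()
    big = sizes[sizes >= min_size].index
    small = sizes[sizes < min_size].index
    if len(big) == 0 or len(small) == 0:
        return labels.to_numpy()
    # centroid of each large cluster
    centroids = np.stack([X_pca[labels.to_numpy() == c].mean(0) for c in big])
    for c in small:
        mask = labels.to_numpy() == c
        # distance from the small cluster's centroid to each large centroid
        d = np.linalg.norm(X_pca[mask].mean(0) - centroids, axis=1)
        labels[mask] = big[int(np.argmin(d))]
    return labels.to_numpy()

# toy demo: cluster "2" has only 5 cells and gets absorbed
rng = np.random.default_rng(1)
X = rng.normal(size=(105, 5))
lab = np.array(["0"] * 50 + ["1"] * 50 + ["2"] * 5)
merged = merge_small_clusters(lab, X, min_size=10)
print(pd.Series(merged).value_counts())
```

I'm not sure merging by centroid distance is principled; lowering the `resolution` argument also yields fewer, larger clusters, but that changes the whole partition rather than just cleaning up the tiny ones.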

import scanpy as sc

adata = sc.read_h5ad('a.h5ad')
sc.pp.calculate_qc_metrics(adata, inplace=True)  # store metrics in .obs/.var
sc.pp.filter_cells(adata, min_genes=1)
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)

# Note: when n_top_genes is given, the min_mean/max_mean/min_disp cutoffs are ignored
sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5, n_top_genes=3000)

sc.pp.scale(adata, zero_center=False)  # clustering changes a lot with/without this line

sc.pp.pca(adata)
sc.pp.neighbors(adata, n_pcs=30)
sc.tl.umap(adata)
sc.tl.leiden(adata, neighbors_key="neighbors", key_added="leiden", resolution=1)
sc.pl.spatial(adata, color=['leiden'], frameon=False, ncols=1, spot_size=100)

Can someone help me with these two issues? Thanks a lot!