What is the consequence, in the final results, of using code (1) instead of code (2)?
Is there a tutorial available where `flavor='seurat_v3'` is used?
Also, has anyone already checked that using code (2) in Scanpy gives exactly the same results, on the same input data, as the following code in Seurat v3+?
There may be some inconsistency when finding HVGs with the `batch_key` argument, where the way genes are ranked differs slightly. The core computation of the normalized variances is approximately equal, though.
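For concreteness, a minimal sketch of the two call patterns being compared; the `'counts'` layer and `'batch'` column names are assumptions, not from this thread:

```python
import scanpy as sc

# Single-ranking selection: one ordering of genes over all cells.
sc.pp.highly_variable_genes(
    adata, flavor="seurat_v3", n_top_genes=2000, layer="counts"
)

# Batch-aware selection: scores are computed per batch and genes are then
# re-ranked across batches, which is where the orderings can diverge.
sc.pp.highly_variable_genes(
    adata, flavor="seurat_v3", n_top_genes=2000, layer="counts",
    batch_key="batch",
)
```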
Regarding the equivalence between “Seurat v3” and “Scanpy with `flavor='seurat_v3'`”, I ran a test on a given count matrix and measured 98.65% of the 2000 selected genes in common, i.e. 1973 genes were detected as HVGs by both methods and 27 genes by only one of them.
Using the `layer` argument as you recommended, the agreement rate is now much better (98.65% instead of 52.14%), but it is still not 100%. Note that I did not use the `batch_key` argument.
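For reference, one way the agreement rate above could be computed, assuming Scanpy's selection is already stored in `adata.var['highly_variable']` and the Seurat HVGs were exported to a plain-text file (`seurat_hvgs.txt` is a hypothetical name):

```python
# Compare the two top-2000 HVG sets by gene name.
scanpy_hvgs = set(adata.var_names[adata.var["highly_variable"]])
with open("seurat_hvgs.txt") as f:
    seurat_hvgs = {line.strip() for line in f}

common = scanpy_hvgs & seurat_hvgs
print(f"agreement: {100 * len(common) / 2000:.2f}%")             # e.g. 98.65%
print(f"genes selected by only one tool: {2000 - len(common)}")  # e.g. 27
```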
Is there an approximation somewhere in the Scanpy code, compared to the Seurat v3 code, that could explain why we don't always reach a 100% agreement rate?
Did you also get non-exact matches when running your equivalence tests?
If yes, what maximum mismatch rate did you allow?
If not, could you please detail all the parameter values we should use in Scanpy and in Seurat v3 to get exactly the same results?
There is no approximation, but there can be heuristic differences. For example, with this approach many genes can end up with the same normalized variance score, so which genes make the cut comes down to how they are sorted; that is what I expect is happening in this case.
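A toy illustration of that tie effect (not the actual Scanpy or Seurat sorting code): when several genes share the same score at the top-N boundary, the selected set depends purely on how the tie is broken.

```python
import numpy as np

scores = np.array([5.0, 3.0, 3.0, 3.0, 1.0])  # three-way tie at 3.0
genes = np.array(["g1", "g2", "g3", "g4", "g5"])
n_top = 2

# A stable sort keeps input order among ties -> picks g1, g2.
top_a = genes[np.argsort(-scores, kind="stable")][:n_top]

# The same scores presented in a different order -> picks g1, g4.
perm = np.array([0, 3, 2, 1, 4])  # reorder only the tied genes
top_b = genes[perm][np.argsort(-scores[perm], kind="stable")][:n_top]

print(top_a, top_b)  # ['g1' 'g2'] ['g1' 'g4'] -- identical scores, different sets
```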
After running, you should see `adata.var['variances_norm']`; this is the score that is used for ranking genes. We would very much appreciate it if you compared this to Seurat's.
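A sketch of that comparison, assuming Seurat's per-gene table (e.g. from `HVFInfo()`) was exported to a CSV indexed by gene name; the file name and column name are assumptions:

```python
import pandas as pd

scanpy_scores = adata.var["variances_norm"]

# Hypothetical export of Seurat's HVFInfo() table; 'variance.standardized'
# is the column Seurat v3 ranks genes by under the vst method.
seurat_scores = pd.read_csv("seurat_hvf_info.csv", index_col=0)[
    "variance.standardized"
]

# Near-zero differences would confirm the two scores agree, leaving
# tie-breaking at the cutoff as the source of the remaining mismatches.
diff = (scanpy_scores - seurat_scores.reindex(scanpy_scores.index)).abs()
print(diff.describe())
```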