BMC Systems Biology 2019, 13:35

Ultrafast clustering of single-cell flow cytometry data using FlowGrid

  • Xiaoxin Ye
  • Joshua W. K. Ho

Open Access



Flow cytometry is a popular technology for quantitative single-cell profiling of cell surface markers. It enables expression measurement of tens of cell surface protein markers in millions of single cells. It is a powerful tool for discovering cell sub-populations and quantifying cell population heterogeneity. Traditionally, scientists use manual gating to identify cell types, but the process is subjective and is not effective for large multidimensional data. Many clustering algorithms have been developed to analyse these data but most of them are not scalable to very large data sets with more than ten million cells.


Here, we present a new clustering algorithm that combines the advantages of the density-based clustering algorithm DBSCAN with the scalability of grid-based clustering. This new algorithm is implemented in Python as an open source package, FlowGrid. FlowGrid is memory efficient and scales linearly with respect to the number of cells. We have evaluated the performance of FlowGrid against other state-of-the-art clustering programs and found that FlowGrid produces similar clustering results but in substantially less time. For example, FlowGrid is able to complete a clustering task on a data set of 23.6 million cells in less than 12 seconds, while other algorithms take more than 500 seconds or fail with an error.


FlowGrid is an ultrafast clustering algorithm for large single-cell flow cytometry data. The source code is available at


Keywords: Clustering, Flow cytometry, Single cell, DBSCAN



Abbreviations

ARI: Adjusted Rand index

BFS: Breadth-first search

CyTOF: Mass cytometry

DBSCAN: Density-based spatial clustering of applications with noise

Flow-CAP: Flow cytometry critical assessment of population identification methods

SOM: Self-organising map


Recent technological advancement has made it possible to quantitatively measure the expression of a handful of protein markers in millions of cells in a flow cytometry experiment [1]. The ability to profile such a large number of cells allows us to gain insights into cellular heterogeneity at an unprecedented resolution. Traditionally, cell types are identified based on manual gating of several markers in flow cytometry data. Manual gating relies on visual inspection of a series of two dimensional scatter plots, which makes it difficult to discover structure in high dimensions. It also suffers from subjectivity, both in the order in which pairs of protein markers are explored and in the inherent uncertainty of manually drawing cluster boundaries [2]. An emerging solution is to use unsupervised clustering algorithms to automatically identify clusters in potentially multidimensional flow cytometry data.

The Flow Cytometry Critical Assessment of Population Identification Methods (Flow-CAP) challenge has compared the performance of many flow cytometry clustering algorithms [3]. In the challenge, ADIcyt had the highest accuracy but a long runtime, which makes it impractical for routine usage. Flock [4] maintains high accuracy with a reasonable runtime. After the challenge, several algorithms have been developed for flow cytometry data analysis, such as FlowPeaks [5], FlowSOM [6] and BayesFlow [7].

FlowPeaks and Flock are largely based on k-means clustering. k-means clustering requires the number of clusters (k) to be specified prior to the analysis, and a suitable k is hard to determine in practice. FlowPeaks performs k-means clustering with a large initial k, and iteratively merges nearby clusters that are not separated by low density regions into one cluster. Flock utilises grids to identify high density regions, which the algorithm then uses to identify initial cluster centres for k-means clustering. This grid-based method of identifying high density regions allows k-means clustering to converge much more quickly than random initialisation of cluster centres, and also directly identifies a suitable value for k. FlowSOM starts by training a Self-Organising Map (SOM), followed by consensus hierarchical clustering of the cells for meta-clustering. In this algorithm, the number of clusters (k) is required for the meta-clustering step.

BayesFlow uses a Bayesian hierarchical model to identify different cell populations in one or many samples. The key benefit of this method is its ability to incorporate prior knowledge and to capture the variability in shapes and locations of populations between samples [7]. However, BayesFlow tends to be computationally expensive, as Markov Chain Monte Carlo sampling requires a large number of iterations. Therefore, BayesFlow is often impractical for flow cytometry data sets of realistic size.

These algorithms perform well on the Flow-CAP data sets, but they may not scale to the larger data sets that are now common – those with tens of millions of cells. To quantify cell population heterogeneity in such huge data sets, an ultrafast and scalable clustering algorithm is needed.

In this paper, we present a new clustering algorithm that combines the benefits of DBSCAN [8] (a widely used density-based clustering algorithm) with a grid-based approach to achieve scalability. DBSCAN is fast and can detect clusters with complex shapes in the presence of outliers [8]. DBSCAN starts by identifying core points that have a large number of neighbours within a user-defined region. Once the core points are found, nearby core points and closely located non-core points are grouped together to form clusters. The algorithm thus identifies clusters defined as high-density regions separated by low-density regions. However, DBSCAN is memory inefficient if the data set is very large or has large highly connected components.

To reduce the computational search space and memory requirement, our algorithm extends the idea of DBSCAN by using equally spaced grids, like Flock. We implemented our algorithm in an open source Python package called FlowGrid. Using a range of real data sets, we demonstrate that FlowGrid is much faster than other state-of-the-art flow cytometry clustering algorithms and produces similar clustering results. The details of the algorithm are presented in the Methods section.


The key idea of our algorithm is to shift the calculation of density from individual points to discrete bins defined by a uniform grid. This way, the clustering step of the algorithm scales with the number of non-empty bins, which is significantly smaller than the number of points for lower dimensional data sets. The overall time complexity of our algorithm is therefore dominated by the binning step, which is O(N) in the number of cells. This is significantly better than the O(N log N) time complexity of DBSCAN. The definitions and algorithm are presented in the following subsections.


The key terms involved in the algorithm are defined in this subsection. A graphical example can be found in Fig. 1.
  • Nbin is the number of equally sized bins in each dimension. In theory, there are (Nbin)^d bins in the data space, where d is the number of dimensions. However, in practice, we only consider the non-empty bins. The number of non-empty bins (N) is less than (Nbin)^d, especially for high dimensional data. Each non-empty bin is assigned an integer index i=1,…,N.
    Fig. 1

    An illustrative example of the FlowGrid clustering algorithm. In this example, Bin 1, Bin 2, Bin 3 and Bin 6 are core bins as their Denb are larger than MinDenb (5 in this example), their Denc are larger than MinDenc (20 in this example), and their Denb are larger than those of ρ% (75% in this example) of their directly connected bins. \(Dist(C_{1}, C_{2}) =\sqrt {1^{2}+1^{2}} =\sqrt {2} \leq \sqrt {\epsilon }\) (ε=2 in this example), so Bin 1 and Bin 2 are directly connected. \(Dist(C_{2}, C_{4}) =\sqrt {1^{2}+1^{2}} = \sqrt {2} \leq \sqrt {\epsilon }\), so Bin 2 and Bin 4 are directly connected. Therefore, Bin 1, Bin 2 and Bin 4 are mutually connected, and they are assigned to the same cluster. Bin 5 is not a core bin but is a border bin, as it is directly connected to Bin 6, which is a core bin. Bin 3 is an outlier bin, as it is neither a core bin nor a border bin. In practice, MinDenb is set to 3, MinDenc is set to 40 and ρ is set to 85

  • Bini is labelled by a tuple of d positive integers Ci=(Ci1,Ci2,Ci3,…,Cid), where Ci1 is the coordinate (the bin index) in dimension 1. For example, if Bini has coordinate Ci=(2,3,5), this bin is the second bin in dimension 1, the third bin in dimension 2 and the fifth bin in dimension 3.

  • The distance between Bini and Binj is defined as
    $$ Dist(C_{i}, C_{j})=\sqrt{\sum_{k=1}^{d}\left(C_{ik}-C_{jk}\right)^{2}} $$
  • Bini and Binj are defined to be directly connected if \(Dist(C_{i},C_{j}) \leqslant \sqrt {\epsilon }\), where ε is a user-specified parameter.

  • Denb(Ci) is the density of Bini, which is defined as the number of points in Bini.

  • Denc(Ci) is the collective density of Bini, calculated by
    $$ {Den}_{c}(C_{i})=\sum_{\{j \mid {Bin}_{j} \text{ and } {Bin}_{i} \text{ are directly connected}\}}{Den}_{b}(C_{j}) $$
  • Bini is a core bin if
    1. Denb(Ci) is larger than MinDenb, a user-specified parameter;
    2. Denb(Ci) is larger than the densities of ρ% of its directly connected bins, where ρ is a user-specified parameter;
    3. Denc(Ci) is larger than MinDenc, a user-specified parameter.

  • Bini is a border bin if it is not a core bin but it is directly connected to a core bin.

  • Bini is an outlier bin, if it is not a core bin nor a border bin.

  • Bina and Binb are in the same cluster if they satisfy one of the following conditions:
    1. they are directly connected and at least one of them is a core bin;
    2. they are not directly connected but are connected by a sequence of directly connected core bins.

  • Two points are in the same cluster, if they belong to the same bin or their corresponding bins belong to the same cluster.
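To make these definitions concrete, the binning and core-bin tests can be sketched in Python. This is an illustrative sketch with our own function names, not the FlowGrid package API; we also fold each bin's own density into Den_c, since Dist(C_i, C_i) = 0 ≤ √ε makes a bin directly connected to itself:

```python
import numpy as np

def bin_points(data, n_bin):
    """Scale each dimension to [1, n_bin + 1) and use the integer part
    of the scaled value as the bin coordinate."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = 1 + n_bin * (data - lo) / (hi - lo + 1e-12)
    coords = np.floor(scaled).astype(int)
    # Non-empty bins and their densities Den_b.
    bins, den_b = np.unique(coords, axis=0, return_counts=True)
    return bins, den_b

def directly_connected(ci, cj, eps):
    # Dist(C_i, C_j) <= sqrt(eps)  <=>  squared distance <= eps
    return np.sum((ci - cj) ** 2) <= eps

def core_bins(bins, den_b, eps, min_den_b, min_den_c, rho):
    """Return indices of bins satisfying the three core-bin conditions."""
    core = []
    for i, (ci, di) in enumerate(zip(bins, den_b)):
        neigh = [j for j in range(len(bins))
                 if j != i and directly_connected(ci, bins[j], eps)]
        den_c = di + sum(den_b[j] for j in neigh)   # collective density
        denser = sum(di > den_b[j] for j in neigh)  # neighbours it out-densifies
        if (di > min_den_b and den_c > min_den_c
                and (not neigh or 100 * denser / len(neigh) >= rho)):
            core.append(i)
    return core
```

The quadratic neighbour search here is for exposition only; the point of the grid is that only non-empty bins need to be examined, which keeps the search space far smaller than the number of cells.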


Algorithm 1 describes the key steps of FlowGrid. It starts by normalising the values in each dimension to the range between 1 and (Nbin+1), and uses the integer part of each normalised value as the coordinate of the corresponding bin. The SearchCore algorithm is then applied to discover the core bins and their directly connected bins. Once the core bins and connections are found, Breadth First Search (BFS) is used to group connected bins into clusters. Each cell is labelled with the label of its corresponding bin.
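The BFS grouping step might look like the following sketch (our own minimal illustration, assuming the bin graph has already been built: `edges` maps each bin index to its directly connected bins, and `core` lists the core-bin indices):

```python
from collections import deque

def group_bins(n_bins, edges, core):
    """Label bins by BFS over directly connected bins. Only core bins
    propagate the search, so border bins receive a cluster label but do
    not extend it; bins reached by no core bin keep -1 (outlier bins)."""
    labels = [-1] * n_bins
    cluster = 0
    for start in core:
        if labels[start] != -1:
            continue                      # already absorbed by a cluster
        labels[start] = cluster
        queue = deque([start])
        while queue:
            b = queue.popleft()
            for nb in edges.get(b, ()):
                if labels[nb] == -1:
                    labels[nb] = cluster
                    if nb in core:        # chains of core bins keep growing
                        queue.append(nb)
        cluster += 1
    return labels
```

Cells then inherit the label of their bin, so labelling all N cells is a single linear pass.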



FlowGrid aims to be an ultrafast and accurate clustering algorithm for very large flow cytometry data, so both accuracy and scalability need to be evaluated. The benchmark data sets from Flow-CAP [3], the multi-centre CyTOF data from Li et al. [9] and the SeaFlow project [10] are selected to compare the performance of FlowGrid against other state-of-the-art algorithms: FlowSOM, FlowPeaks, and Flock. These three algorithms are chosen because they are widely used, generally considered to be quite fast, and have good accuracy.

Three benchmark data sets from Flow-CAP [3] are selected for evaluation: the Diffuse Large B-cell Lymphoma (DLBL), Hematopoietic Stem Cell Transplant (HSCT), and Graft versus Host Disease (GvHD) data sets. Each data set contains 10-30 samples with 3-4 markers, and each sample includes 2,000-35,000 cells.

The multi-centre CyTOF data set from Li et al. [9] provides a labelled data set with 16 samples. Each sample contains 40,000-70,000 cells and 26 markers. Since only 8 of the 26 markers were determined to be relevant in the original paper [9], only these 8 markers are used for clustering.

We also use three data sets from the SeaFlow project [10], each containing many samples. Instead of analysing the samples independently, we analyse the concatenated data sets as in the original paper [10]; these concatenated data sets contain 12.7, 22.7 and 23.6 million cells respectively. Each data set includes 15 features, but the original study uses only four for clustering analysis: forward scatter (small and perpendicular), phycoerythrin, and chlorophyll (small) [10].

In the evaluation, we treat the manual gating labels as the gold standard for measuring clustering quality. In the pre-processing step, we apply the inverse hyperbolic sine function with a cofactor of 5 to transform the multi-centre data and the SeaFlow data. As the Flow-CAP and multi-centre CyTOF data contain many samples, we treat each sample as a data set and run all algorithms on each sample. Performance is measured by ARI and runtime, reported as arithmetic mean (\(\bar {x}\)) and standard deviation (sd). For the SeaFlow data, we treat each concatenated data set as one data set, and all algorithms are applied to these concatenated data sets.
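As we understand it, the pre-processing transform described above is the standard cytometry arcsinh transform; a minimal sketch:

```python
import numpy as np

def arcsinh_transform(x, cofactor=5.0):
    """asinh(x / cofactor): roughly linear near zero, logarithmic for
    large values, and defined for negative values (e.g. after
    compensation), which a plain log transform cannot handle."""
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)
```

The cofactor controls where the transform bends from linear to logarithmic behaviour; 5 is the value used here.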

To evaluate the scalability of each algorithm, we down-sample the largest concatenated data set from the SeaFlow project, generating 10 sub-sampled data sets whose numbers of cells range from 20 thousand to 20 million.

Performance measure

Computational efficiency is measured by runtime, while clustering performance is measured by the Adjusted Rand Index (ARI). ARI is the corrected-for-chance version of the Rand index [11]. Although it can take negative values when the index is less than its expected value, it tends to be more robust than many other measures such as the F-measure and the Rand index.

ARI is calculated as follows. Given a set S of n elements and two groups of cluster labels (one group of ground-truth labels and one group of predicted labels) for these elements, namely X={X1,X2,…,Xr} and Y={Y1,Y2,…,Ys}, the overlap between X and Y can be summarised by nij, the number of objects in common between Xi and Yj: nij=|Xi∩Yj|.
$${\begin{aligned} ARI = \frac{\sum_{ij} \binom{n_{ij}}{2} - \left[ \sum_{i} \binom{a_{i}}{2} \sum_{j} \binom{b_{j}}{2} \right] / \binom{n}{2}}{\frac{1}{2}\left[\sum_{i} \binom{a_{i}}{2} + \sum_{j} \binom{b_{j}}{2} \right] - \left[\sum_{i} \binom{a_{i}}{2} \sum_{j} \binom{b_{j}}{2} \right] / \binom{n}{2}} \end{aligned}} $$
where \(a_{i}=\sum _{j} n_{ij}\) and \(b_{j}=\sum _{i} n_{ij}\).
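As a concrete check of this formula, ARI can be computed directly from the contingency counts (a short illustration in plain Python, not tied to any particular library):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(truth, pred):
    """Compute ARI from the contingency table n_ij = |X_i ∩ Y_j|,
    with row sums a_i and column sums b_j, as in the formula above."""
    n = len(truth)
    n_ij = Counter(zip(truth, pred))   # cell counts of the contingency table
    a = Counter(truth)                 # a_i = sum_j n_ij
    b = Counter(pred)                  # b_j = sum_i n_ij
    sum_ij = sum(comb(v, 2) for v in n_ij.values())
    sum_a = sum(comb(v, 2) for v in a.values())
    sum_b = sum(comb(v, 2) for v in b.values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score 1 regardless of how the labels are named, and a random labelling scores around 0, which is what makes ARI suitable for comparing clusterings against manual gates.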


FlowGrid is publicly available as an open source program on GitHub. FlowSOM and FlowPeaks are available as R packages from Bioconductor. The source code of Flock was downloaded from its Sourceforge repository. To reproduce all the comparisons presented in this paper, the source code and data can be downloaded from the GitHub repository FlowGrid_compare. We ran all experiments on a six-core 2.60 GHz CPU with 32 GB RAM.

FlowPeaks and Flock provide automated versions without any user-input parameters. FlowSOM requires one user-supplied parameter (k, the number of clusters in the meta-clustering step). FlowGrid requires two user-supplied parameters (Nbin and ε). To optimise the results, we try many values of k for FlowSOM and many combinations of Nbin and ε for our algorithm.


Performance comparison

Table 1 summarises the runtime of our algorithm and the three other algorithms – FlowSOM, FlowPeaks, and Flock. Our algorithm is substantially faster than the other clustering algorithms on all the data sets, and the improvement is especially pronounced on the SeaFlow data sets, where Flock and FlowPeaks sometimes fail to complete. On a data set of 23.6 million cells, FlowSOM completes the execution in 572 s, whereas FlowGrid completes it in only 12 s, roughly a 50× speed-up. Table 2 summarises clustering accuracy. On the Flow-CAP and multi-centre data sets, FlowGrid achieves similar clustering accuracy (in terms of ARI) to the other algorithms, while on the SeaFlow data sets FlowGrid gives higher accuracy.
Table 1

Comparison of runtime (in seconds, \(\bar {x}\pm sd\)) of FlowGrid against other clustering algorithms

Data set (cells) | FlowGrid | FlowSOM | FlowPeaks | Flock
29-77 ×10³ | 0.23 ± 0.09 | 4.01 ± 1.08 | 2.27 ± 0.61 | 10.3 ± 3.45
12-33 ×10³ | 0.07 ± 0.04 | 2.16 ± 0.54 | 0.28 ± 0.16 | 0.58 ± 0.28
2-25 ×10³ | 0.04 ± 0.01 | 1.25 ± 0.32 | 0.10 ± 0.09 | 0.22 ± 0.16
6-9 ×10³ | 0.04 ± 0.02 | 1.35 ± 0.28 | 0.11 ± 0.02 | 0.28 ± 0.06
23.6 ×10⁶ | 12 | 572 | NA | NA
12.7 ×10⁶ | | | |
22.7 ×10⁶ | | | |

NA indicates that the algorithm failed with an error on that data set

Table 2

Comparison of accuracy (in ARI, \(\bar {x}\pm sd\)) of FlowGrid against other clustering algorithms

Data set | FlowGrid | FlowSOM | FlowPeaks | Flock
 | 0.66 ± 0.20 | 0.75 ± 0.17 | 0.68 ± 0.20 | 0.66 ± 0.16
 | 0.79 ± 0.15 | 0.85 ± 0.11 | 0.72 ± 0.16 | 0.47 ± 0.20
 | 0.85 ± 0.10 | 0.84 ± 0.10 | 0.82 ± 0.15 | 0.84 ± 0.09
 | 0.90 ± 0.08 | 0.87 ± 0.14 | 0.83 ± 0.24 | 0.57 ± 0.27

NA indicates that the algorithm failed with an error on that data set

Figure 2 shows the clustering results of our algorithm and the three other algorithms on a HSCT sample. FlowGrid, FlowSOM and FlowPeaks recover a similar number of clusters, and the clustering results are largely similar. Flock generates too many clusters in this case. It is important to note that FlowGrid also identifies cells that do not belong to any main cluster (i.e., any high-density region). These cells can be viewed as 'outliers' and are labelled '-1' in Fig. 2. This feature is not present in the other clustering algorithms.
Fig. 2

Visual comparison of the clustering performance of FlowGrid, FlowPeaks, FlowSOM, and Flock using manual gating (top row) as the gold standard

Scalability analysis

To further evaluate scalability, we sub-sample one SeaFlow data set into data sets ranging from 20 thousand to 20 million cells. Figure 3 shows the scalability of our algorithm and the three other algorithms. Flock has a low runtime on small data sets, but its runtime increases dramatically to 6640 s for the 20 million-cell data set. FlowPeaks and FlowSOM share similar scalability, but FlowPeaks is not able to process the 20 million-cell data set. Our algorithm has the best performance in this evaluation: FlowGrid is faster than the other algorithms on all the sampled data sets by an order of magnitude.
Fig. 3

Comparison of the runtime of FlowGrid, FlowPeaks, FlowSOM, and Flock using data sets with different number of cells

Parameter robustness analysis

As with other density-based clustering algorithms, parameter setting is important. In our experience, Nbin and ε are data-set-dependent. We recommend trying different combinations of Nbin between 4 and 15 and ε between 1 and 5. To pick the best parameter combination, some prior knowledge is helpful, such as the expected number of clusters and the proportion of outliers, which in our experience should be less than 10%.
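This tuning procedure can be organised as a plain grid search. The sketch below is our own illustration, not part of FlowGrid: `cluster_fn` stands for any clustering routine that returns one label per cell with -1 marking outliers, and `expected_k` encodes the prior knowledge mentioned above:

```python
from itertools import product

def choose_parameters(cluster_fn, data, expected_k=None, max_outlier_frac=0.10):
    """Scan Nbin in 4..15 and eps in 1..5; discard settings whose outlier
    fraction exceeds ~10%, then prefer the cluster count closest to the
    expected one (or, lacking that, the fewest outliers)."""
    best, best_score = None, float("inf")
    for n_bin, eps in product(range(4, 16), range(1, 6)):
        labels = cluster_fn(data, n_bin, eps)
        outlier_frac = sum(l == -1 for l in labels) / len(labels)
        if outlier_frac > max_outlier_frac:
            continue                       # too many outliers: reject
        k = len(set(labels) - {-1})        # clusters, excluding outliers
        score = abs(k - expected_k) if expected_k is not None else outlier_frac
        if score < best_score:
            best, best_score = (n_bin, eps), score
    return best
```

Because FlowGrid is so fast, scanning all 60 combinations is usually cheaper than a single run of the slower alternatives.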

We found that the other parameters, namely MinDenb, MinDenc and ρ, are mostly robust across a wide range of values.

To demonstrate this robustness, we used the benchmark data sets from Flow-CAP for a parameter sensitivity analysis. For these experiments, we first set 3, 40, 85, 4 and 1 as the default values for MinDenb, MinDenc, ρ, Nbin and ε, respectively. In each experiment, we change only one parameter to test its effect on the overall clustering result, measured by ARI and runtime. In the first experiment, we varied MinDenb from 1 to 50 while fixing the other parameters. In the second, we varied MinDenc from 10 to 300, and in the third, ρ from 70 to 95, again fixing the other parameters.

Figure 4 demonstrates that clustering accuracy and runtime are largely insensitive to MinDenb, MinDenc and ρ across a large range of parameter values. The experiments were repeated on all the benchmark data sets from Flow-CAP, with similar results observed in each. When MinDenb, MinDenc and ρ are set to 3, 40 and 85 respectively, FlowGrid maintains good clustering performance and excellent runtime; these are therefore the default parameters for FlowGrid.
Fig. 4

Sensitivity analysis of three different parameters on clustering accuracy (as measured by adjusted rand index; ARI) and runtime (seconds)


In this paper, we have developed an ultrafast clustering algorithm, FlowGrid, for single-cell flow cytometry analysis, and compared it against other state-of-the-art algorithms such as Flock, FlowSOM and FlowPeaks. FlowGrid borrows ideas from DBSCAN for the detection of high density regions and outliers. It not only performs well in the presence of outliers, but also has excellent scalability without running into memory issues; it is both time and memory efficient. FlowGrid matches the clustering accuracy of state-of-the-art flow cytometry clustering algorithms while being substantially faster. For any given number of markers, the runtime of FlowGrid scales linearly with the number of cells, which is a useful property for extremely large data sets.

MinDenb and MinDenc are density thresholds that reduce the search space of high density bins. If these parameters are set very low, the runtime may increase slightly, but accuracy is unlikely to be affected. If they are set very high, the runtime decreases slightly, but real clusters may be split and spurious outliers created. In any case, we showed that the performance of FlowGrid is generally robust to changes in MinDenb, MinDenc and ρ.

The current implementation of FlowGrid is already very fast for most practical purposes. In the future, if the data size grows even larger, it is possible to further speed up FlowGrid by parallelising the binning step of the algorithm, which is currently the most computationally intensive step of the algorithm.
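As a rough sketch of what such parallelisation might look like (our own illustration, not part of the current implementation): split the cells into chunks, bin each chunk independently, and merge the per-chunk bin counts, since binning is embarrassingly parallel. We use a thread pool here because NumPy releases the GIL for array arithmetic; a process pool would follow the same pattern.

```python
import numpy as np
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def bin_chunk(chunk, lo, hi, n_bin):
    """Bin one chunk of cells and count points per non-empty bin."""
    coords = np.floor(1 + n_bin * (chunk - lo) / (hi - lo)).astype(int)
    return Counter(map(tuple, coords))

def parallel_bin(data, n_bin=10, workers=4):
    """Per-chunk counts merge by simple addition, so the merged result
    matches single-threaded binning exactly."""
    lo = data.min(axis=0)
    hi = data.max(axis=0) + 1e-12          # keep the max value inside the top bin
    chunks = np.array_split(data, workers)
    counts = Counter()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        futures = [ex.submit(bin_chunk, c, lo, hi, n_bin) for c in chunks]
        for f in futures:
            counts.update(f.result())      # Counter addition merges densities
    return counts
```

The global minimum and maximum must be computed (or known) before the chunks are dispatched, so that every worker uses the same grid.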



We thank members of the Ho Laboratory for their valuable comments.


This work was supported in part by funds from the New South Wales Ministry of Health, a National Health and Medical Research Council Career Development Fellowship (1105271 to JWKH), and a National Heart Foundation Future Leader Fellowship (100848 to JWKH). Publication charge is supported by the Victor Chang Cardiac Research Institute.

Availability of data and materials

∙ Project Name: FlowGrid

∙ Project Home Page:

∙ Operating Systems: Unix, Mac, Windows

∙ Programming Languages: Python

∙ Other Requirements: sklearn, numpy

∙ License: MIT Public License

∙ Any Restrictions to Use By Non-Academics: None

About this supplement

This article has been published as part of BMC Systems Biology Volume 13 Supplement 2, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): systems biology. The full contents of the supplement are available online at

Authors’ contributions

XY and JWKH initiated and designed the project. XY implemented the algorithm, carried out all the experiments, and wrote the paper. JWKH revised the paper. Both authors approved the final version of the paper.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


  1. Weber LM, Robinson MD. Comparison of clustering methods for high-dimensional single-cell flow and mass cytometry data. Cytom Part A. 2016; 89(12):1084–96.
  2. Saeys Y, Van Gassen S, Lambrecht BN. Computational flow cytometry: helping to make sense of high-dimensional immunology data. Nat Rev Immunol. 2016; 16(7):449.
  3. Aghaeepour N, Finak G, Hoos H, Mosmann TR, Brinkman R, Gottardo R, Scheuermann RH, FlowCAP Consortium, DREAM Consortium, et al. Critical assessment of automated flow cytometry data analysis techniques. Nat Methods. 2013; 10(3):228.
  4. Qian Y, Wei C, Eun-Hyung Lee F, Campbell J, Halliley J, Lee JA, Cai J, Kong YM, Sadat E, Thomson E, et al. Elucidation of seventeen human peripheral blood B-cell subsets and quantification of the tetanus response using a density-based method for the automated identification of cell populations in multidimensional flow cytometry data. Cytom Part B Clin Cytom. 2010; 78(S1):69–82.
  5. Ge Y, Sealfon SC. flowPeaks: a fast unsupervised clustering for flow cytometry data via k-means and density peak finding. Bioinformatics. 2012; 28(15):2052–8.
  6. Van Gassen S, Callebaut B, Van Helden MJ, Lambrecht BN, Demeester P, Dhaene T, Saeys Y. FlowSOM: Using self-organizing maps for visualization and interpretation of cytometry data. Cytom Part A. 2015; 87(7):636–45.
  7. Johnsson K, Wallin J, Fontes M. BayesFlow: latent modeling of flow cytometry cell populations. BMC Bioinformatics. 2016; 17(1):25.
  8. Ester M, Kriegel H-P, Sander J, Xu X, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD'96). Portland: AAAI Press; 1996. p. 226–31.
  9. Li H, Shaham U, Stanton KP, Yao Y, Montgomery RR, Kluger Y. Gating mass cytometry data by deep learning. Bioinformatics. 2017; 33(21):3423–30.
  10. Hyrkas J, Clayton S, Ribalet F, Halperin D, Virginia Armbrust E, Howe B. Scalable clustering algorithms for continuous environmental flow cytometry. Bioinformatics. 2015; 32(3):417–23.
  11. Hubert L, Arabie P. Comparing partitions. J Classif. 1985; 2(1):193–218.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

Authors and Affiliations

  1. Victor Chang Cardiac Research Institute, Sydney, Australia
  2. University of New South Wales, Sydney, Australia
  3. School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong
