Minimum distance histograms with universal performance guarantees

We present a data-adaptive multivariate histogram estimator of an unknown density f based on n independent samples from it. Such histograms are based on binary trees called regular pavings (RPs). RPs represent a computationally convenient class of simple functions that remains closed under addition and scalar multiplication. Unlike other density estimation methods, including various regularization and Bayesian methods based on the likelihood, the minimum distance estimate (MDE) is guaranteed to be within an L_1 distance bound from f for a given n, no matter what the underlying f happens to be, and is thus said to have universal performance guarantees (Devroye and Lugosi, Combinatorial Methods in Density Estimation. Springer, New York, 2001). Using a form of tree matrix arithmetic with RPs, we obtain the first generic constructions of an MDE, prove that it has universal performance guarantees, and demonstrate its performance with simulated and real-world data. Our main contribution is a constructive implementation of an MDE histogram that can handle large multivariate data bursts using a tree-based partition that is computationally conducive to subsequent statistical operations.


Introduction
Suppose our random variable X has an unknown density f on ℝ^d; then for all Borel sets A ⊆ ℝ^d, P(X ∈ A) = ∫_A f(x) dx. Any density estimate f_n(x) := f_n(x; X_1, X_2, …, X_n) : ℝ^d × ℝ^{dn} → ℝ is a map from ℝ^{d(n+1)} to ℝ. The objective in density estimation is to estimate the unknown f from an independent and identically distributed (IID) sample X_1, X_2, …, X_n drawn from f. Density estimation is often the first step in many learning tasks, including anomaly detection, classification, regression and clustering.
The quality of f_n is naturally measured by how well it performs the assigned task of computing the probabilities of sets under the total variation criterion:

sup_{A ∈ B_d} | ∫_A f_n − ∫_A f | = (1/2) ∫ | f_n − f |,

where B_d are the Borel sets in ℝ^d. The last equality above is due to Scheffé's identity and it equates half the L_1 distance between f_n and f, on the absolute scale of [0, 1], to the total variation distance between them.
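Scheffé's identity is easy to verify numerically in the discrete analogue, where the supremum over events can be computed exhaustively. The following Python sketch (illustrative only; the names are ours, not from mrs2) checks that the supremal deviation over all subsets equals half the L_1 distance:

```python
from itertools import chain, combinations

def tv_by_subsets(p, q):
    """Total variation distance: sup over all events A of |P(A) - Q(A)|."""
    idx = range(len(p))
    subsets = chain.from_iterable(combinations(idx, r) for r in range(len(p) + 1))
    return max(abs(sum(p[i] for i in A) - sum(q[i] for i in A)) for A in subsets)

def half_l1(p, q):
    """Scheffe's identity: the same distance equals half the L1 distance."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.4, 0.4]
assert abs(tv_by_subsets(p, q) - half_l1(p, q)) < 1e-12
```

The exhaustive supremum is only feasible for tiny supports, which is exactly why the Yatracos class construction below restricts the supremum to a small, data-driven family of sets.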
A non-parametric density estimator is said to have universal performance guarantees when the underlying f is allowed to be any density in L_1 (Devroye and Lugosi 2001, p. 1). Histograms and kernel density estimators can approximate f in this universal sense in an asymptotic setting, i.e., as the number of data points n approaches infinity (the so-called asymptotic consistency of the estimator f_n). But for a fixed n, however large but finite, classical studies of the rate of convergence of f_n to f require additional assumptions on the smoothness class (to solve this so-called smoothing problem), such as f ∈ L_2 ≠ L_1 or f ∈ C^k, the set of k-times differentiable functions, as opposed to letting f simply belong to the set where densities exist, i.e., f ∈ L_1; such assumptions thereby violate the universality property.
Universal performance guarantees are provided by the minimum distance estimate (MDE) due to Devroye and Lugosi (2001, 2004). Their fundamentally combinatorial approach combined ideas from Yatracos (1985, 1988) on minimum distance methods and from Vapnik and Chervonenkis (1971) on uniform convergence of empirical probability measures over classes of sets. See Devroye and Lugosi (2001) for a self-contained introduction to combinatorial methods in density estimation. Unlike likelihood-based methods, the MDE gives universal performance guarantees, i.e., the MDE does not assume that f is in L_2 in order to address the smoothing problem for the given sample of size n; instead, it directly minimizes the L_1 distance over the so-called Yatracos class, a certain class of subsets of the support set induced by the partitions of each ordered pair of histograms in the set of histograms from which one has to choose the optimally smoothed histogram (Devroye and Lugosi 2001).
The Yatracos class is not trivial to represent for the purposes of concretely obtaining the MDE in a nonparametric multivariate setting involving large sample sizes. The particular class of MDEs studied in Devroye and Lugosi (2001, 2004) was limited to kernel estimates and histograms under simpler partitioning rules. Inspired by this, here we develop an MDE over statistical regular pavings using tree-based partitioning strategies to produce a much more general nonparametric MDE that has (1) data-adaptive partitions, (2) support in d dimensions, and (3) partitioning structures imbued with arithmetic for downstream statistical operations. Briefly, our approach exploits a recursive arithmetic using nodes imbued with recursively computable statistics and a specialized collator structure to compute the supremal deviation of the held-out empirical measure over the Yatracos class of the candidate densities.
Unlike other tree-based partitions, our regular paving structure restricts partitioning to bisecting a box only along its first widest coordinate. This makes the countable set of such trees closed under addition and scalar multiplication and thereby allows for computationally efficient computer arithmetic over a dense set of simple functions. See Harlow et al. (2012) for statistical applications of this arithmetic, including conditional density regression and multivariate tail probability computations for anomaly detection. Although a more efficient algorithm (up to pre-processing the L_1 distances for each pair of densities) is characterized in Mahalanabis and Stefankovic (2008), we are not aware of any publicly available implementations of the MDE using data-adaptive multivariate histograms for the bursts of data common in many industrial applications today, especially for downstream statistical operations with the density estimate, including anomaly detection (with n ≈ 10^7 in dimensions up to 6, for instance, in a non-distributed computational setting on one commodity machine).
To the best of our knowledge, the accompanying code of this paper in mrs2 (Sainudiin et al. 2008–2019) is the only publicly available implementation of such an MDE. Our main contribution in this work is a rigorous implementation of the minimum distance estimate proposed by Devroye and Lugosi (2001) for the nonparametric multivariate setting that can handle large bursts of data. The estimator has been successfully used in industry-scale problems where one needs to construct a multivariate density estimate in a "batch" setting and use this estimate for producing anomaly scores.
In the next two sections, we give the definitions, algorithms, theorems and proofs needed for our minimum distance estimator. Three core algorithms are given in the Appendix for completeness. We finally conclude after evaluating the performance of the estimator on simulated and real-world datasets.

Regular pavings and histograms
Let x := [x̲, x̄] be a compact real interval with lower bound x̲ and upper bound x̄, where x̲ ≤ x̄. Let the space of such intervals be 𝕀ℝ. The width of an interval x is wid(x) := x̄ − x̲. A box x := [x̲_1, x̄_1] × ⋯ × [x̲_d, x̄_d] is an interval vector with ι as the first coordinate of maximum width, i.e., ι := min(argmax_j wid(x_j)). The set of all such boxes is 𝕀ℝ^d, i.e., the set of all interval real vectors in dimension d. A bisection or split of x perpendicularly at the mid-point along this first widest coordinate ι gives the left and right child boxes of x:

x_L := [x̲_1, x̄_1] × ⋯ × [x̲_ι, mid(x_ι)) × ⋯ × [x̲_d, x̄_d],
x_R := [x̲_1, x̄_1] × ⋯ × [mid(x_ι), x̄_ι] × ⋯ × [x̲_d, x̄_d].

Such a bisection is said to be regular. Note that this bisection gives the left child box a half-open interval [x̲_ι, mid(x_ι)) on coordinate ι so that the intersection of the left and right child boxes is empty. A recursive sequence of selective regular bisections of boxes, with possibly open boundaries, along the first widest coordinate, starting from the root box x_ρ in 𝕀ℝ^d, is known as a regular paving (Kieffer et al. 2001) or n-tree (Samet 1990) of x_ρ. A regular paving of x_ρ can also be seen as a binary tree formed by recursively bisecting the box x_ρ at the root node. Each node in the binary tree has either no children or two children. These trees are known as plane binary trees in enumerative combinatorics (Stanley 1999, Ex. 6.19(d), p. 220) and as finite, rooted binary trees (frb-trees) in geometric group theory (Meier 2008, Chap. 10). The relationship of trees, labels and partitions is illustrated in Fig. 1 via a sequence of bisections of a square (2-dimensional) root box by always bisecting on the first widest coordinate.
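The regular bisection rule can be sketched in a few lines of Python (an illustration of the rule above with our own toy box representation, not the mrs2 implementation):

```python
def bisect_box(box):
    """Regular bisection: split a box, given as a list of (lo, hi) intervals,
    at the mid-point of its first widest coordinate, as in an RP."""
    widths = [hi - lo for lo, hi in box]
    i = widths.index(max(widths))       # first coordinate of maximum width
    lo, hi = box[i]
    mid = (lo + hi) / 2.0
    left, right = list(box), list(box)
    left[i] = (lo, mid)                 # left child gets the half-open [lo, mid)
    right[i] = (mid, hi)                # right child gets [mid, hi]
    return left, right

left, right = bisect_box([(0.0, 2.0), (0.0, 1.0)])
# the first coordinate is the widest, so it is the one bisected
assert left == [(0.0, 1.0), (0.0, 1.0)]
assert right == [(1.0, 2.0), (0.0, 1.0)]
```

`list.index(max(...))` returns the first index attaining the maximum, which matches the "first widest coordinate" convention.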
Let ℕ := {1, 2, …} be the set of natural numbers. Let the jth interval of a box x be [x̲_j, x̄_j] and the volume of a d-dimensional box x be vol(x) = ∏_{j=1}^d (x̄_j − x̲_j). Let the set of all nodes, leaf nodes and internal nodes (or splits) of a regular paving s be V(s), L(s) and V̆(s) := V(s) ∖ L(s), respectively. The set of leaf boxes of a regular paving s with root box x_ρ is denoted by x_L(s) and it specifies a partition of the root box x_ρ. Let S_k be the set of all regular pavings with root box x_ρ made of k splits. Note that the number of leaf nodes m = |L(s)| = k + 1 if s ∈ S_k. The number of distinct binary trees with k splits is equal to the Catalan number C_k = (1/(k+1)) (2k choose k). For i, j ∈ ℤ_+, where ℤ_+ := {0, 1, 2, …} and i ≤ j, let S_{i:j} := ∪_{k=i}^j S_k be the set of regular pavings with k splits where k ∈ {i, i + 1, …, j}. Let the set of all regular pavings be S_{0:∞} := lim_{j→∞} S_{0:j}.
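The Catalan-number count of RP trees can be checked by direct recursion over the sizes of the left and right subtrees; a small Python sketch (illustrative names of our own):

```python
from math import comb

def catalan(k):
    """C_k = binom(2k, k) / (k + 1): the number of distinct RP trees with k splits."""
    return comb(2 * k, k) // (k + 1)

def count_rp_trees(k):
    """Count plane binary trees with k splits by direct recursion: a tree with
    k splits has a left subtree with j splits and a right one with k - 1 - j."""
    if k == 0:
        return 1
    return sum(count_rp_trees(j) * count_rp_trees(k - 1 - j) for j in range(k))

assert [catalan(k) for k in range(6)] == [1, 1, 2, 5, 14, 42]
assert all(count_rp_trees(k) == catalan(k) for k in range(8))
```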
A statistical regular paving (SRP), denoted by s, is an extension of the RP structure that is able to act as a partitioned 'container' and responsive summarizer for multivariate data. An SRP can be used to create a histogram of a data set. A recursively computable statistic (Fisher 1925; Gray and Moore 2003) that an SRP node caches is #x_v, the count of the number of data points that fell into x_v. A leaf node with #x_v > 0 is a non-empty leaf node; the set of non-empty leaves of an SRP s is the subset of L(s) whose boxes contain at least one data point. Figure 2 depicts a small SRP s with root box x_ρ ∈ 𝕀ℝ^2. The number of sample data points in the root box x_ρ is 10. Figure 2a shows the tree, including the count associated with each node in the tree and the partition of the root box represented by the leaf boxes of this tree, with the sample data points superimposed on the boxes. Figure 2b shows how the density estimate is computed from the count and the volume of leaf boxes to obtain the density estimate f_{n,s} as an SRP histogram.

Fig. 1 A sequence of selective bisections of boxes (nodes) along the first widest coordinate, starting from the root box (root node) in two dimensions, produces an RP
An SRP histogram is obtained from the n data points that fell into the root box x_ρ of SRP s as follows: for x in the leaf box x_ℓ, ℓ ∈ L(s),

f_n(x) = #x_ℓ / (n vol(x_ℓ)).

It is the maximum likelihood estimator over the class of simple (piecewise-constant) functions given the partition x_L(s) of the root box of s. We suppress subscripting the histogram by the SRP s for notational convenience. SRP histograms have some similarities to dyadic histograms [e.g., Klemelä (2009, chap. 18), Lu et al. (2013)]. Both are binary-tree based and partition so that a box may only be bisected at the mid-point of one of its coordinates, but the RP structure restricts partitioning further by only bisecting a box on its first widest coordinate in order to make S_{0:∞} closed under addition and scalar multiplication, thereby allowing for computationally efficient computer arithmetic over a dense set of simple functions [see Harlow et al. (2012) for statistical applications of this arithmetic]. Crucially, when data bursts have large sample sizes, this restrictive partitioning does not affect the L_1 errors when compared to a computationally more expensive Bayes estimator (see Sect. 4).

Fig. 2 An SRP and its corresponding histogram
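The SRP histogram formula is straightforward to evaluate once the leaf partition and counts are in hand; a minimal Python sketch (our own toy representation of leaves as (box, count) pairs, not the mrs2 data structure):

```python
def volume(box):
    """Volume of a box given as a list of (lo, hi) intervals."""
    v = 1.0
    for lo, hi in box:
        v *= hi - lo
    return v

def srp_histogram(leaves, n):
    """Given the leaf partition of an SRP as (box, count) pairs and the total
    sample size n, return the histogram height on each leaf:
    f_n(x) = count / (n * vol(leaf box)) for x in that leaf."""
    return [(box, count / (n * volume(box))) for box, count in leaves]

# a toy partition of the unit square into two leaves holding 10 points in total
leaves = [([(0.0, 0.5), (0.0, 1.0)], 7), ([(0.5, 1.0), (0.0, 1.0)], 3)]
hist = srp_histogram(leaves, 10)
assert [round(h, 12) for _, h in hist] == [1.4, 0.6]
# the histogram integrates to one over the root box
assert abs(sum(h * volume(b) for b, h in hist) - 1.0) < 1e-12
```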
A statistically equivalent block (SEB) partition of a sample space is some partitioning scheme that results in equal numbers of data points in each element (block) of the partition (Tukey 1947). The output of SEBTreeMC(s, #, m) (Algorithm 1) is a sequence of SRP states that terminates at a state s(T) with every leaf count at most # or with |L(s(T))| = m, where T is a corresponding random stopping time. As the initial state S(t = 0) is the root s_ρ ∈ S_0, the Markov chain {S(t)}_{t∈ℤ_+} on S_{0:m−1} satisfies S(t) ∈ S_t for each t ∈ ℤ_+, i.e., the state at time t has t + 1 leaves or t splits. The operation may only be considered successful if |L(s)| ≤ m and #x_ℓ ≤ # for every leaf ℓ ∈ L(s). Therefore, the sequence of SRP histogram states visited by SEBTreeMC that successfully terminates at stopping time T will have the terminal histogram with at most # many of the n data points in each of its leaf nodes and with at most m many leaf nodes.
Intuitively, SEBTreeMC(s, #, m) prioritizes the splitting of leaf nodes with the largest numbers of data points associated with them. As we will see in Theorem 1, the L_1 consistency of SEBTreeMC requires that m grow sublinearly (i.e., m/n → 0 as n → ∞) while the volume of leaf boxes shrinks, such that a combinatorial complexity measure of the partitions in the support of the chain grows sub-exponentially. Figure 3 shows two different SRP histograms constructed using two different values of # for the same dataset of n = 10^5 points simulated under the standard bivariate Gaussian density. A small # produces a histogram that is under-smoothed with unnecessary spikes (Fig. 3, left), while the other histogram with a larger # is over-smoothed (Fig. 3, right). We will obtain the minimum distance estimate from the SRP histograms visited by the chain in Theorem 3.
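The prioritized splitting just described can be sketched as a greedy loop with a max-heap keyed on leaf counts; a simplified Python illustration (our own toy version of the chain, ignoring randomized tie-breaking among equally full leaves):

```python
import heapq

def seb_partition(points, root, max_count, max_leaves):
    """Sketch of the SEB-style chain: repeatedly bisect the leaf holding the
    most points, along its first widest coordinate, until every leaf holds at
    most max_count points or the partition has max_leaves leaves."""
    def split(box, pts):
        widths = [hi - lo for lo, hi in box]
        i = widths.index(max(widths))               # first widest coordinate
        mid = (box[i][0] + box[i][1]) / 2.0
        left, right = list(box), list(box)
        left[i] = (box[i][0], mid)
        right[i] = (mid, box[i][1])
        lp = [p for p in pts if p[i] < mid]         # [lo, mid) goes left
        rp = [p for p in pts if p[i] >= mid]        # [mid, hi] goes right
        return (left, lp), (right, rp)

    heap = [(-len(points), 0, root, points)]        # fullest leaf first
    tiebreak = 1
    while -heap[0][0] > max_count and len(heap) < max_leaves:
        _, _, box, pts = heapq.heappop(heap)
        for child_box, child_pts in split(box, pts):
            heapq.heappush(heap, (-len(child_pts), tiebreak, child_box, child_pts))
            tiebreak += 1
    return [(box, len(pts)) for _, _, box, pts in heap]

xs = (0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9)
leaves = seb_partition([(x,) for x in xs], [(0.0, 1.0)], 4, 4)
assert len(leaves) == 2 and all(c <= 4 for _, c in leaves)
```

Note this sketch may loop if many identical points exceed max_count in one leaf; the bound max_leaves in the loop condition is what terminates it in that case, mirroring the role of m above.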

Minimum distance estimation using statistical regular pavings
We show that the SRP density estimate from the SEBTreeMC-based partitioning scheme is asymptotically L_1-consistent as n → ∞, provided that #, the maximum sample size in any leaf box of the partition, and m, the maximum number of leaf boxes in the partition, grow with the sample size n at appropriate rates. This is done by proving the three conditions in Theorem 1 of Lugosi and Nobel (1996). We will need to show that as the number of sample points increases linearly: 1. the number of leaf boxes grows sub-linearly; 2. the partition grows sub-exponentially in terms of a combinatorial complexity measure; 3. and the volume of the leaf boxes in the partition shrinks.
Let {S_n(i)}_{i=0}^{İ} on S_{0:∞} be the Markov chain of algorithm SEBTreeMC. The Markov chain terminates at some state ṡ with partition L(ṡ). Associated with the Markov chain is a fixed collection of partitions L_n of the terminal states reachable from n points, and the size of the largest partition L(ṡ) in L_n is m(L_n) := max{|L(ṡ)| : L(ṡ) ∈ L_n}. Given n fixed points {x_1, …, x_n} ∈ ℝ^{dn}, let Δ(L_n, {x_1, …, x_n}) be the number of distinct partitions of the finite set {x_1, …, x_n} that are induced by the partitions L(ṡ) ∈ L_n. For any fixed set of n points, the growth function of L_n is then Δ*_n(L_n) := max_{{x_1,…,x_n} ∈ ℝ^{dn}} Δ(L_n, {x_1, …, x_n}).

Theorem 1 (L_1-Consistency) Let X_1, X_2, … be independent and identically distributed random vectors in ℝ^d whose common distribution has a non-atomic density f, i.e., its distribution μ is absolutely continuous with respect to the Lebesgue measure λ (μ ≪ λ). Let {S_n(i)}_{i=0}^{İ} on S_{0:∞} be the Markov chain formed using SEBTreeMC (Algorithm 1) with terminal state ṡ and histogram estimate f_{n,ṡ} over the collection of partitions L_n. As n → ∞, if # → ∞, #/n → 0, m ≥ n/#, and m/n → 0, then the density estimate f_{n,ṡ} is asymptotically consistent in L_1, i.e., ∫ |f_{n,ṡ} − f| → 0 almost surely.

Proof We will assume that # → ∞, #/n → 0, m ≥ n/#, and m/n → 0, as n → ∞, and show that the three conditions

(a) (1/n) m(L_n) → 0, (b) (1/n) log Δ*_n(L_n) → 0, (c) μ({x : diam(A_n(x)) > γ}) → 0 almost surely, for every γ > 0,

are satisfied, where A_n(x) denotes the leaf box of the terminal partition containing x. Then, by Theorem 1 of Lugosi and Nobel (1996), our density estimate f_{n,ṡ} is asymptotically consistent in L_1.
Condition (a) is satisfied by the assumption that m/n → 0, since m(L_n) ≤ m. The largest number of distinct partitions of any n-point subset of ℝ^d that is induced by the partitions in L_n is upper bounded by the size of the collection of partitions L_n ⊆ S_{0:m−1}, i.e., Δ*_n(L_n) ≤ |L_n| ≤ ∑_{k=0}^{m−1} C_k, where k is the number of splits.
The growth function is thus bounded by the total number of partitions with 0 to m − 1 splits, i.e., the (m − 1)th partial sum of the Catalan numbers. This partial sum is asymptotically equivalent to 4^m / (3 √π m^{3/2}) (Mattarei 2010). Taking logs and dividing by n on both sides of the above two relations, and using the assumption that m/n → 0 as n → ∞, we can see that condition (b) is satisfied: (1/n) log Δ*_n(L_n) ≤ (1/n) (m log 4 − (3/2) log m − log(3√π)) → 0. We now prove the final condition by bounding μ({x : diam(A_n(x)) > γ}) from above by a sum of two terms. The first term in the parenthesis converges to zero since #/n → 0 by assumption. For ε > 0 and n > 4d, the second term goes to zero by applying the Vapnik-Chervonenkis (VC) theorem to boxes in ℝ^d with VC dimension 2d and shatter coefficient S(B_d, n) ≤ (en/2d)^{2d} (Devroye et al. 1996, Thms. 12.5, 13.3 and p. 220), i.e., the probability that the supremal deviation of the empirical measure from μ over boxes exceeds ε is at most 8 (en/2d)^{2d} e^{−nε²/32}. For any ε > 0 and finite d, the right-hand side of this inequality can be made arbitrarily small for n large enough. This convergence in probability is equivalent to almost sure convergence by the bounded difference inequality. Thus, for any γ, ε > 0, μ({x : diam(A_n(x)) > γ}) → 0 almost surely.

Therefore, condition (c) is satisfied and this completes the proof. ◻
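The counting argument behind condition (b) can be probed numerically: with m growing like √n, the log of the partial sum of the Catalan numbers, divided by n, shrinks toward zero. A small Python check (illustrative only; names are ours):

```python
from math import comb, log, isqrt

def log_catalan_partial_sum(m):
    """log of sum_{k=0}^{m-1} C_k, an upper bound on log Delta_n*(L_n)
    when at most m - 1 splits are allowed; the sum grows like 4^m up to
    polynomial factors."""
    return log(sum(comb(2 * k, k) // (k + 1) for k in range(m)))

# condition (b): if m grows sublinearly in n (here m ~ sqrt(n)), then
# (1/n) log sum_{k<m} C_k -> 0 as n -> infinity
vals = [log_catalan_partial_sum(isqrt(n) + 1) / n for n in (10**2, 10**4, 10**6)]
assert vals[0] > vals[1] > vals[2]
assert vals[2] < 0.01
```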
Let Θ index a set of finitely many density estimates {f_{n,θ} : θ ∈ Θ}, such that ∫ f_{n,θ} = 1 for each θ ∈ Θ. We can index the SRP trees by {s_θ : θ ∈ Θ}, where θ is the sequence of leaf node depths that uniquely identifies the SRP tree, and denote the density estimate corresponding to s_θ by f_{n,s_θ} or simply by f_{n,θ}. Now, consider the asymptotically consistent path taken by the Markov chain of SEBTreeMC. For a fixed sample size n, let {s_θ : θ ∈ Θ} be an ordered subset of the states visited by the Markov chain, with s_θ ≺ s_η if s_η is a refinement of s_θ, i.e., if s_θ is visited before s_η. The goal is to select the optimal estimate from the |Θ| many candidates.
When our candidate set of densities is additive, like the histograms, we can use the hold-out method proposed by Devroye and Lugosi (2001, Sec. 10.1) for minimum distance estimation as follows. Let 0 < ϑ < 1/2. Given n data points, use n − ϑn points as the training set and the remaining ϑn points as the validation set (by ϑn we mean ⌊ϑn⌋). Denote the set of training data by T := {x_1, …, x_{n−ϑn}} and the set of validation data by V := {x_{n−ϑn+1}, …, x_n} = {y_1, …, y_{ϑn}}. For an ordered pair (θ, η) ∈ Θ², with θ ≠ η, the set A_{θ,η} := {x : f_{n−ϑn,θ}(x) > f_{n−ϑn,η}(x)} is known as a Scheffé set. The Yatracos class (Yatracos 1985) is the collection of all such Scheffé sets over Θ: A_Θ := {A_{θ,η} : θ, η ∈ Θ, θ ≠ η}. Let μ_{ϑn} be the empirical measure of the validation set V. Then, the minimum distance estimate or MDE f_{n−ϑn,θ*} is the density estimate f_{n−ϑn,θ} constructed from the training set T with the smallest index θ* that minimizes

Δ_θ := sup_{A ∈ A_Θ} | ∫_A f_{n−ϑn,θ} − μ_{ϑn}(A) |.   (3)

Thus, the MDE f_{n−ϑn,θ*} minimizes the supremal absolute deviation from the held-out empirical measure μ_{ϑn} over the Yatracos class A_Θ.
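The hold-out MDE over a finite candidate set can be sketched directly from this definition. The Python illustration below (our own toy version for 1-D candidates on [0, 1), using a fine grid in place of exact set operations, not the tree-based mrs2 algorithm) builds the Scheffé sets, evaluates the MDE objective for each candidate, and returns the minimizer:

```python
import numpy as np

def mde_select(candidates, validation):
    """Hold-out MDE sketch: candidates are functions x -> density height on
    [0, 1).  For each ordered pair (i, j) the Scheffe set is
    A = {x : f_i(x) > f_j(x)}; the MDE minimizes, over candidates t, the worst
    absolute deviation between candidate t's measure of A and the empirical
    measure of the validation points, over all Scheffe sets A."""
    grid = np.linspace(0.0, 1.0, 2001)[:-1]          # Riemann grid on [0, 1)
    dx = 1.0 / len(grid)
    heights = [np.array([f(x) for x in grid]) for f in candidates]
    v = np.asarray(validation, dtype=float)
    bins = np.minimum((v / dx).astype(int), len(grid) - 1)

    def delta(t):
        worst = 0.0
        for i in range(len(candidates)):
            for j in range(len(candidates)):
                if i == j:
                    continue
                A = heights[i] > heights[j]          # Scheffe set as a grid mask
                mu_f = float(heights[t][A].sum() * dx)   # candidate's measure of A
                mu_n = float(A[bins].mean())             # empirical measure of A
                worst = max(worst, abs(mu_f - mu_n))
        return worst

    return min(range(len(candidates)), key=delta)

f_uniform = lambda x: 1.0
f_skewed = lambda x: 2.0 if x < 0.5 else 0.0
v = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]
assert mde_select([f_uniform, f_skewed], v) == 0     # uniform wins on uniform data
```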
The SRP is adapted for MDE to mutably cache the counts for training and validation data separately: the n − ϑn training data points in T and the ϑn validation data points in V are accessible from any leaf node v of the SRP via pointers to the x_i ∈ T and y_i ∈ V that fell into x_v, respectively. The training data drive the Markov chain SEBTreeMC(s, #, m) to produce a sequence of SRP states s_1, s_2, … that are further selected to build the candidate set of adaptive histogram density estimates {f_{n−ϑn,θ_i} : i ∈ Θ}. For each i ∈ Θ, the validation data are allowed to flow through s_{θ_i} and drop into the leaf boxes of s_{θ_i}. A graphical representation of an SRP with training counter #x_v and validation counter #̃x_v is shown in Fig. 4. Computing the MDE objective Δ_θ in (3) requires the histogram estimate f_{n−ϑn,θ}(x_v) = #x_v / ((n − ϑn) vol(x_v)) and the empirical measure of the validation data μ_{ϑn}(x_v) = #̃x_v / ϑn at any node v. These can be readily obtained from #x_v and #̃x_v.
Our approach to obtaining the MDE f_{n−ϑn,θ*} with optimal SRP s_{θ*} exploits the partition refinement order in {s_θ : θ ∈ Θ}, a subset of states along the path taken by SEBTreeMC. Using nodes imbued with recursively computable statistics for both training and validation data, and a specialized collation according to SRPCollate (Algorithm 3) over SRPs, we compute the objective Δ_θ in (3) using GetDelta (Algorithm 2) via a dynamically grown Yatracos matrix with pointers to all Scheffé sets constituting the Yatracos class according to GetYatracos (Algorithm 4). We briefly outline the core ideas in these three algorithms next [see the Appendix for their pseudocode and mrs2 (Sainudiin et al. 2008–2019) for details]. In the MDE procedure, pairwise comparisons of the heights of the candidate density estimates f_{n−ϑn,θ} and f_{n−ϑn,η} are needed to get the Scheffé sets that make up the Yatracos class. An efficient way to approach this is to collate the SRPs corresponding to the density estimates onto a collator regular paving (CRP), where the space of CRP trees is also S_{0:∞}. Consider now two SRPs s_θ and s_η for which the corresponding histogram estimates f_{n,θ} and f_{n,η} are computed. Both SRPs have the same root box x_ρ. By collating the two SRPs, we get a CRP c with the same root box and the tree obtained from a union of s_θ and s_η. Unlike the union operation over RPs (Harlow et al. 2012, Algorithm 1), each node v of the SRP collator c stores f_{n,θ} and f_{n,η} as a vector f_{n,c}(x_v) := (f_{n,θ}(x_v), f_{n,η}(x_v)). The empirical measure of the validation data μ_{ϑn}(x_v) is also stored at each node v and can be easily accessed via pointers. Figure 5 shows how CRP c can collate two SRPs s_θ and s_η using SRPCollate.
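In one dimension, the collation idea reduces to refining two histograms to the common refinement of their breakpoints and storing the vector of both heights on each refined piece, from which pairwise quantities such as the L_1 distance between candidates fall out immediately. A Python sketch (illustrative names of our own, not SRPCollate itself):

```python
def collate(breaks_a, heights_a, breaks_b, heights_b):
    """Collate two 1-D histograms on a common root interval: refine to the
    union of the two breakpoint sets and store, on each refined piece, the
    vector of both histogram heights."""
    breaks = sorted(set(breaks_a) | set(breaks_b))

    def height_at(bs, hs, x):
        for k in range(len(hs)):
            if bs[k] <= x < bs[k + 1]:
                return hs[k]
        return hs[-1]

    pieces = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        mid = 0.5 * (lo + hi)
        pieces.append(((lo, hi),
                       (height_at(breaks_a, heights_a, mid),
                        height_at(breaks_b, heights_b, mid))))
    return pieces

def l1_distance(pieces):
    """L1 distance between the two collated histograms."""
    return sum((hi - lo) * abs(ha - hb) for (lo, hi), (ha, hb) in pieces)

a = ([0.0, 0.5, 1.0], [1.5, 0.5])            # breaks and heights of histogram a
b = ([0.0, 0.25, 1.0], [2.0, 2.0 / 3.0])     # breaks and heights of histogram b
pieces = collate(*a, *b)
assert [p[0] for p in pieces] == [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0)]
assert abs(l1_distance(pieces) - 5.0 / 12.0) < 1e-12
```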
We now use Theorem 10.1 of Devroye and Lugosi (2001, p. 99) and Theorem 6.6 of Devroye and Lugosi (2001, p. 54) to obtain the L_1-error bound of the minimum distance estimate f_{n−ϑn,θ*}, with θ* ∈ Θ and |Θ| < ∞. The size of Θ is kept small (typically less than 100) and independent of n by an adaptive search. Note that |Θ| is upper-bounded by m if we were to exhaustively consider each SRP state along the entire path of SEBTreeMC in Θ, our set of candidate SRP partitions. Such an exhaustive approach is computationally inefficient, as the Yatracos matrix that updates the Scheffé sets grows quadratically with |Θ|. We take a simple adaptive search approach by considering only k (typically 10 ≤ k ≤ 20) SRP states in each iteration. In the initial iteration, we add k states to Θ by picking uniformly spaced states from a long-enough path that starts from the root node and ends at a state with a large number of leaves and a significantly higher Δ_θ score than its preceding states. Then, we simply zoom in around the states with the lowest Δ_θ values and add another k states along the same path close to such optimal states from the first iteration. We repeat this adaptive search process until we are unable to zoom in further. Typically, we are able to find nearly optimal states within 5 or fewer iterations. By Theorem 1, we know that the histogram partitioning strategy of SEBTreeMC is asymptotically consistent. Thus, the adaptive search set Θ that is selected iteratively from the set of histogram states along the path of SEBTreeMC with optimal Δ_θ values will naturally contain densities that approach f as n increases. However, the rate at which the L_1 distance between the best density in Θ and f approaches 0 will depend on the complexity of f in terms of the number of leaves needed to uniformly approximate f using simple functions with SRP partitions, a class that is dense in C(x_ρ, ℝ), the algebra of real-valued continuous functions over the root box x_ρ, by the Stone-Weierstrass Theorem (Harlow et al. 2012, Theorem 4.1).
This dependence on the structural complexity of f is evaluated next.

Simulations
To evaluate the performance of our MDE, we first choose the unstructured multivariate uniform density. Although the dimension d of the uniform density on [0, 1]^d ranges in {1, 10, 100, 1000}, the true density is given by the root box, the first candidate density in Θ. Based on the mean integrated absolute errors (MIAE) shown in Table 1 for each d and n in {10^2, 10^3, 10^4, 10^5, 10^6, 10^7}, the MDE exhibits dimension-free performance for such a target density that immediately belongs to the set of candidate densities indexed by Θ. The sample mean of the integrated absolute errors was taken over five replicate simulations with standard error less than half of the MIAE values. When the sample size is 10^7 and the dimension is 1000, the data cannot be represented in a machine with 32 GB of memory (as indicated by the '-' entry in Table 1).
We independently verified the inequalities in Eqs. 4 and 7 of Theorems 2 and 3, respectively, by explicitly computing the left and right hand-sides of the inequalities for MDEs and checking that they are indeed satisfied for the simulated datasets in the previous sub-section. This verification is in examples/StatsSubPav/MinimumDistanceEstimation/MDETest of the mrs2 module companions/mrs-1.0-YatracosThis/. These results are shown for multivariate uniform densities in Fig. 6. The bounds are not sharp, although they do decrease with the sample size. This is because they are extremely general by construction and based merely on the cardinality of the set of candidate densities.
To evaluate the performance of our MDE, we also chose two structured multivariate densities in d dimensions for various sample sizes: the spherically symmetric Gaussian with a simple concentrated structure and the highly structured Rosenbrock density [whose expression up to normalization is given in (8)]. The sample standard deviations about the mean integrated absolute errors (MIAEs) for the MDE method, i.e., L_1(f_{n,θ*}, f) (shown in the top panel of Table 2), based on ten trials, are below 10^{-3} and 10^{-4} for values of n in {10^4, 10^5} and {10^6, 10^7}, respectively. Thus, these standard errors are not shown. However, the L_1 distance between the MDE and the best estimate in the candidate set Θ, L_1(f_{n,θ*}, f) − min_{θ∈Θ} L_1(f_{n,θ}, f), is shown in Table 2 for each density and sample size. For comparison, as shown in the bottom panel of Table 2, we used the Bayes estimator from the posterior mean histograms (see Sainudiin et al. 2013 for details on this evaluation). Note how the L_1 errors decrease with the sample size and how the errors are comparable between the methods, although the MDE method is at least an order of magnitude faster than the posterior mean histogram, which, like most density estimators, does not provide universal performance guarantees. Due to the use of space-partitioning regularly paved trees, our MDE histograms cannot provide small L_1 errors for highly structured densities beyond 10 or so dimensions on the basis of sample sizes in the order of millions. The reason is simply the large L_1 distance between f and the best candidate density in Θ based on a reasonable maximal number of splits. However, using modern dimensionality reduction techniques, including auto-encoders, we can often project high-dimensional data nonlinearly to a lower-dimensional space and use the MDE histograms to construct a density estimate and do further statistical processing, as we show below.
All experiments were performed on the same physical machine, which is currently considered commodity hardware (see Sainudiin et al. 2013 for machine specifications and CPU times).

Detecting bot flows using MDE tail probabilities
We apply the MDE histogram to the real-world scenario 11 of the CTU-13 dataset of botnet traffic on a computer network (Garcia et al. 2014). The dataset captured 8164 real botnet traffic flows mixed with 99087 normal and background traffic flows. These flows were augmented into 80 dimensions using Word2Vec embeddings of the flows (datapoints) and reduced to 8 dimensions by training a deep auto-encoder with a bottleneck layer of eight nodes by Ramström (2019), who gives domain-specific details on how the raw data was augmented and fitted to an auto-encoder. For the purpose of our application, it suffices to note that a deep auto-encoder was trained on appropriately augmented normal flows in order to non-linearly reduce the dimensions from 80 to 8. We then used the n = 99087 samples in 8 dimensions to obtain the MDE histogram. For each data point x from the normal as well as the botnet flows, we computed its multivariate tail probability (Harlow et al. 2012, Algorithm 9). Briefly, this is given by 1 minus the sum of the probability mass of all leaf nodes in the MDE histogram whose density ("height") is larger than that of the leaf node whose box contains x. The tail probability can be directly used as a score of how unlikely an event is under the density estimate constructed from the normal flows. We obtain these tail probabilities for all 107251 flows (mixed with 7.6% botnet flows) and sort them by their tail probabilities. Our histogram estimate was able to identify 87% and 99.1% of the botnet flows, i.e., 7115 and 8090 out of 8164 botnet flows were within the lowest 7.6% and 10% of the tail probabilities, respectively.

Table 2 The MIAE for MDE and posterior mean estimates with different sample sizes for the 1D-, 2D- and 5D-Gaussian densities, as well as the 2D- and 5D-Rosenbrock densities
Thus, using the tail probabilities of the MDE histogram estimated from the normal flows was extremely effective in identifying the botnet flows.
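The tail-probability score used above can be sketched from the description alone; the Python illustration below is a toy stand-in for Harlow et al. (2012, Algorithm 9), with our own (box, height) representation of the histogram leaves:

```python
def tail_probability(hist, x):
    """Tail probability of point x under a histogram given as (box, height)
    leaf pairs: one minus the total mass of all leaves whose density exceeds
    that of the leaf containing x."""
    def vol(box):
        v = 1.0
        for lo, hi in box:
            v *= hi - lo
        return v
    def contains(box, pt):
        return all(lo <= xi < hi for (lo, hi), xi in zip(box, pt))
    h_x = next(h for box, h in hist if contains(box, x))
    return 1.0 - sum(h * vol(box) for box, h in hist if h > h_x)

# a toy 1-D histogram: a tall leaf of mass 0.8 and a short leaf of mass 0.2
hist = [([(0.0, 0.5)], 1.6), ([(0.5, 1.0)], 0.4)]
assert abs(tail_probability(hist, (0.25,)) - 1.0) < 1e-12   # most likely region
assert abs(tail_probability(hist, (0.75,)) - 0.2) < 1e-12   # unlikely region
```

Sorting points by this score, as done for the 107251 flows above, then ranks the least likely events first.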

Conclusion and future directions
Thus, using the collator regular paving (CRP), we obtain the minimum distance estimate (MDE) with universal performance guarantees. All the methods are implemented and available in mrs2 (Sainudiin et al. 2008–2019), including the downstream statistical operations for anomaly detection and conditional density regression (Harlow et al. 2012). We limited our MDE to the candidate set given by the SRP histograms visited along the path of the Markov chain SEBTreeMC. This was done to take advantage of the structure of consecutive refinements of the tree partitions along a single path of the chain. However, obtaining the MDE from an arbitrary set of SRP histograms taken from S_{0:∞} will need more sophisticated collators. Initial experiments using the Scheffé tournament approach (as opposed to the MDE) to find the best estimate in a candidate set of arbitrary SRP histograms (not just those along a path in S_{0:∞}) look feasible. Such a Scheffé tournament would allow us to compare estimates from entirely different methodological schools (Bayesian, penalized likelihood, etc.). Finally, the pure tree structure allows one to possibly extend this MDE to a distributed fault-tolerant computational setting, such as that of Zaharia et al. (2016), as the sample size becomes too large for the memory of a single machine.