1 Introduction

k-Means remains among the most widely used techniques in both data mining and machine learning. Besides being employed for analysis, k-means is also still actively researched and developed in its own right. Among recent work, for instance, Stemmer (2021) studied k-means in the differentially private setting, Klochkov et al. (2021) developed a robustified version and report non-asymptotic bounds, Liu (2021) provided improved learning bounds, Ghadiri et al. (2021) proposed a fair k-means, and Cohen-Addad et al. (2021) developed and analyzed an online version. The largely theoretical study that we provide in this work concerns the learning curve behavior of k-means.

Learning curves (or, alternatively, sample complexity curves (Zubek & Plewczynski, 2016)) plot, for a specific learning problem, the average performance of a learner against the size of the training set (Perlich, 2010; Viering & Loog, 2022). At least since the work by Vallet et al. (1989), it is known that such curves can exhibit counterintuitive behavior and show deteriorating performance with increasing training set sizes, i.e., these learners can display so-called nonmonotonic behavior (Loog et al., 2019).

The investigation of such an, arguably, problematic property (Loog & Viering, 2022) has found renewed attention in the wake of the important work by Belkin et al. (2019). They raise various issues concerning classical learning theory in the light of modern-day overparameterized learners. The phenomenon for which this work seems to be cited primarily, however, was essentially described by Vallet et al. (1989) already (see also Loog et al., 2020). Following Belkin et al. (2019), it is nowadays referred to as double descent, also in the learning curve setting: the curve starts off as expected and improves with increasing numbers of training samples, then its performance deteriorates over a range of consecutive training set sizes, after which it improves again with more training data.

There is a gamut of surprising and curious learning curve behaviors beyond double descent. One of the more peculiar ones is the zigzag curve that least absolute deviation regression can display, which has recently been explained by Chen et al. (2023). Loog and Viering (2022) provide a complete overview of the present state of affairs. Our current work is, more specifically, in line with the findings from Loog et al. (2019). There it is shown that learners that rely on empirical risk minimization (ERM), which underlies many present-day learners, can act nonmonotonically no matter the training sample size. Stated differently, with some sense of drama, Loog et al. (2019) show that even if, during the training phase, we optimize for the very loss that we also use during the test phase, test-time performance can still deteriorate in expectation. The original work showed the emergence of such behavior for classification, regression, and density estimation.

1.1 Contributions and outline

In this work, we show, in a precise sense, that k-means clustering is not devoid of such quirks either. Our contribution is limited in the sense that we only manage to prove nonmonotonicity for \(k=2\). Nevertheless, this is significant, firstly, because we can prove at all that something like this happens for k-means and, secondly, because it demonstrates that nonmonotonicity extends to settings beyond classification, regression, and density estimation.

The next section goes through some preliminaries and provides further related work. It primarily puts k-means in the context of empirical risk minimization and introduces the precise notion of monotonicity. Section 3 formulates our two main results: for \(k=1\) the learning curve of k-means always behaves properly, i.e., it improves with more data, but for \(k=2\) its behavior can be problematic. More precisely, we show that no matter the size of the training set, there are clustering problems for which the 2-means performance still becomes worse with even more training data. While the proof for the former result is provided in the same section, all of Sect. 4 revolves around the proof of the second result. Section 5 discusses and concludes our work.

2 Preliminaries and additional related work

We formulate k-means clustering within the framework of empirical risk minimization (ERM) and touch upon some further relevant literature in this context. In addition, we make precise the notion of monotonicity that we are going to employ. Regarding the latter, we largely follow the notation and definitions as proposed in (Viering et al., 2019; Loog et al., 2019).

2.1 Empirical risk minimization and k-means

Let \(S_N = \left( x_1, \ldots , x_N\right) \in \mathop {\mathrm {\mathcal {X}}}\limits ^N\) be a training set of size N. This is, in our k-means setting, an i.i.d. sample from a distribution D over the standard d-dimensional feature space \(\mathop {\mathrm {\mathcal {X}}}\limits = \mathop {\mathrm {\mathbb {R}}}\limits ^d\). In addition, we have as hypothesis class the set of sets of k means

$$\begin{aligned} {\mathop {\mathrm {\mathcal {M}}}\limits }_k = \left\{ m=\{m_1,\ldots ,m_k\} \big \vert m_i \in {\mathop {\mathrm {\mathbb {R}}}\limits }^d, \forall i\in \{1,\ldots ,k\} \right\} . \end{aligned}$$
(1)

Note that, as every \(m \in \mathop {\mathrm {\mathcal {M}}}\limits _k\) is an actual set, its cardinality \(\vert m \vert\) is smaller than k in case \(m_i = m_j\) for one or more pairs \(i \ne j\). That is, redundant means are discarded.

The particular loss function that we consider in our case could be termed the cluster-wise or group-wise squared loss. We, however, go with the term within-group squared (WGS) loss, inspired by the within-group sum of squares that Hartigan (1978) considers in the context of k-means:

$$\begin{aligned} \begin{aligned} \ell _{\textrm{WGS}} :&\mathop {\mathrm {\mathcal {X}}}\limits \times {\mathop {\mathrm {\mathcal {M}}}\limits }_k \rightarrow \mathop {\mathrm {\mathbb {R}}}\limits \\&(x,m) \mapsto \min _{i\in \{1,\ldots ,\vert m \vert \}} \Vert x-m_i \Vert ^2, \end{aligned} \end{aligned}$$
(2)

The ultimate objective is to minimize the expected loss, i.e., the risk:

$$\begin{aligned} R_{D}(m) := \mathop {\mathrm {\mathbb {E}}}\limits _{x \sim D} \ell _{\textrm{WGS}}(x,m) . \end{aligned}$$
(3)

As we do not know the actual underlying distribution D, the principle of ERM (Vapnik, 1982) suggests that the learner rely on the empirical distribution, defined by the training sample of size N, and consider the loss on this distribution:

$$\begin{aligned} R_{S_N}(m) := \frac{1}{N} \sum _{j=1}^N \ell _{\textrm{WGS}}(x_j,m) = \frac{1}{N} \sum _{j=1}^N \min _{i\in \{1,\ldots , \vert m \vert \}} \Vert x_j-m_i \Vert ^2. \end{aligned}$$
(4)

Up to the normalization by N, this empirical risk is Hartigan (1978)’s within-group sum of squares.
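To make Eqs. (2) and (4) concrete, the following minimal Python sketch (ours, not part of the original formulation) computes the WGS loss of a single point and the empirical risk of a sample; the names wgs_loss and empirical_risk are illustrative choices.

```python
import numpy as np

def wgs_loss(x, means):
    """Within-group squared loss of Eq. (2): squared Euclidean distance
    from x to the closest mean in the hypothesis."""
    return min(float(np.sum((x - m) ** 2)) for m in means)

def empirical_risk(X, means):
    """Empirical WGS risk of Eq. (4): the average loss over the sample X."""
    return float(np.mean([wgs_loss(x, means) for x in X]))

# Toy one-dimensional sample and a hypothesis consisting of two means.
X = np.array([[-1.0], [0.0], [0.2], [1.1]])
print(empirical_risk(X, [np.array([-1.0]), np.array([0.5])]))  # 0.175
```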

We can now define the following:

Definition 1

(k-means clustering) k-means clustering is the learner \(A_k\) that maps from the set of all samples \(\mathop {\mathrm {\mathcal {S}}}\limits := \bigcup _{i=1}^{\infty } \mathop {\mathrm {\mathcal {X}}}\limits ^i\) to the hypothesis class \({\mathop {\mathrm {\mathcal {M}}}\limits }_k\), i.e., \(A_k:\mathop {\mathrm {\mathcal {S}}}\limits \rightarrow {\mathop {\mathrm {\mathcal {M}}}\limits }_k\), according to the optimality assignment

$$\begin{aligned} \begin{aligned} \mathop {\mathrm {\mathcal {A}}}\limits (S_N) :=&\mathop {\textrm{argmin}}\limits _{m\in {\mathop {\mathrm {\mathcal {M}}}\limits }_k} R_{S_N}(m),\\ A_k(S_N) :=&\mathop {\textrm{argmin}}\limits _{m\in \mathop {\mathrm {\mathcal {A}}}\limits (S_N)} \vert m \vert . \end{aligned} \end{aligned}$$
(5)

The first minimizer in our definition picks out all mean sets that minimize the empirical risk. \(\mathop {\mathrm {\mathcal {A}}}\limits (S_N)\) can indeed be a set of solutions, in particular when \(N<k\). The second minimizer then makes sure that we obtain a solution set of minimal cardinality.

The ERM view on k-means that we consider is equivalent to the formulation that Pollard (1981) provides. Different formulations are possible, for instance, where one does not consider the squared distance to the closest mean, but where one looks for a partitioning of the space into optimal regions. The latter can be found, for example, in the works by Dalenius (1950), MacQueen (1967), and Ben-David et al. (2006). The former “center-based” formulation can also be found in Rakhlin (2005), Rakhlin and Caponnetto (2006), and Ben-David (2004) (under the name vector quantization problem). Buhmann (1998) provides a slightly different ERM setting and normalizes the influence of every cluster with its size. Bock (2007) relates different center-based and partitioning-based approaches.

2.2 Idealized and practical k-means

Note that the specific k-means that we consider is, in some sense, idealized, because we actually assume that it minimizes the empirical risk globally, which is known to be an NP-hard problem (Dasgupta, 2008; Aloise et al., 2009).

For this reason, in practice, one often needs to resort to suboptimal approaches when carrying out the optimization in Eq. (5). This is where the well-known alternating optimization method comes in: assign points to means, then update the means, and repeat this process (Steinhaus, 1956; Jancey, 1966; Bock, 2007). Often, k-means is actually identified with exactly this algorithm, an identification we expressly avoid in this paper.
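For illustration only, here is a compact NumPy sketch of this alternating scheme (all names are ours); note that, unlike the idealized learner of Definition 1, it is a heuristic and need not return a global minimizer of Eq. (4).

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Alternating optimization for k-means: assign every point to its closest
    mean, recompute each mean as the centroid of its cluster, and repeat."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=min(k, len(X)), replace=False)]
    for _ in range(n_iter):
        # assignment step: index of the closest mean for every point
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: move every mean to the centroid of its cluster
        new_means = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                              else means[i] for i in range(len(means))])
        if np.allclose(new_means, means):
            break
        means = new_means
    return means
```

The k-means++ method discussed next refines the initialization of the means; the subsequent alternating iterations are typically the same.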

The successful k-means++ algorithm (Arthur & Vassilvitskii, 2007) has been shown to provide a reasonable solution to the k-means problem, despite it being NP-hard. More specifically, it is guaranteed to achieve, in polynomial time, a WGS risk that is, in expectation, no worse than \(8(\ln k + 2)\) times the optimal one. It should be noted that k-means++ considers the WGS risk on the training set, while we are primarily interested in the expected loss of k-means on the full problem distribution.

2.3 Monotonic or smart?

We note that more than 20 years before Loog et al. (2019), Devroye et al. (1996) already suggested the notion of smart rules, which refers to classifiers that show nonincreasing expected error rates with increasing training set sizes. The terms smart and smartness may be better choices than the terms monotonic and monotonicity. The latter term, as it is used originally, does of course not distinguish between increasing and decreasing curves. What we are after, in particular, is a nonincreasing curve in terms of the expected risk. As we evaluate in terms of the same loss as the one that is being optimized during training, this risk is not the error rate in our setting, but equals the expected within-group squared loss from Eq. (3).

2.4 Monotonicity

The behavior we are interested in is that we get better, or at least not worse, test performance when having more data to train on. In particular, we want k-means to not perform worse with increasing N in terms of the expected within-group squared loss as given by Eq. (3). As adding a single very bad sample can always ruin performance, it is reasonable to merely ask for such performance non-deterioration in expectation, i.e., over all possible samples \(S_N\) and \(S_{N+1}\) of size N and \(N+1\), respectively. A basic initial definition is therefore the following.

Definition 2

(local monotonicity) \(A_k\) is \((D,\ell _\textrm{WGS},N)\)-monotonic with respect to a distribution D and an integer \(N \in \mathop {\mathrm {\mathbb {N}}}\limits := \{ 1,2,\ldots \}\) if

$$\begin{aligned} \Delta _N^{N+1} := \mathop {\mathrm {\mathbb {E}}}\limits _{S_{N}\sim D^{N}} [R_{D}(A_k(S_N)) ] - \mathop {\mathrm {\mathbb {E}}}\limits _{S_{N+1}\sim D^{N+1}} [ R_{D}(A_k(S_{N+1}))] \ge 0. \end{aligned}$$
(6)
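For concreteness, the difference \(\Delta _N^{N+1}\) can be estimated by straightforward Monte Carlo simulation. The sketch below (ours, purely illustrative) does so for 1-means on a standard-normal distribution in one dimension, where the ERM solution is simply the sample mean and the true risk is available in closed form.

```python
import numpy as np

def estimate_delta(N, n_rep=200_000, seed=0):
    """Monte Carlo estimate of Delta_N^{N+1} from Eq. (6) for 1-means with D a
    standard normal: A_1(S_N) is the sample mean m_N and R_D(m) = 1 + m^2."""
    rng = np.random.default_rng(seed)
    risk = lambda m: 1.0 + m ** 2
    m_N = rng.standard_normal((n_rep, N)).mean(axis=1)
    m_N1 = rng.standard_normal((n_rep, N + 1)).mean(axis=1)
    return risk(m_N).mean() - risk(m_N1).mean()

print(estimate_delta(5))  # close to 1/5 - 1/6, i.e., nonnegative for this D
```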

The two entities we would like to get rid of in Definition 2 are N and D. The former, because we would like our learner to act monotonically irrespective of the sample size. The latter, because we typically do not know the underlying distribution. What we do know, however, is in which domain we are operating, which is \(\mathbb{R}^{d}\) for our k-means. Therefore, employing the difference \(\Delta _N^{N+1}\) as defined in Eq. (6), the following is appropriate.

Definition 3

(local \(\mathbb {R} ^d\)-monotonicity) \(A_k\) is (locally) \(({{\mathbb {R}}}^d,\ell _\textrm{WGS},N)\)-monotonic with respect to an integer \(N \in \mathop {\mathrm {\mathbb {N}}}\limits\) if, for all distributions D on \(\mathbb{R}^{d}\) for which \(\Delta _N^{N+1}\) exists, \(\Delta _N^{N+1} \ge 0\).

Note that the above definition is a refinement of the original from (Viering et al., 2019; Loog et al., 2019). In particular, we added the statement on the existence of the difference in expected risks to make sure that the learner can still be \(\mathbb{R}^{d}\)-monotonic even if, for some distributions, the necessary integrals do not exist or the difference itself is ill-defined. Such issues arise, for instance, when both expectations in the difference evaluate to infinity and \(\Delta _N^{N+1}\) would become \(\infty - \infty\).

The double-descent phenomenon that we covered earlier shows that learning curves can have some difficulty being monotonic from the start. The next best thing to hope for is that a learner becomes monotonic after some sample size, which leads to a weak form of monotonicity.

Definition 4

(weak \(\mathbb{R}^{d}\)-monotonicity) \(A_k\) is weakly \(({ {\mathbb {R}}}^d,\ell _\textrm{WGS},n)\)-monotonic if there is an integer \(n \in \mathop {\mathrm {\mathbb {N}}}\limits\) such that for all \(N \ge n\), the learner is locally \(({{\mathbb {R}}} ^d,\ell _\textrm{WGS},N)\)-monotonic.

If the n from Definition 4 can be set to 1, the learner is called globally \(\mathbb{R}^{d}\)-monotonic.

Definition 5

(global \(\mathbb{R}^{d}\)-monotonicity) \(A_k\) is globally \(({{\mathbb {R}}} ^d,\ell _\textrm{WGS})\)-monotonic if for every integer \(N \in \mathop {\mathrm {\mathbb {N}}}\limits\), the learner is locally \(({\mathbb {R}} ^d,\ell _\textrm{WGS},N)\)-monotonic.

2.5 Learning curves and bounds

Meek et al. (2002) were possibly the first to consider learning curves for clustering techniques in an application setting. In particular, they studied mixtures of Gaussians, but the notion transfers readily to the setting of k-means, as seen in what follows. Interestingly, apart from Meek et al. (2002), there seems to be little additional learning curve work for clustering.

There is more work available on theoretical bounds for k-means. These report non-asymptotic bounds on the excess risk, or excess distortion as it is also called in this setting. Some of the more recent works are (Levrard, 2015; Maurer, 2016; Chichignoud & Loustau, 2014; Levrard, 2013; Biau et al., 2008). The earlier-mentioned work by Klochkov et al. (2021) is one of the more recent additions; it studies a robust version of k-means. Their bounds show the typical 1/N or \(\sqrt{1/N}\) power-law behavior in terms of the training sample size N. Clearly, as these are bounds, they do not necessarily say a lot about the actual (local) behavior of the learning curve.

3 Main theoretical results

Having laid the groundwork in the previous section, we can now come to our main results.

Theorem 1

\(A_\textrm{1}\) is globally \((\mathop {\mathrm {\mathbb {R}}}\limits ^d,\ell _\textrm{WGS})\)-monotonic.

The proof follows in Sect. 3.1. It is fairly uninvolved—certainly compared to the proof of the next theorem. Nevertheless, the result is there both for completeness and to contrast it with what happens when \(k=2\), in which case the behavior changes notably.

Theorem 2

For \(A_\textrm{2}\) in combination with any integer \(N \ge 14\), there exists a distribution D for which \(\Delta _N^{N+1} < 0\) and, therefore, \(A_\textrm{2}\) is not weakly \(({{\mathbb {R}}}^d,\ell _\textrm{WGS},n)\)-monotonic for any \(n \in \mathop {\mathrm {\mathbb {N}}}\limits\).

The proof of this result is relatively involved and could be considered a bit cumbersome. We therefore dedicate a separate section to it, which is Sect. 4.

3.1 Proof of Theorem 1

Proof

It is easy to check that the empirical risk minimizer \(A_1(S_N)\) is the sample mean \(m_N := \tfrac{1}{N} \sum _{i=1}^N x_i\), as we are dealing with only one cluster.

Let \(\mathop {\mathrm {\mathbb {E}}}\limits\) now denote the expectation both over the training sample \(S_{N}\sim D^{N}\) and over the test sample \(x \sim D\), where this latter expectation comes from the risk \(R_D\) expressed by Eq. (3). With \(\mu\) the true mean of the distribution D, we can write the following for the expected within-cluster loss for sample size N:

$$\begin{aligned} \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-m_N \Vert ^2 \right] = \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu +\mu -m_N \Vert ^2 \right] = \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right] + \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert m_N-\mu \Vert ^2 \right] . \end{aligned}$$
(7)

Only the last term matters as it is the only part depending on the training set size. We have

$$\begin{aligned} \begin{aligned}&\mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert m_N-\mu \Vert ^2 \right] = \mathop {\mathrm {\mathbb {E}}}\limits \left[ \left\| \tfrac{1}{N} \sum _{i=1}^N x_i-\mu \right\| ^2 \right] = \mathop {\mathrm {\mathbb {E}}}\limits \left[ \left\| \tfrac{1}{N} \sum _{i=1}^N x_i \right\| ^2 \right] - \Vert \mu \Vert ^2 \\ =&\mathop {\mathrm {\mathbb {E}}}\limits \left[ \tfrac{1}{N^2} \sum _{i=1}^N \Vert x_i \Vert ^2 + \tfrac{1}{N^2} \sum _{j\ne k} x_j^T x_k \right] - \Vert \mu \Vert ^2 \\ =&\mathop {\mathrm {\mathbb {E}}}\limits \left[ \tfrac{N}{N^2} \Vert x\Vert ^2 + \tfrac{N^2-N}{N^2} \Vert \mu \Vert ^2 \right] - \Vert \mu \Vert ^2 = \mathop {\mathrm {\mathbb {E}}}\limits \left[ \tfrac{1}{N} \Vert x\Vert ^2 \right] - \tfrac{1}{N} \Vert \mu \Vert ^2 \\ =&\tfrac{1}{N} \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right] . \end{aligned} \end{aligned}$$
(8)

Now, for \(\Delta _N^{N+1}\) (as defined in Eq. (6)) to exist, \(\mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right]\) needs to be finite. In that case, as \(\mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right]\) is nonnegative, we have that

$$\begin{aligned} \Delta _N^{N+1} = \tfrac{1}{N} \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right] - \tfrac{1}{N+1} \mathop {\mathrm {\mathbb {E}}}\limits \left[ \Vert x-\mu \Vert ^2 \right] \ge 0 \end{aligned}$$
(9)

for any N and so \(A_1\) is globally monotonic. \(\square\)

4 Proof of Theorem 2

At a high level, the proof is fairly straightforward. We explicitly construct a class of distributions that has two free parameters and we show that by having those parameters depend on N in the right way, we can always make the learning curve go up when going from a sample of size N to one of size \(N+1\) on that same distribution, i.e., \(\Delta _N^{N+1} < 0\). As we can construct such a distribution for any \(N \ge 14\) (so the distribution is allowed to be different for every \((N,N+1)\)-pair), \(A_2\) is not locally \(\mathbb{R}^{d}\)-monotonic for any \(N \ge 14\). As such, \(A_2\) is also not weakly monotonic for any \(n \in \mathop {\mathrm {\mathbb {N}}}\limits\), as there is always an \(N \ge n\) and a corresponding distribution for which \(\Delta _N^{N+1} < 0\).

The distributions that we construct for this consist simply of three point masses in one-dimensional feature space \(\mathop {\mathrm {\mathbb {R}}}\limits\). Because we can always embed this one-dimensional problem in \(\mathbb{R}^{d}\) for any \(d \in \mathop {\mathrm {\mathbb {N}}}\limits\), we can limit ourselves to this specific problem in our proof.

The complication in proving Theorem 2 stems mainly from the fact that we cannot explicitly evaluate the difference \(\Delta _N^{N+1}\) between two consecutive points on the learning curve in full. We therefore demonstrate that the curve goes up by upper-bounding this difference by a quantity that is strictly negative.

We start the preparations for the proof in the next subsection where we introduce our parameterized three-point problem. In Sect. 4.2, we then formulate and prove six lemmas, which give us different handles on parts of the behavior of the true expected risks. Sect. 4.3 brings it all together and finalizes the proof of Theorem 2.

4.1 A parameterized three-point problem and its risk

Definition 6

(three-point problem) This clustering problem, which depends on two parameters c and p, both in the open interval (0, 1), considers three locations in one-dimensional Euclidean space with associated probability mass function P. The specific points are A at \(-1\), B at the origin, and C at c, while the associated probability masses are \(P(A) = p\) and \(P(B)=P(C)=\tfrac{1}{2}(1-p)\).

In addition, let us now introduce the following notation and definitions. Firstly, let the number of training samples from A, B, and C equal i, j, and k, respectively. Secondly, let \(\ell _X(i,j,k)\) equal the true loss incurred at point \(X \in \{A,B,C\}\). It is important to note that these three losses are, of course, dependent on the precise counts i, j, and k, as those determine the hypothesis for that training set. We denote this hypothesis from \({\mathop {\mathrm {\mathcal {M}}}\limits }_2\) by m(i, j, k). A further definition we use is

$$\begin{aligned} R(i,j,k) := p \ell _A(i,j,k) + \tfrac{1}{2}(1-p)\left( \ell _B(i,j,k)+\ell _C(i,j,k)\right) , \end{aligned}$$
(10)

which denotes the true risk given the counts (i, j, k) for the three points A, B, and C.

Finally, for a training set of size N, the expected risk for the three-point clustering problem, which we simply denote by E(N) from now on, is equal to

$$\begin{aligned} E(N) := \sum _{i=0}^N \sum _{j=0}^{N-i} \frac{N!}{i!j!(N-i-j)!} p^{i} \left( \frac{1-p}{2}\right) ^{N-i} R(i,j,N-i-j), \end{aligned}$$
(11)

where the count for point C equals \(k = N-i-j\).
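To make the three-point problem and Eq. (11) concrete, the Python sketch below (ours; the helper names and the parameter choices at the end are illustrative) computes E(N) exactly by enumerating all count vectors (i, j, k) and solving the corresponding 2-means problems on the line; the parameter choices anticipate the values \(c=\frac{2}{N}\) and \(p=\frac{1}{4N^2}\) used in Sect. 4.3.

```python
from math import comb

def best_two_means(locs, counts):
    """Idealized 2-means for weighted points on the real line.  Optimal clusters
    are contiguous in one dimension, so it suffices to try every split of the
    sorted observed locations; with at most two distinct observed locations the
    means coincide with those locations (cf. Definition 1)."""
    obs = sorted((x, n) for x, n in zip(locs, counts) if n > 0)
    if len(obs) <= 2:
        return [x for x, _ in obs]
    best, best_risk = None, float("inf")
    for s in range(1, len(obs)):
        means, risk = [], 0.0
        for group in (obs[:s], obs[s:]):
            tot = sum(n for _, n in group)
            mu = sum(n * x for x, n in group) / tot
            risk += sum(n * (x - mu) ** 2 for x, n in group)
            means.append(mu)
        if risk < best_risk:
            best, best_risk = means, risk
    return best

def expected_risk(N, c, p):
    """E(N) of Eq. (11): enumerate all count vectors (i, j, k) for a sample of
    size N from the three-point problem and weight the true risk accordingly."""
    locs = (-1.0, 0.0, c)
    masses = (p, (1 - p) / 2, (1 - p) / 2)
    total = 0.0
    for i in range(N + 1):
        for j in range(N - i + 1):
            k = N - i - j
            means = best_two_means(locs, (i, j, k))
            true_risk = sum(q * min((x - m) ** 2 for m in means)
                            for x, q in zip(locs, masses))
            total += comb(N, i) * comb(N - i, j) * p**i * ((1 - p) / 2)**(N - i) * true_risk
    return total

N = 14
c, p = 2 / N, 1 / (4 * N**2)
print(expected_risk(N, c, p) - expected_risk(N + 1, c, p))  # should come out negative
```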

4.2 Six preparatory lemmas

The six lemmas presented in this section provide four different types of results. To start with, Lemma 1 describes a specific situation in which a minimizer for our three-point problem can be identified easily and uniquely. Lemmas 2 and 3 provide simplifications of some of the expressions involving binomials and risks that we will encounter. Lemmas 4 and 5 provide bounds for some of the expressions that appear in the proof of Theorem 2 when considering the difference in expected risk \(\Delta _N^{N+1}\) at training sample sizes N and \(N+1\) under the three-point distribution parameterized by the same c and p. Our final lemma shows a specific one-dimensional function to be negative beyond a certain point. This is merely a technical result that will ultimately be used to lower-bound the increase in the risk.

Lemma 1

With \(i \ge 1\) and \(j+k \ge 1\),

$$\begin{aligned} m(i,j,k)=\left\{ -1,c \frac{k}{j+k}\right\} \end{aligned}$$
(12)

is the unique minimizer of the empirical risk for the three-point problem if and only if

$$\begin{aligned} c^2 k (i+j) < i(j+k). \end{aligned}$$
(13)

Proof

First observe that using two means rather than one will always improve the empirical risk in the setting considered. Secondly, assigning points observed at the same location to two different means can never reduce the minimum risk. Thirdly, associating one mean with the middle location B and the other with the observations on both sides, at A and C, cannot be optimal. All in all, the optimal solution should either associate one mean with A and the other with B and C, or one with A and B and the other with C. The mean associated with a single location should, of course, be exactly that location. The other mean, in order to minimize the squared loss, is the weighted average of the two observed locations. As a result, either the hypothesis as specified in Eq. (12) is the optimizer or \(\left\{ -\tfrac{i}{i+j},c\right\}\) is.

Now, with \(N=i+j+k\), the respective empirical risks for these two hypotheses are

$$\begin{aligned} \frac{j}{N}\left( c \frac{k}{j+k}\right) ^2+ \frac{k}{N} \left( c - c \frac{k}{j+k}\right) ^2 = \frac{c^2 jk}{N(j+k)} \end{aligned}$$
(14)

and

$$\begin{aligned} \frac{i}{N}\left( -1+\frac{i}{i+j}\right) ^2 + \frac{j}{N}\left( \frac{i}{i+j}\right) ^2 = \frac{ij}{N(i+j)}. \end{aligned}$$
(15)

One therefore uniquely chooses the first hypothesis in case the value in Eq. (14) is (strictly) smaller than that in Eq. (15), as these are the only two hypotheses that we need to consider. It is easy to see that this holds if and only if \(c^2 k (i+j) < i(j+k)\). \(\square\)
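As a quick, purely numerical sanity check of this characterization (not needed for the proof), one can compare the empirical risks of Eqs. (14) and (15) for random counts; the case \(j=0\), in which the two candidate hypotheses coincide, is skipped.

```python
import random

def check_lemma1(trials=100_000, seed=0):
    """Randomized check: with j >= 1, the hypothesis {-1, c*k/(j+k)} attains the
    strictly smaller empirical risk exactly when c^2 * k * (i + j) < i * (j + k)."""
    rng = random.Random(seed)
    for _ in range(trials):
        i, j, k = rng.randint(1, 20), rng.randint(1, 20), rng.randint(0, 20)
        c = rng.uniform(1e-6, 1 - 1e-6)
        N = i + j + k
        risk_first = c**2 * j * k / (N * (j + k))   # Eq. (14)
        risk_second = i * j / (N * (i + j))         # Eq. (15)
        assert (risk_first < risk_second) == (c**2 * k * (i + j) < i * (j + k))
    return True
```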

Lemma 2

For the three-point problem from Definition 6 and any \(N\ge 1\),

$$\begin{aligned} \sum _{j=0}^{N} \frac{N!}{j!(N-j)!} \left( \frac{1-p}{2}\right) ^{N} R(0,j,N-j) = \left( \frac{c(c+2p)}{2^N}+p\right) (1-p)^N, \end{aligned}$$
(16)

which is the contribution to the expected risk of all training sets that do not contain samples at A.

Proof

When \(j=0\) or \(j=N\), the hypotheses minimizing the empirical risk are in fact single means at c and 0, respectively. In these cases, we have

$$\begin{aligned} R(0,N,0)&=p + \tfrac{1}{2}(1-p)c^2\end{aligned}$$
(17)
$$\begin{aligned} R(0,0,N)&= p(1+c)^2 + \tfrac{1}{2}(1-p)c^2. \end{aligned}$$
(18)

For \(1 \le j \le N-1\), both B and C are in the training set and the minimizer equals \(m(0,j,k) = \left\{ 0,c\right\}\). The associated true risk is therefore \(R(0,j,k) = p\). Working out the summation now leads to

$$\begin{aligned} \begin{aligned} \sum _{j=0}^{N} \frac{N!}{j!(N-j)!} \left( \frac{1-p}{2}\right) ^{N} R(0,j,N-j)&= \\ \left( \frac{1-p}{2}\right) ^{N} \left( p + \tfrac{1}{2}(1-p)c^2 + p(1+c)^2 + \tfrac{1}{2}(1-p)c^2 + \sum _{j=1}^{N-1} \frac{N!}{j!(N-j)!} p \right)&= \\ \left( \frac{1-p}{2}\right) ^{N} \left( p + p(1+c)^2 + (1-p)c^2 + (2^N-2)p \right)&=\\ \left( \frac{c(c+2p)}{2^N}+p\right) (1-p)^N&. \end{aligned} \end{aligned}$$
(19)

\(\square \)

Lemma 3

Consider the three-point problem from Definition 6 and assume \(c^2 < \frac{4(N-1)}{N^2}\). If \(N\ge 2\) and \(1 \le i \le N-1\), then

$$\begin{aligned} \begin{aligned} \sum _{j=0}^{N-i} \frac{N!}{i!j!(N-i-j)!} p^{i} \left( \frac{1-p}{2}\right) ^{N-i} R(i,j,N-i-j)&= \\ \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)}&. \end{aligned} \end{aligned}$$
(20)

Proof

According to Lemma 1, with \(k=N-i-j\), the hypothesis from Eq. (12) is the unique optimum if \(c^2(N-i-j)(i+j) < i(N-i)\). The left-hand side is quadratic in i and j and takes its maximum for any \(i+j = \frac{N}{2}\). The right-hand side is quadratic in i and takes its minimum, over the range considered, at \(i = 1\) and \(i = N-1\). Therefore, under the assumption that \(c^2 < \frac{4(N-1)}{N^2}\), we indeed have that

$$\begin{aligned} c^2(N-i-j)(i+j) \le c^2 \frac{N}{2} \frac{N}{2} < (N-1) \le i(N-i) . \end{aligned}$$
(21)

Now, the optimal hypothesis \(\left\{ -1,c\tfrac{k}{j+k}\right\}\) has corresponding true risk

$$\begin{aligned} R(i,j,k) = \tfrac{1}{2}(1-p)\left( \left( c\frac{k}{j+k}\right) ^2+\left( c-c\frac{k}{j+k}\right) ^2\right) = \tfrac{1}{2}c^2(1-p)\frac{j^2+k^2}{(j+k)^2} \end{aligned}$$
(22)

and we find the desired identity by working out the sum on the left-hand side of Eq. (20).

$$\begin{aligned} \begin{aligned} \sum _{j=0}^{N-i} \frac{N!}{i!j!(N-i-j)!} p^{i} \left( \frac{1-p}{2}\right) ^{N-i} R(i,j,N-i-j)&= \\ \sum _{j=0}^{N-i} \frac{N!}{i!j!(N-i-j)!} p^{i} \left( \frac{1-p}{2}\right) ^{N-i} \tfrac{1}{2}c^2(1-p) \frac{j^2+(N-i-j)^2}{(N-i)^2}&= \\ \frac{N!}{i!(N-i)!}\frac{c^2 p^{i}(1-p)^{N-i+1}}{2^{N-i+1}(N-i)^2}&\times \\ \sum _{j=0}^{N-i} \frac{(N-i)!}{j!(N-i-j)!} \left( 2 j^2 - 2(N-i)j + (N-i)^2\right)&= (\star ) \end{aligned} \end{aligned}$$
(23)

The next equality is established with the use of the identities \(\sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) = 2^n\), \(\sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) k= n 2^{n-1}\), and \(\sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) k^2 = (n^2+n)2^{n-2}\). All of these can be readily derived from the binomial theorem: the first sum is standard, the other two identities are reported in Gould (2010) as Eqs. (1.70) and (1.67), respectively.

$$\begin{aligned} \begin{aligned} (\star ) = \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2 p^{i}(1-p)^{N-i+1}}{2^{N-i+1}(N-i)^2}&\times \\ \left( ((N-i)^2+N-i)2^{N-i-1} - (N-i)^2 2^{N-i} + (N-i)^2 2^{N-i} \right)&\\ = \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)p^{i}(1-p)^{N-i+1}}{4(N-i)}&. \end{aligned} \end{aligned}$$
(24)

\(\square \)

Lemma 4

Given \(N\ge 2\), if \(p<\frac{1}{N+7}\), then

$$\begin{aligned} \begin{aligned} \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)}&+ \\ -\left( {\begin{array}{c}N+1\\ i\end{array}}\right) \frac{c^2(N-i+2)(1-p)^{N-i+2}p^i}{4(N-i+1)}&\le \\ \left( 1- \frac{N+7}{N+6}(1-p)\right) \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)}&< 0 \end{aligned} \end{aligned}$$
(25)

for all \(1 \le i \le N-1\).

Proof

To start with, observe that the term \(1- \frac{N+7}{N+6}(1-p)\) equals 0 when p takes on its supremum \(\frac{1}{N+7}\). As soon as p becomes smaller than \(\frac{1}{N+7}\), the term becomes strictly negative and we have the second, strict inequality.

To show the first inequality, note initially that we can rewrite it into the equivalent requirement that

$$\begin{aligned} \begin{aligned} \frac{\left( {\begin{array}{c}N+1\\ i\end{array}}\right) \frac{c^2(N-i+2)(1-p)^{N-i+2}p^i}{4(N-i+1)}}{\left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)}} = \frac{(N-i+2)(N+1)(N-i)(1-p)}{(N-i+1)^3}&\ge \\ \frac{N+7}{N+6}(1-p)&. \end{aligned} \end{aligned}$$
(26)

Demonstrating this for \(i=1\) comes down to showing that

$$\begin{aligned} \frac{(N+1)^2(N-1)}{N^3} \ge \frac{N+7}{N+6}. \end{aligned}$$
(27)

To see that this holds, multiply left and right by \(N^3(N+6)\) and reorganize terms to arrive at \(5N^2-7N-6 \ge 0\). Equality is attained for \(N=-\frac{3}{5}\) and \(N=2\), and so the inequality indeed holds for all \(N \ge 2\), as we are dealing with a convex quadratic function.

To show now that the inequality holds for any \(1 \le i \le N-1\), consider the derivative with respect to i:

$$\begin{aligned} \frac{d}{di} \frac{(N-i+2)(N+1)(N-i)}{(N-i+1)^3} = \frac{(N+1)(i^2 - 2(N + 1)i + N^2 + 2N - 2)}{(N-i+1)^4}. \end{aligned}$$

This only becomes zero when \(i^2 - 2(N + 1)i + N^2 + 2N - 2 = 0\), which only happens when \(i = N+1\pm \sqrt{3}\). Both these solutions are larger than \(N-1\). Therefore, on the whole interval \([1,N-1]\), the derivative is positive, \(\frac{(N-i+2)(N+1)(N-i)}{(N-i+1)^3}\) is strictly increasing over that same domain, and we have that

$$\begin{aligned} \frac{(N-i+2)(N+1)(N-i)}{(N-i+1)^3} \ge \frac{(N+1)^2(N-1)}{N^3} \ge \frac{N+7}{N+6} \end{aligned}$$
(28)

for all \(i \in [1,N-1]\) in general and for \(1 \le i \le N-1\) in particular. \(\square\)

Lemma 5

For the three-point problem from Definition 6, any \(N\ge 3\), and any \(p \le \frac{1}{5}\):

$$\begin{aligned} \begin{aligned} p^N R(N,0,0) +&\left( \frac{c(c+2p)}{2^N}+p\right) (1-p)^N \\ - p^{N+1} R(N+1,0,0) -&\left( \frac{c(c+2p)}{2^{N+1}}+p\right) (1-p)^{N+1} \\ \le&\left( \frac{c(c+2p)}{2^N}+2p^2\right) (1-p)^N . \end{aligned} \end{aligned}$$
(29)

Proof

The term R(N, 0, 0) gives the risk in case the training set only contains samples from location A, which means we have one mean at \(-1\) and therefore

$$\begin{aligned} R(N,0,0) = \tfrac{1}{2}(1-p)\left( 1+(1+c)^2\right) . \end{aligned}$$
(30)

Using this term together with Eq. (16) from Lemma 2, we can rewrite the part of Eq. (29) on the left-hand side of the inequality, as

$$\begin{aligned} \begin{aligned} p^N \tfrac{1}{2}(1-p)\left( 1+(1+c)^2\right) + \left( \frac{c(c+2p)}{2^N}+p\right) (1-p)^N&\\ - p^{N+1} \tfrac{1}{2}(1-p)\left( 1+(1+c)^2\right) - \left( \frac{c(c+2p)}{2^{N+1}}+p\right) (1-p)^{N+1}&= \\ \frac{c(p + 1)(c + 2p)(1 - p)^N}{2^{N+1}} + p^2(1 - p)^N + \frac{1}{2}(c^2 + 2c + 2)(1 - p)^2p^N. \end{aligned} \end{aligned}$$
(31)

For the first term in the sum of the right-hand side, as \(\frac{p+1}{2}\le 1\), we have the following upper bound.

$$\begin{aligned} \frac{c(p + 1)(c + 2p)(1 - p)^N}{2^{N+1}} \le \frac{c(c + 2p)(1 - p)^N}{2^N}. \end{aligned}$$
(32)

For the third term, we have

$$\begin{aligned} \frac{1}{2}(c^2 + 2c + 2) \le \frac{1}{2}\cdot 5 = \frac{5}{2} \end{aligned}$$
(33)

because \(c \in (0,1)\) and the quadratic \(c^2+2c+2\) is increasing on this interval, so that it stays below its value of 5 at \(c=1\). In addition, as \(\left( \frac{p}{1-p}\right) ^{N-2} \le (2p)^{N-2} \le \frac{2}{5}\) for all \(N\ge 3\) if \(p\le \frac{1}{5}\), we have

$$\begin{aligned} \tfrac{1}{2}(c^2 + 2c + 2)(1 - p)^2p^N = \tfrac{1}{2}(c^2 + 2c + 2)\, p^2 \frac{p^{N-2}}{(1-p)^{N-2}} (1-p)^{N} \le \frac{5}{2}\cdot \frac{2}{5}\, p^2(1-p)^N = p^2(1-p)^N. \end{aligned}$$
(34)

Combining the bounds on the three terms establishes the claimed inequality.

\(\square \)

Lemma 6

For \(N \ge 14\),

$$\begin{aligned} (32N^2 + 8N)2^{-N}+1 - \frac{4N^2 - N - 7}{2(N + 6)(N - 1)} < 0. \end{aligned}$$
(35)

Proof

For \(N > 1\), the denominator \(2(N + 6)(N - 1)\) is positive. Multiplying all terms in Eq. (35) by this denominator and simplifying leads to

$$\begin{aligned} \begin{aligned}&2(N + 6)(N - 1)((32N^2 + 8N)2^{-N}+1) - (4N^2 - N - 7) \\ =&16N(N - 1)(N + 6)(4N + 1)2^{-N} - 2N^2 + 11N - 5. \end{aligned} \end{aligned}$$
(36)

The logarithm of the expression \(16N(N - 1)(N + 6)(4N + 1)2^{-N}\) is concave for \(N>1\) and so we can upper-bound that log-expression by a linear function. In particular, based on the first Taylor polynomial (or tangent) at \(N=14\), we may write:

$$\begin{aligned} \begin{aligned}&\log (16N(N - 1)(N + 6)(4N + 1)2^{-N}) \\ \le&\log \left( \frac{25935}{128}\right) + \left( \frac{27857}{103740} - \log (2)\right) (N - 14). \end{aligned} \end{aligned}$$
(37)

As the coefficient \(\frac{27857}{103740} - \log (2)\) of the linear term in the latter part of the inequality is negative, we can fill in \(N=14\) to obtain a value that upper-bounds the expression in Eq. (37) for all \(N \ge 14\). The exponent of this value, which equals \(\frac{25935}{128}\), we can then use to upper-bound Eq. (36) for \(N \ge 14\):

$$\begin{aligned} \begin{aligned} 2(N + 6)(N - 1)((32N^2 + 8N)2^{-N}+1) - (4N^2 - N - 7)&\\ = 16N(N - 1)(N + 6)(4N + 1)2^{-N} - 2N^2 + 11N - 5&\\ \le \frac{25935}{128} - 2N^2 + 11N - 5&. \end{aligned} \end{aligned}$$
(38)

This last quadratic upper bound is concave and takes on its maximum value at \(N=\frac{11}{4}\), which is smaller than 14. Therefore, the value of the upper bound in Eq. (38) at \(N=14\) provides an upper bound for \(\frac{25935}{128} - 2N^2 + 11N - 5\) for all \(N \ge 14\). Its value is \(\frac{25935}{128} - 2\cdot 14^2 + 11\cdot 14 - 5 = -\frac{5169}{128} < 0\) and, as \(2(N + 6)(N - 1)\) is positive in that case, we also have that the left-hand side of Eq. (35) is strictly smaller than 0 for all \(N \ge 14\). \(\square\)
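The inequality of Eq. (35) is also easy to check numerically; the snippet below (a sanity check, not a substitute for the proof) evaluates the left-hand side for a range of N.

```python
def lemma6_lhs(N):
    """Left-hand side of Eq. (35)."""
    return (32 * N**2 + 8 * N) * 2.0**(-N) + 1 \
        - (4 * N**2 - N - 7) / (2 * (N + 6) * (N - 1))

print(lemma6_lhs(14))  # roughly -0.078
assert all(lemma6_lhs(N) < 0 for N in range(14, 1001))
```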

4.3 Finalizing the proof

Proof of Theorem 2

To start with, that \(A_2\) is not weakly monotonic for any \(n \in \mathop {\mathrm {\mathbb {N}}}\limits\) follows readily from the first part of the theorem. If for every integer \(N \ge 14\), there exists a distribution D such that \(\Delta _N^{N+1} < 0\) then there is always a distribution and an \(N \ge n\) for which \(\Delta _N^{N+1} < 0\), which means \(A_2\) cannot be weakly \((\mathop {\mathrm {\mathbb {R}}}\limits ^d,\ell _\textrm{WGS},n)\)-monotonic. The remainder of the proof therefore focuses on demonstrating the first claim from the theorem.

Without loss of generality, we can limit our attention to one-dimensional \(\mathop {\mathrm {\mathbb {R}}}\limits\), as we can always embed a problem from that space in \(\mathbb{R}^{d}\). As such, nonmonotonicity of \(A_2\) in \(\mathop {\mathrm {\mathbb {R}}}\limits\) carries over to \(\mathbb{R}^{d}\). Let us therefore merely consider the one-dimensional problem from Definition 6. Moreover, take \(N \ge 14\) and take \(c = \frac{2}{N}\), in which case \(c^2\) is strictly smaller than \(\frac{4(N-1)}{N^2}\). Additionally, take \(p = \frac{1}{4 N^2}\), which is strictly smaller than \(\frac{1}{N+7}\) for \(N \ge 14\). These choices make sure that all six lemmas hold and, in addition, determine a specific three-point distribution D for every N. We now show that the learning curve increases on this D when going from N to \(N+1\) training samples.

Following Lemma 2 and Lemma 3, we can write

$$\begin{aligned} \begin{aligned} E(N)&= p^N R(N,0,0) + \left( \frac{c(c+2p)}{2^N}+p\right) (1-p)^N \\ {}&+ \sum _{i=1}^{N-1}\left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)} . \end{aligned} \end{aligned}$$
(39)

Subsequently, let us consider the difference \(\Delta _N^{N+1} = E(N)-E(N+1)\), where both expectations are taken with respect to the same underlying distribution D. This difference needs to be smaller than 0 to show that \(A_2\) is not monotonic on the problem defined by D when going from N to \(N+1\).

Using Lemmas 4 and 5, we find that

$$\begin{aligned} \begin{aligned} {\Delta _N^{N+1}} \le&\left( \frac{c(c+2p)}{2^N}+2p^2\right) (1-p)^N \\ +&\sum _{i=1}^{N-1} \left( 1- \frac{N+7}{N+6}(1-p)\right) \left( {\begin{array}{c}N\\ i\end{array}}\right) \frac{c^2(N-i+1)(1-p)^{N-i+1}p^i}{4(N-i)} \\ \le&\left( \frac{c(c+2p)}{2^N}+2p^2\right) (1-p)^N + \left( 1- \frac{N+7}{N+6}(1-p)\right) \frac{c^2 N^2 (1-p)^{N}p}{4(N-1)}, \end{aligned} \end{aligned}$$
(40)

where the last inequality holds because all terms in the summation are smaller than 0, so removing those for \(i \in \{2,\ldots ,N-1\}\) only increases the value.

In our next step, we fill in our choices for c and p in the above inequality and simplify the expression. This gives us the following bound on the change in expected risk on D:

$$\begin{aligned} \begin{aligned} {\Delta _N^{N+1}} \le&\left( \frac{\frac{2}{N}\left( \frac{2}{N}+2\frac{1}{4 N^2}\right) }{2^N}+2\left( \frac{1}{4 N^2}\right) ^2\right) \left( 1-\frac{1}{4 N^2}\right) ^N \\ +&\left( 1- \frac{N+7}{N+6}\left( 1-\frac{1}{4 N^2}\right) \right) \frac{\left( \frac{2}{N}\right) ^2 N^2 \left( 1-\frac{1}{4 N^2}\right) ^{N}\frac{1}{4 N^2}}{4(N-1)} \\ =&\frac{1}{8N^4}\left( (32N^2 + 8N)2^{-N}+1 - \frac{4N^2 - N - 7}{2(N + 6)(N - 1)}\right) \left( 1-\frac{1}{4 N^2}\right) ^N . \end{aligned} \end{aligned}$$
(41)

In this expression, both \(\frac{1}{8N^4}\) and \(\left( 1-\frac{1}{4 N^2}\right) ^N\) are positive for all \(N \ge 14\). The middle factor is strictly negative according to Lemma 6 and, therefore, the right-hand side of Eq. (41) is strictly negative as well. In particular, we have shown that for every \(N \ge 14\) there is a problem distribution D such that \({\Delta _N^{N+1}} < 0\), which was to be demonstrated. \(\square\)

5 Discussion and conclusion

Having shown that 1-means is monotonic, while 2-means is not even weakly monotonic, the obvious question that comes up is what we can say about \(k>2\). Our current conviction is that we can probably design problematic cases, similar to the one for 2-means, for \(k>2\) as well. We have, however, not been able to do so up until now. A first step could be to show empirically that, say, for 3-means there are distributions at all for which nonmonotonicity occurs. Only then may it be sensible to look for a proof showing that it is not weakly monotonic. Incidentally, note that for 3-means we need at least a four-point problem, which may point to an even more involved proof for this case.

The current proof seems to strongly hinge on the discreteness of the chosen three-point distributions from Definition 6. We conjecture, however, that a similar proof can be constructed on the basis of a class of continuous distributions, rather than discrete ones. Our current idea is that every one of the three discrete locations A, B, and C can probably be replaced by a narrow enough uniform distribution around that point such that the crucial steps in our proof still go through. We do admit, however, that it may be rather nontrivial to precisely reformulate all arguments. Nonetheless, we do believe that the distribution’s discreteness is not essential.

Two further research directions that, we think, are of interest are, firstly, whether k-means can be turned into a monotonic learner and, secondly, what we can say about the learning curve behavior of the closely related Gaussian mixture models (Welling & Kurihara, 2006; McLachlan et al., 2019; Lücke & Forster, 2019). The latter are closely linked to k-means, in particular when the covariance matrices of the mixture components are assumed to be the identity and the mixture priors are all equal. What we wonder especially is whether moving from the within-group squared loss to the (negative) log-likelihood and/or going from hard to soft assignments of points to clusters provides any benefits when it comes to monotonic behavior.

As for the former research question, there are some wrapper techniques to turn learners monotonic in expectation. Viering et al. (2020) and Bousquet et al. (2022) rely on the 0-1 loss specifically and cannot be applied directly to our setting. On the other hand, Mhammedi (2021)’s wrapper only assumes the loss to be bounded, which is readily fulfilled for k-means in case the support of the distribution is bounded as well. Unfortunately, the proof in (Mhammedi, 2021) turned out to be defective and the original result was updated in a correction (Mhammedi, 2022, Footnote 6). It now only ensures monotonicity up to an additive term with an \(N^{-1}\) rate. Still, assuming bounded distributional support, Mhammedi (2021) at least gives us some possibility to control the monotonicity of k-means.

In the context of potential wrappers, what is important to note, however, is that these algorithms essentially change the base algorithm. Are we still dealing with k-means or with a different learner altogether? More directly related to the idealized version that we consider are the k-means algorithms in practical use (Sect. 2.2 provides some pointers). From a monotonicity point of view, the fact that these typically do not provide a globally optimal clustering could result in a smoother, maybe even monotonic learning curve. In other words, the regularizing effect of the suboptimality of practical k-means optimization could actually promote monotonicity. Regularization can, however, both fix and create nonmonotonicity (Loog et al., 2019; Nakkiran et al., 2020; Viering & Loog, 2022) and it could be interesting to see how differently k-means++ behaves compared to other, less optimal algorithms.

All in all, our findings add clustering to the list of learners that can behave nonmonotonically, next to some classifiers, regression techniques, and density estimators (cf. Loog et al., 2019). From a theoretical point of view, this is crucial as it provides us, for instance, with a deeper understanding of how learning curves can behave at all. The practitioner may object that “this does not happen on real-world problems.” Though we are, in principle, willing to believe this, what proof do we really have? If anything, a recent study by Mohr et al. (2022), on a large number of data sets in combination with various classifiers, showed that nonmonotonic learning curve behavior does occur. At the very least, any practitioner should be aware that nonmonotonicity can happen.