Online Percentile Estimation (OPE)

We describe a control loop over a sequential online algorithm. The control loop either computes the temporal percentiles of any univariate input data sequence or uses the sequential algorithm to estimate them. It transforms an input sequence of numbers into an output sequence of histograms. This is performed without storing or sorting all of the observations. The algorithm continuously tests whether the input data are stationary, and reacts to events that do not appear to be stationary. It also indicates how to interpret the histograms, since their information content varies according to whether a histogram was computed or estimated. It works with a parameter-defined, fixed-size small memory and limited CPU power. It can be used for a statistical reduction of a numerical data stream and can, therefore, be applied to various Internet of Things applications, edge analytics or fog computing. The algorithm has a built-in feasibility metric called applicability. The applicability metric indicates whether the use of the algorithm is justified for a data source: the algorithm works for an arbitrary univariate numerical input, but it is statistically feasible only under some requirements, which are explicitly stated here. We also show the results of a performance study, which was executed on the algorithm with synthetic data.


Introduction
The Internet of Things (IoT) concept reflects the tens of billions of connected devices expected on the Internet in the near future. Many such devices produce measurement or monitoring data periodically or even continuously. The growth of data will be huge. This offers engineering challenges and business opportunities for the future. The opportunities, however, first require solutions to the challenges, especially the management of the increased data volume caused by the number of networked sensors and measurement agents. Often, at least a portion of the data needs to be stored for later use while, simultaneously, supporting real-time situation awareness is equally important. The transmission of this big data from the network edge to cloud servers for processing consumes bandwidth and adds latency [2]. Instead of performing all the computations in a cloud server, some computations can be done already in the proximity of the origins of the data. However, memory, CPU power and communication capacity can be limited where data is measured and/or collected. Occasionally, the observed historical data needs to be recovered with sufficient detail.
(Authors: Jorma Kilpi, jorma.kilpi@vtt.fi; Timo Kyntäjä, timo.kyntaja@vtt.fi; Tomi Räty, tomi.raty@vtt.fi; VTT Technical Research Centre of Finland, Espoo, Finland.)
When a statistical event appears to be predictable, its information content can be assumed to be small and, therefore, it does not need many bits of information to be stored. A stationary state is such a condition in which the amount of stored data bytes per event can be reduced. Very rare events occur also in a stationary state. However, reacting to all rare events is compulsory, since it is not possible to know beforehand whether an event is significant or not. Afterwards, it may be possible to judge whether the rare event was significant or not, provided that the stored data support the judgement. Responding to a rare event is possible if the latest history of events is available. A rare event is construed as rare in relation to the history of occurred events within a designated period of time. We exploit a probability distribution to identify the rarity of the events, because the percentiles of the distribution provide several fast ways to distinguish a rare event. The process of specifying the probability distribution is informative in itself as it presents indications of both stationary and non-stationary events. The utilization of probability distributions also facilitates the comparison of current events to previously occurred events simply by comparing the different distributions.
In this paper, we provide the details of an algorithm which attempts to perform the heuristic described above. It continuously tests whether the input data appear stationary and reacts to events that do not appear to be stationary. The algorithm transforms an input sequence of numbers into an output sequence of histograms. It also indicates how to interpret the histograms, since their information content varies according to whether a histogram was computed by sorting a small amount of input data or estimated by assuming stationary input data.
The fields of smart grids and rural industrial systems serve as examples in which the proposed method can be applied. These systems produce massive amounts of data to be analysed, but sending it onward is not always cost-effective. An example of this is industrial big data related to predictive maintenance of systems [22]. These sensing processes (e.g. smart grid cable condition metering, machine and/or tool vibrations) produce large amounts of data to be analysed to predict possible system faults. Our method can produce both a reduced amount of data and an indication of the changes within the measurements. Furthermore, this method can be integrated in 5G communication systems, in which Mobile Edge Computing (MEC) can be used as a service [1,12]. This paper is structured as follows. In Section 2, we provide background to our solution, relate it to existing, similar research and further motivate our approach. In Section 3, we present our design assumptions, which are simultaneously requirements for our algorithm. Section 4 gives a top-level view of the algorithm and Section 5 its details. In Section 6, we describe the output of the algorithm. Sections 7 and 8 contain the statistical details. In Section 9, we present the results of selected performance tests and, finally, the conclusions are drawn in Section 10.

Background and Related Work
Our control loop utilizes a sequential algorithm called the extended P²-algorithm. We will also discuss other alternatives later in this section. The P²-algorithm was first developed by Jain and Chlamtac [10] for the sequential estimation of a single percentile of a distribution without storing the observations, then extended by Raatikainen in [15] for the simultaneous estimation of several quantiles (the words percentile, quantile and fractile are synonyms), and further analyzed by Raatikainen in [17]. Raatikainen used the extended P²-algorithm as a part of his PhD thesis [16]. The publication [15] of Raatikainen contains remarkably short pseudocode of the algorithm in such detail that its implementation is straightforward. Recently, the P²-algorithm was included in the comparison analysis of Tiwari and Pandey in [21].
The extended P²-algorithm is a very fast algorithm since it does not sort the input. However, there exist several computationally simpler algorithms, which we discuss shortly. The P²-algorithm approximates the inverse of the empirical cumulative distribution function (CDF) by utilizing a piecewise parabola to adjust the estimates of the percentiles. If the parabolic adjustment cannot be applied, then a linear adjustment is used instead.
The extended P²-algorithm has some drawbacks:
1. It is suitable only for strictly stationary data sources, that is, for sources whose distribution really has such percentiles that do not depend on the observation time [4,5,14]. If the data source is not strictly stationary, then the estimates need not have any meaningful statistical interpretation [21]. This reduces the applicability of the extended P²-algorithm.
2. The estimates of the percentiles near the median are more accurate, but the tail percentiles are less accurate. However, this is not a problem of the extended P²-algorithm alone, as the estimation of the extreme tail probabilities is in general very hard due to a lack of observations [9]. Our solution is simply to avoid the use of the extended P²-algorithm in the estimation of the extreme tail percentiles.
3. The extended P²-algorithm requires initialization data. As in [17], good statistical representativeness of the initialization values is required. This may be hard to achieve in applications. It is worthwhile to use some computational resources to achieve better representativeness.
4. The initialization values must also be unequal, distinct values. This may cause problems in applications in which this cannot be guaranteed. Placing effort in the previous item (3) also helps with this issue.
At first sight, these restrictions may sound hard for useful applications. However, we will show that the area of application can be significantly extended to cover some (not all) of the data sources which are not strictly stationary. The data source is allowed to change from one stationary state to another, and the number of stationary states is not restricted. Statistical background knowledge includes the research of Heidelberger and Lewis [9], which examines quantile estimation for dependent data and presents many practical methods, such as the Maximum Transformation and the Nested Group Quantiles. The work of Feldman and Tucker [7] gives insight into what can happen when we try to estimate a non-unique quantile, and Zieliński [24] shows that estimation problems can occur even if the quantile is unique.
Alternatives to the extended P²-algorithm are the various recursive algorithms based on the stochastic approximation method of [18]. These algorithms include the Stochastic Approximation (SA) of Tierney [20], the Exponentially Weighted Stochastic Approximation (EWSA) of Chen et al. [6], the Log Odds Ratio Algorithm (LORA) of Bakshi and Hoeflin [3] and the recent Dynamic Quantile Tracking using Range Estimation (DQTRE) of Tiwari and Pandey [21]. Yazidi and Hammer [8,23] present recursive approaches based on stochastic learning. A common property of all the estimation algorithms is the need to obtain good initialization data.
The recursive algorithms EWSA, LORA and DQTRE adapt, to some extent, to some specific types of non-stationary data streams. However, the data source can be non-stationary in a myriad of different ways. If the distribution of the data depends on the time of the observation, this dependency can be smooth or non-smooth. For example, EWSA was shown to adapt to a linear trend in [6], but in [21] DQTRE performed better in the simulated case of a sudden, non-smooth change of the distribution. The estimation of time-dependent quantiles is conceptually difficult since, if the quantiles depend on time, then their estimates also depend on time. We are not aware of any general theory on the estimation of time-dependent quantiles.
Our contribution is to provide a control loop around the extended version of the P²-algorithm. The controlled extended P²-algorithm will be referred to as Online Percentile Estimation (OPE), and it consists of two separate parts: the extended P²-algorithm of [15] and our control loop. Our intention is to keep the control loop as computationally simple as possible. We emphasize the separation into two parts, because it is possible to replace the P²-algorithm with any of the recursive algorithms discussed above. For example, the authors of DQTRE [21] sketch the estimation of several quantiles simultaneously. The extended P²-algorithm has a built-in property that keeps the estimates of the quantiles strictly monotone, while DQTRE temporarily allows non-monotone quantile estimates.
The purpose of the control loop is to detect temporal behavior which is not strictly stationary and to avoid the use of the extended P²-algorithm in these situations. In cases which are not strictly stationary, the sample percentiles, obtained by sorting a relatively small sample of data, have the most reliable interpretation. For example, the sample median, denoted in this paper as x_(⌈0.5n⌉) in which n is the sample size, always has the interpretation that 50% of the sample values have been smaller than x_(⌈0.5n⌉) and 50% have been larger. This holds true due to the sorting of the sample, whether or not the data source was (strictly) stationary when the sample values were observed. In the non-stationary case, the data sample also represents the specific interval of time in which it was observed, and additional information is required for its correct statistical interpretation.
The strictly stationary case is just the opposite: by definition, the data source then has a time-invariant distribution. In that case the uncertainty which results from estimating, rather than computing, the percentiles can be tolerated. In this paper, the term source percentile indicates a percentile of the distribution of the (endless) data source. The temporal existence of such source percentiles can be assumed whenever there is evidence that the assumption of the strictly stationary case holds.
The authors of the P 2 -algorithm [10] considered a hardware implementation (quantile chip) of their algorithm. Tiwari and Pandey [21] suggested a similar, for example FPGA or ASIC, implementation of their DQTRE algorithm. Our viewpoint is that all online quantile estimation algorithms benefit from a simple control loop which attempts to verify that the assumptions of the algorithms are not violated too heavily.

Assumptions and Requirements for OPE
For further reference, we present the assumptions that were made during the design process of OPE. These assumptions are also requirements for applying the algorithm.
1. It is assumed that there is random variability in the data.
2. It is assumed that the data source can be in some steady state for periods of time long enough that the stationary model can be applied. The number of different transient steady states is unlimited. Momentary periods of behavior which is not strictly stationary are allowed but, if the data source cannot ever reach any steady state in which the P²-algorithm can be deployed, then one must try to replace the P²-algorithm part of OPE by one of the mentioned recursive algorithms which adapt to non-stationary data. A control loop similar to that of OPE could then be defined to ensure that the changes in the data distribution are such that the adaptation makes sense; for example, the percentile estimates must remain monotone.
3. OPE performs similarly to a low-pass filter. It is assumed that the interesting information is not in the small variations of the timescale. Exceptionally small or large values will be visible from the output of OPE, but their exact occurrence in time may be lost unless a time-stamping feature is added to the algorithm. The handling of time stamps can be added but, for brevity, this issue is not discussed here. Also, trends can be observed by comparing the current histogram to the previous histogram(s), but a trend within a single histogram is not necessarily observed.
The design principle has been that in all circumstances the algorithm must perform rapidly and never produce unreliable data, even if it is applied in a situation in which its applicability is not the best possible.

The Control Loop
The control is obtained by applying specific statistical counter models and periodic testing of the estimates of the extended P²-algorithm against the model percentiles that are computed at the initialization phase. The model consists of
1. the model values of the specified source percentiles,
2. three counters and
3. two alarm mechanisms to test each observed value against the model.
The alarm mechanisms include four threshold values and two more counters.
The model will be described in Section 5.1.
The model is discarded immediately if the correspondence with the data source is deemed insufficient. When the model is discarded, a new model is built from the new observations. The alarm mechanisms may invalidate the model even during the ongoing observations. The flow chart of Fig. 1 provides an overview of the control loop.
If the model remains valid, it is updated periodically. The updating is done only if the observed, new data fits the model. In this case we can assume that the new data contains information which can improve the existing model.

Phases of the Control Loop
The control loop consists of configuration, initialization, model building and valid model phases. We will describe the idea of these phases next.

Configuration
In the configuration phase, certain statistical and computational parameters are set. These values are never changed during the execution of OPE. The main configuration parameter is m, which is the number of percentiles that will be estimated. Section 5.3 will present more details on the configuration parameters. The data structure of the extended P 2 -algorithm requires 2m+3 initial values. Table 2 contains a quick reference to the numeric values which depend on m.

Initialization
In the initialization phase, we need a buffer of size M_b ≥ 2m + 3 observations and an in-place sorting algorithm to sort the buffer. The intention is to obtain 2m + 3 unequal and statistically representative initial values for the extended P²-algorithm. The sample percentiles are selected from the buffered M_b values after sorting. The sample percentiles will also be used as the initialization values of the model for the source percentiles. Figure 2 is a sketch of the initialization process.
The word "marker" in Fig. 2 refers to the data structure of the extended P²-algorithm described in [15]. Both the marker selection and the model selection are made from the sorted buffer; hence, the initial values of the model are included in the initial values of the marker. The notation ⌈·⌉ in Fig. 2 refers to the ceiling function, and x_(i) is the i:th order statistic, that is, the i:th smallest value.
The q:th sample percentile is just the ⌈qn⌉:th order statistic x_(⌈qn⌉), in which n ≥ 1 is the sample size and 0 < q < 1. The buffer, the marker and the model are each separate data structures. Each of these structures requires a memory allocation. The amount of memory that is needed depends on m alone, hence these memory allocations can be done in the configuration phase, as they are not changed during the execution.
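As a concrete illustration of this definition, the following sketch computes the q:th sample percentile by sorting a small sample (the function name and interface are ours, for illustration only; they are not part of the paper's pseudocode):

```python
import math

def sample_percentile(values, q):
    """Return the q:th sample percentile x_(ceil(q*n)),
    i.e. the ceil(q*n):th smallest value, for 0 < q < 1."""
    n = len(values)
    assert n >= 1 and 0.0 < q < 1.0
    ordered = sorted(values)               # order statistics x_(1) <= ... <= x_(n)
    return ordered[math.ceil(q * n) - 1]   # 1-based order statistic -> 0-based index

# Example: the sample median of 1..10 is x_(5) = 5
print(sample_percentile(list(range(1, 11)), 0.5))  # 5
```

Note that, unlike the estimates of the extended P²-algorithm, this value always has the exact interpretation discussed above, regardless of stationarity.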
In this publication, the numbers q_j, j = 1, . . . , m are called selected probabilities. They always satisfy (2) and are thus equally spaced. Equal spacing is not necessary, but it is difficult to justify any unequal spacing without making prior assumptions about the data source. Also, the spacings are organized so that the quartiles are always included among them. The inclusion of the quartiles is not necessary either, but it simplifies the presentation and the programming of the algorithm considerably. The consequence of these agreements is that only the values m = 3, 7, 15, 31 are considered, which may sound restrictive, but should turn out to be very natural. Splitting the intervals determined by the selected probabilities into halves is the most efficient approach to obtain qualitatively different granularities. Other choices for m or q_j do not convey any essential additional value.
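Under these agreements, and assuming the equal spacing takes the natural form q_j = j/(m + 1) (our reading; Eq. 2 itself is not reproduced in this excerpt), the selected probabilities can be generated as follows:

```python
def selected_probabilities(m):
    """Equally spaced selected probabilities q_1 < ... < q_m.
    Assumed form q_j = j/(m+1); for m in {3, 7, 15, 31} the quartiles
    0.25, 0.5, 0.75 are then always among the q_j, and each larger m
    halves the spacing of the previous one."""
    assert m in (3, 7, 15, 31)
    return [j / (m + 1) for j in range(1, m + 1)]

print(selected_probabilities(3))  # [0.25, 0.5, 0.75]
```

For m = 7 this gives multiples of 1/8, for m = 15 multiples of 1/16, and so on, which is exactly the interval-halving behavior described above.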
Sorting the buffer, which is an array of floating point numbers, is computationally the most challenging part of the algorithm, given the assumption that computational resources are limited, especially in online computation, in which the sorting will be regularly repeated. When M_b > 2m + 3 is sufficiently large, the statistical representativeness of the initialization data usually improves and the chance of obtaining unequal values also increases. Since sorting is computationally hard and memory usage needs to be minimized, the value of M_b cannot be too large. Also, the values x_{M_b+1}, x_{M_b+2}, . . . can arrive during the sorting, which means that they need to be buffered somewhere else. We assume that the input queue is a first-in-first-out (FIFO) queue and can store a few values. This storage size and the arrival intensity of the observations also affect the size of M_b. Therefore, M_b is a parameter whose value is chosen according to the given computational resources.
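A minimal sketch of the initialization step, under the assumption (ours, for illustration; the exact marker positions are specified in [15] and Fig. 2) that the 2m + 3 initial marker values are taken as equally spaced order statistics of the sorted buffer, with a distinctness check that triggers re-initialization on failure:

```python
import math

def initialize_markers(buffer, m):
    """Sketch of the initialization phase. Assumed detail: marker
    probabilities p_i = i/(2m+2), i = 0..2m+2, placing markers at the
    minimum, the maximum and equally spaced quantiles in between.
    Returns the 2m+3 initial marker values, or None if they are not
    all distinct (in which case initialization must be retried)."""
    Mb = len(buffer)
    assert Mb >= 2 * m + 3
    ordered = sorted(buffer)              # in-place sort in a real implementation
    markers = []
    for i in range(2 * m + 3):
        p = i / (2 * m + 2)               # 0, 1/(2m+2), ..., 1
        k = max(1, math.ceil(p * Mb))     # 1-based order statistic index
        markers.append(ordered[k - 1])
    if len(set(markers)) != len(markers): # ties: initialization fails
        return None
    return markers
```

In OPE the model percentiles are then a subset of these marker values, so a successful marker selection also initializes the model.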
The initialization phase ends successfully only if the 2m + 3 selected initial values are all distinct. Note that the initial values of the model will then also be distinct. However, the status of the model is not valid after initialization. The status may change while building the model.

Model Building
The phase of model building is divided into two subphases to compromise between two conflicting criteria:
1st subphase: bad initialization data should be recognized and ignored as soon as possible,
2nd subphase: while checking the correspondence of the model with the data source requires numerous observations.
In the 1st subphase, we test that the model median behaves like the source median. Before each new value is given as input to the extended P²-algorithm, its relationship with the model median is recorded. After N_1 observations, the bin counts are tested. The test is described in Section 7.1.
In the 2nd subphase, we test that the model quartiles behave like the source quartiles. Approximately 25% of the observed new values should fall in each of the four bins determined by the model quartiles. Before each new value is given as input to the extended P²-algorithm, this bin information is recorded. After N_2 observed values, the bin counts are tested. The test is described in Section 7.2.
If either of these two tests fail, the control goes back to the initialization and a new attempt is performed. Otherwise the model status is stated as valid.
We use the phrase model building instead of model validation, since the model is updated (changed) after the successful termination of the 2nd subphase. We then trust that the initialization phase was successful and that the marker values contain representative information about the source percentiles. The details of the update are described in Section 5.2.

Valid Model
In the model building phase, the model is already tested twice and, in successful cases, it is updated once. After this, the validity of the model is tested periodically after every n values, in which n, the period length, is a variable. There is a lower bound for a meaningful period length which depends on m: n ≥ n_min(m), but there is no upper bound. See Table 2 for the values of n_min(m) and Section 8.1 on how these values are computed.
We use the notation x*_{2j} for the current values of the marker variables to distinguish them from the current values of the model variables x^M_{q_j}. The test is twofold: 1) the model quartiles are tested to verify that they still behave like the source quartiles, and 2) the model percentiles x^M_{q_j} and the current estimates of the same percentiles, which are the marker values x*_{2j}, are tested to verify that they are not too far from each other. The first test is called the bin frequency test and the latter is called the bin boundary test. These tests are explained in more detail in Sections 7 and 8.
Upon the termination of the model building phase, the value of n is first set to n = n min (m). After each successful test, the size of n is incremented by a step: n ← n + s. As additional tests are accepted by the model, we have more evidence on the strict stationarity and more confidence in the extended P 2 -algorithm. After each successful test the model is updated.
There are also two alarm mechanisms that may invalidate the model during the period. A single value above or below either of the absolute alarm thresholds may invalidate the model. Too many values above or below either of the adaptive alarm thresholds may invalidate the model.

The Model
The model of the control loop consists of the m model percentiles which, for simplicity, are assumed to include the model quartiles x^M_0.25, x^M_0.5, x^M_0.75. In addition to these, the model contains three periodwise bin counters n_1, n_2, n_3 ∈ {0, . . . , n}, which depend on the period length n, two absolute alarm thresholds A^−_abs and A^+_abs, two adaptive alarm thresholds A^−_ada and A^+_ada, and two periodwise counters for the adaptive thresholds: n^−_ada and n^+_ada. The negative sign (−) refers to the lower tail of the empirical distribution and the positive sign (+) to the upper tail. All the counters related to the period are naturally initialized to zero in the beginning of the period. The natural choices for the thresholds of the adaptive alarm are the extreme model percentiles x^M_{q_1} and x^M_{q_m} (8) but, for example, the marker values can also be selected (9). The period length n is always set and known before the next period begins. The expected values of the counters are also known beforehand.
The purpose of the absolute alarm thresholds is that, given the model, the events x < A^−_abs and x > A^+_abs are extremely rare. Since this may require some unavailable prior knowledge, it is always possible to switch off the absolute alarm mechanism by setting A^−_abs = −∞ and A^+_abs = ∞. The thresholds of the absolute alarms may also be defined in such a way that they depend only on Eq. 5. We define them as A^−_abs = x^M_{q_1} − K^−(x^M_{q_m} − x^M_{q_1}) and A^+_abs = x^M_{q_m} + K^+(x^M_{q_m} − x^M_{q_1}), in which K^− and K^+ are configurable parameters. Here the distance x^M_{q_m} − x^M_{q_1} serves as a yardstick, a concept similar to the "sigma" (σ) of the normal distribution N(μ, σ²). The choice of the parameters K^− and K^+ requires either prior knowledge or an estimate of the tail behavior of the data source.
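A sketch of these yardstick-based absolute thresholds, assuming they are offsets of K^± times the distance between the extreme model percentiles (our reading of the definitions above; variable names are illustrative):

```python
def absolute_alarm_thresholds(model_percentiles, k_minus=2.0, k_plus=2.0):
    """Absolute alarm thresholds built from the sorted model
    percentiles x^M_{q_1} <= ... <= x^M_{q_m}, using the body-bin
    width x^M_{q_m} - x^M_{q_1} as a yardstick. K- = K+ = 2 are
    the defaults suggested in the text."""
    lo, hi = model_percentiles[0], model_percentiles[-1]
    yardstick = hi - lo
    return lo - k_minus * yardstick, hi + k_plus * yardstick

# Switching the absolute alarm off corresponds to (-inf, +inf) instead.
print(absolute_alarm_thresholds([10.0, 20.0, 30.0]))  # (-30.0, 70.0)
```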
The interval [x^M_{q_1}, x^M_{q_m}] is called the body bin of the model distribution. If the model is valid, then it can be assumed that this interval contains the majority of the source distribution mass. Its complement consists of two bins which are called the tail bins.

The Updating of the Model
After successful test(s), the model percentiles are updated by the rule x^M_{q_j} ← (1 − α) x^M_{q_j} + α x*_{2j} (13), where 0 ≤ α ≤ 1 is a configurable parameter. The boundary value α = 0 means that the initialization values are assumed perfect and the model is never changed; in practice, it means we are not confident that the model could ever be improved by the marker values. The boundary value α = 1 means that the model is updated according to the marker values only. This would be plausible if we could know or trust that the marker values converge to the true percentiles. It can be expected that good choices of α will take into account the size of M_b or the ratio M_b/(2m + 3). This is studied in Section 9.2. The parameter α is called a learning parameter, as it indicates the confidence assigned to the marker values when a data source is fed as input.
In the phase of model building, Eq. 13 is the only action. In the phase of valid model, the adaptive and absolute alarm threshold values must also be updated, if they depend on the model percentiles as suggested in Eqs. 8, 11 and 12. Table 1 contains the main configurable parameters. Changing their values affects the output of OPE. In Table 2, we have collected a quick reference to the numerical values that depend on m.
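The update can be sketched as a blend of the current model percentiles toward the current marker estimates of the same percentiles (assumed form of Eq. 13; a sketch, not the paper's pseudocode):

```python
def update_model(model, markers, alpha):
    """Blend the model percentiles toward the marker estimates:
    x_model <- (1 - alpha) * x_model + alpha * x_marker.
    alpha = 0 keeps the model fixed; alpha = 1 trusts the markers only."""
    assert 0.0 <= alpha <= 1.0
    return [(1.0 - alpha) * xm + alpha * xs for xm, xs in zip(model, markers)]

print(update_model([10.0, 20.0, 30.0], [12.0, 22.0, 28.0], 0.5))  # [11.0, 21.0, 29.0]
```

With this form, intermediate values of α act like the smoothing constant of an exponentially weighted moving average applied percentile-wise.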

Configuration Parameters
The message of n_min(m) in Table 2 is that the simultaneous estimation of several percentiles from the same data requires a lot of data. The default values for N_1 and N_2 in Table 1 are obtained by first setting N_1 + N_2 = n_min(7) = 107; see the last column of Table 2. For K^+ and K^−, values like 1, 2, 3, 4 or 5 can be considered, analogously to "one sigma", "two sigmas", and so on, and the defaults are chosen as K^+ = K^− = 2. The test level β affects the rate of false positives of the bin boundary test that is explained in Section 8. In addition, we suggest that the increment step could be s = s(m) = 2m + 3.
The data structure of the marker of the extended P²-algorithm requires memory of the size of 4 × (2m + 3) values. The main configurable parameters of Table 1 and their defaults are:
M_b: the internal buffer size
α: the learning parameter
s: the period length increment (default 2m + 3)
N_1: the period length in the 1st subphase (default 36)
N_2: the period length in the 2nd subphase (default 71)
K^−: lower tail alarm yardstick multiplier (default 2)
K^+: upper tail alarm yardstick multiplier (default 2)
β: the test level of the bin boundary test
There is plenty of room to add metadata without affecting much of the data reduction.
After the initialization, the structure of the marker contains information about the minimum and the maximum observed values, denoted in Eq. 2 as x_(1) and x_(M_b). Until the next initialization, the minimum can only become smaller and the maximum can only become larger. Moreover, they are true observed values, not estimates of any kind. It is natural to include them in the output, so at least m + 2 data values are output. We will use the notation x_(1) and x_(n) for the minimum and maximum of the period, respectively. Let us consider, for a moment, what values may be computed from the m + 2 output values (14). In many applications, the mean value is of interest. Assuming that the model cumulative distribution function (CDF) F^M is the piecewise linear CDF determined by the m + 2 parameters of (14), see Section 8.2, the expected value of F^M is x̄^M = Σ_{j=0}^{m} (q_{j+1} − q_j)(x^M_{q_j} + x^M_{q_{j+1}})/2, (15) where q_0 = 0, q_{m+1} = 1, x^M_{q_0} = x_(1) and x^M_{q_{m+1}} = x_(n). In addition to the mean x̄^M, it is easy to compute the variance of the model distribution F^M. The formula of the variance is presented in Section 8.3. Essentially, the uncertainty of x̄^M can also be quantified. Also, the robust and the noisy parts are separated in Eq. 15, which allows one to check whether a change in the mean occurs merely due to noise or not. If the model is valid, then Eq. 15 has a well-justified interpretation as an estimate of the source mean; otherwise, it is still the mean of the observed sample. Therefore, we need to have the information about the validity of the model together with the percentile data (14). If the model is invalidated, then the failure reason is informative metadata.
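A sketch of this mean computation, assuming F^M is piecewise linear through the m + 2 points so that each segment contributes its midpoint weighted by its probability mass (illustrative only; the paper's exact Eq. 15 may group the terms differently):

```python
def model_mean(minimum, model_percentiles, maximum, probabilities):
    """Mean of the piecewise linear CDF through the m+2 points.
    `probabilities` are the selected probabilities q_1 < ... < q_m;
    the mass q_{j+1} - q_j between consecutive knots is assumed
    uniformly spread, so each segment contributes its midpoint."""
    knots = [minimum] + list(model_percentiles) + [maximum]
    probs = [0.0] + list(probabilities) + [1.0]
    return sum((probs[j + 1] - probs[j]) * (knots[j] + knots[j + 1]) / 2.0
               for j in range(len(knots) - 1))

# Example: quartile knots of a distribution resembling U(0, 1)
print(model_mean(0.0, [0.25, 0.5, 0.75], 1.0, [0.25, 0.5, 0.75]))  # 0.5
```

The two boundary segments depend on the observed extremes, while the interior segments depend only on the model percentiles, which is what makes the mean decomposable into noisy and robust parts.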
In Table 3, an output called phase code is introduced. These are the numbers in the flow chart of Fig. 1. The purpose of the phase code is to indicate the exact place in the algorithm which caused the output record (16). This indicates the validity of the model.
After introducing the phase code, the output format can be defined as a record with a generic format (phase code, percentile data, metadata). (16) This means that OPE will transform the input sequence of values into an output sequence of records. The phase code generally helps in interpreting the record correctly and in parsing the metadata. The phase code also provides an applicability criterion; see assumption 2) of Section 3. The phase code 32 denotes the successful end of a period. If the output never contains records with the phase code 32, briefly 32-records, then this algorithm is not applicable. Moreover, the proportion of 32-records is a measure of the applicability of OPE. This metric of applicability is used in Section 9. The phase code is an informative form of metadata. Other metadata which is beneficial to obtain is the period length. In the initialization and model building, the period lengths are M_b, N_1 and N_2, all of which are configuration parameters. In the valid model phase, the period length n is a variable whose final value depends on whether there was an alarm during the period of execution. The information about the period length helps to keep account of the total number of observations handled. In the cases in which the model is invalidated, it is plausible to add the test parameters and the alarm-causing observations to the metadata. Serial numbers for output data integrity control and time stamps, if available, are also natural metadata.
We emphasize that OPE is not meant to be a change detection algorithm. If change detection is important, then a change detection algorithm can be applied to the output of OPE. The control loop is defined in such a way that the statistical tests in it will regularly give false positives. This happens on purpose, as it is possible to check if there has been a significant change in the source distribution from the history of the output data. Reducing the amount of testing will increase the risk of false negatives which is much worse, because it may not be possible to detect them afterwards from the recorded data alone.
Since n is typically considerably larger than m, the data reduction target is fulfilled. What remains to be shown is that the computational requirements are small and that the information loss depends on the parameters. The latter issue is studied in Section 9. The computational complexity results from the regular sorting of the buffer. Other computations that the control loop requires are described in Sections 7 and 8.

Bin Frequency Tests
The definition of the counters n_1, n_2 and n_3 was given in Eq. 6. In Section 2.7 of [19], a standard approach to bin frequencies for i.i.d. data is given. The exact multinomial model of the bin frequency vector (n_1, n_2 − n_1, n_3 − n_2, n − n_3) is approximated by a multivariate normal model. This approach is not computationally fast, so we seek a simpler approach.
The theoretical model behind the bin frequency tests presented below is based on an assumption of independent trials, reducing the use of the multinomial distribution to the binomial distribution by considering conditional counter values and the normal distribution approximation of the binomial distribution.

The Testing in the 1st Subphase
Consider the following simple, classical model for the first subphase of model building. Let X be a random variable modeling the observations and let x^M_0.5 = x_(⌈0.5 M_b⌉) be the median value of the model obtained in the initialization process of Fig. 2. Let p = P(X ≤ x^M_0.5) and let n_2 be as in Eq. 6. If the strictly stationary assumption holds, then the distribution of the X_i does not change and, therefore, p is a constant. If the X_i are independent and n is fixed beforehand, then n_2 is a binomially distributed random variable with parameters n and p, denoted n_2 ∼ Bin(n, p). The expected value of n_2 is np and its variance is np(1 − p). Moreover, p̂ = n_2/n is the maximum likelihood estimator of p. Based on the central limit theorem (see [19]), the normal approximation P(np − a√(np(1 − p)) ≤ n_2 ≤ np + a√(np(1 − p))) ≈ 0.95 (17) can be used with a = z_0.975, where Φ is the CDF of the standard normal distribution and z_0.975 is the 97.5% percentile of Φ. Furthermore, since z_0.975 ≈ 1.96 ≤ 2 and p(1 − p) ≤ 1/4, the 95% confidence interval of p can be written in the simplified form (p̂ − 1/√n, p̂ + 1/√n). We set a requirement that 0.5 = 1/2 belongs to this simplified approximative 95% confidence interval of p: |n_2/n − 1/2| ≤ 1/√n. (18) Recall that the motivation of the 1st subphase is to quickly recognize and ignore bad initialization data. The length of the first subphase period, n = N_1, can be short provided that the normal approximation (17) used above is plausible.
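The first-subphase check can be sketched in a few lines. This is our own minimal illustration, not the PoC implementation; the function name is ours, and it assumes the simplified confidence interval p̂ ± 1/√n described above.

```python
import math

def median_subphase_test(n2, n):
    """First-subphase test sketch: accept the initialization data if 1/2
    lies in the simplified approximate 95% confidence interval
    p_hat +/- 1/sqrt(n) of p = P(X <= model median)."""
    p_hat = n2 / n  # maximum likelihood estimate of p
    return abs(p_hat - 0.5) <= 1.0 / math.sqrt(n)
```

For example, with n = 100 the test accepts exactly when 40 ≤ n_2 ≤ 60, since 1/√100 = 0.1.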
Since we can assume that p ≈ 0.5, this is not a hard requirement. Another criterion comes from the extended P²-algorithm: the markers next to the median affect the accuracy of the estimated median. Therefore, the minimum length of the period (the number of trials) should satisfy N_1 ≥ n_min(3).

Testing of the Quartiles
First notice that the counters n_1, n_2 and n_3 of Eq. 6 are not independent: n_1 ≤ n_2 ≤ n_3. We can start with n_2 and get the same condition as Eq. 18, but then we must consider the conditioned events n_1|n_2 and (n − n_3)|(n − n_2). Luckily, the same method applies to conditional confidence intervals. Given n_2, the conditional distribution of n_1 is Bin(n_2, 1/2), so we set the requirement that 1/2 belongs to the simplified approximative 95% confidence interval of n_1/n_2, (19) which is equivalent with |n_1 − n_2/2| ≤ √n_2, (20) and, similarly, we set the requirement that 1/2 belongs to the corresponding interval of (n − n_3)/(n − n_2), (21) which is equivalent with |(n − n_3) − (n − n_2)/2| ≤ √(n − n_2). (22) Note the use of 1/2 instead of 1/4 in Eqs. 19 and 21, which is due to the conditioning. Together, Eqs. 18, 20 and 22 form the test of the conditional approach started from n_2, which we describe symbolically as n_2 → n_1|n_2 → (n − n_3)|(n − n_2). The notation "|" refers to conditioning and the arrow reads as, for example, "if n_2, then n_1 given n_2" (n_2 → n_1|n_2). There are four more cases that have to be considered; listed symbolically, they include n_3 → n_2|n_3 → n_1|n_2 and n_3 → n_1|n_3 → n_2|(n_3 − n_1). Each of these leads to a different group of three inequality chains. The bin frequency test accepts a triple (n_1, n_2, n_3) if at least one of the five inequality chain triples is satisfied. If all of them fail, then the test rejects the triple. The combined period length n = N_1 + N_2 of both subphases requires that the normal approximation is simultaneously plausible near all of the quartiles, which is one criterion. The extended P²-algorithm requires that the four markers next to the quartiles should also have some accuracy. Therefore, N_1 + N_2 ≥ n_min(7) is adequate.
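The chain starting from n_2 can be sketched as three conditional binomial checks. This is our own illustration with our own function names; it assumes each conditional check takes the same simplified 95% form |count/trials − p| ≤ 1/√trials as the unconditional test, which is a simplification rather than the PoC implementation.

```python
import math

def within(count, trials, p):
    """Simplified approximate 95% CI check: |count/trials - p| <= 1/sqrt(trials),
    using the z_0.975 <= 2 and p(1-p) <= 1/4 simplification."""
    return trials > 0 and abs(count / trials - p) <= 1.0 / math.sqrt(trials)

def chain_from_n2(n1, n2, n3, n):
    """One of the five inequality chains: n2 -> n1|n2 -> (n - n3)|(n - n2).
    The conditional success probability is 1/2 (not 1/4) because of the
    conditioning on the event below/above the model median."""
    return (within(n2, n, 0.5)          # unconditional check of n2
            and within(n1, n2, 0.5)     # n1 given n2
            and within(n - n3, n - n2, 0.5))  # (n - n3) given (n - n2)
```

With n = 1000, the ideal quartile counts (250, 500, 750) pass all three checks, while a strongly skewed lower bin fails.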

Adaptive Alarm Bin Counts
Assume that the alarm bins of Eq. 8 are selected and let r− and r+ denote the adaptive alarm bin counters. Since n is fixed, r− and r+ are not independent. The binomial model is again applicable: the success probability can be assumed to be p = 1/(m + 1) and it can be assumed that n > (m + 1)². In the beginning of a period, we can set approximate 95% bounds on, for example, r+ as in Eq. 23, but the conditioning r−|r+ of Eq. 24 does not work similarly, since the final value of r+ is not known when the period begins. However, if the r+ in Eq. 24 is understood as the current value, not the final value, then it can be implemented. Naturally, the roles of r+ and r− in Eqs. 23 and 24 must also be considered in the opposite way, which gives Eqs. 25 and 26. Finally, the lower bounds in Eqs. 24 and 26 can be dropped, since the bin frequency test at the end of the period will handle them. During the period, we are only concerned about drifts away from the current body bin of the observations and take care that the sum r− + r+ does not become too large.

Bin Boundary Test
The bin boundary test is based on the following theorem. Theorem 1. Let X_1, . . . , X_n be i.i.d. with CDF F, let 0 < q_1 < . . . < q_m < 1 and let x_{q_j} be the q_j:th percentile of F. For ε > 0, let δ_ε,j = min{F(x_{q_j} + ε) − q_j, q_j − F(x_{q_j} − ε)} (27) and δ_ε = min_{1≤j≤m} δ_ε,j. Then P(max_{1≤j≤m} |X_(⌈q_j n⌉) − x_{q_j}| > ε) ≤ 2m e^{−2n δ_ε²}. The proof of Theorem 1 in the case m = 1 is given, for example, in Section 2.3.2 of the book [19]. Extending the proof from [19] to the more general case stated above requires basic probabilistic reasoning based on the triangle inequality. We omit the proof, since it is more important to describe how it is used.
We use the marker x*_{2j} of the extended P²-algorithm to approximate X_(⌈q_j n⌉). Furthermore, the CDF F_M is strictly increasing with percentiles x^M_{q_j}, so we use F_M in place of F and x^M_{q_j} in place of x_{q_j}. The probabilistic interpretation of δ_ε,j is that the smaller it is, the more uncertain the estimation of x_{q_j} is. Theorem 1 states that the most uncertain percentile dominates the convergence of all the estimates. Figure 3 visualizes the relationship between ε and δ_ε,j when F = F_M.
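The quantities δ_ε,j and the resulting probability bound are easy to compute for any CDF. The sketch below follows our reading of the Theorem-1-style bound 2m·exp(−2nδ_ε²); the function names and the (q_j, x_{q_j}) pair representation are our own.

```python
import math

def delta(F, x_q, q, eps):
    """delta_{eps,j} = min(F(x_qj + eps) - q_j, q_j - F(x_qj - eps));
    the smaller it is, the more uncertain the estimate of x_qj is."""
    return min(F(x_q + eps) - q, q - F(x_q - eps))

def theorem1_bound(F, percentiles, eps, n):
    """Bound on the probability that some sample percentile deviates by
    more than eps: 2*m*exp(-2*n*delta_eps^2), where delta_eps is the
    minimum of delta_{eps,j} over j -- the most uncertain percentile
    dominates.  `percentiles` is a list of (q_j, x_qj) pairs."""
    m = len(percentiles)
    d = min(delta(F, x, q, eps) for q, x in percentiles)
    return 2 * m * math.exp(-2 * n * d * d)
```

For a Uniform(0, 1) source with quartile percentiles, the bound decays rapidly as n grows, illustrating why longer periods give more trustworthy marker values.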
What remains to be specified is ε > 0. When the piecewise linear F_M is used as the model, the selection of ε depends on the scale of the bins. In order to simplify the subsequent formulas, the following convenient notation is now taken into use: q_0 = 0 with x^M_{q_0} = x_(1) (28) and q_{m+1} = 1 with x^M_{q_{m+1}} = x_(n). (29) Note that x_(1) and x_(n), the period's minimum and maximum observations, are not part of the model of the control loop, but they are part of the piecewise linear CDF model F_M. The superscript M is, therefore, justified.
With the notations of Eqs. 28 and 29, the Eq. 27 of δ_ε,j simplifies to δ_ε,j = ε / ((m + 1) max{x^M_{q_j} − x^M_{q_{j−1}}, x^M_{q_{j+1}} − x^M_{q_j}}), j = 1, . . . , m. Also, with Eqs. 28 and 29, we define the maximum and the minimum bin widths as follows: ε_1 = max_{0≤j≤m} (x^M_{q_{j+1}} − x^M_{q_j}) (30) and ε_2 = min_{0≤j≤m} (x^M_{q_{j+1}} − x^M_{q_j}). (31) Now, if ε ≥ ε_1, then δ_ε,j ≥ 1/(m + 1) for all j = 1, . . . , m, (32) and if ε ≤ ε_2, then δ_ε,j ≤ 1/(m + 1) for all j = 1, . . . , m. (33) We want to minimize the use of F_M, since it is only one out of infinitely many CDFs which have the same percentiles x^M_{q_j}, j = 1, . . . , m. Due to computational feasibility with F_M, choosing any ε < ε_2 would sound attractive, but it also means that we assume that F_M is an accurate model of the source distribution. This is hardly true, at least if m = 3. The accuracy improves with larger values of m.
The choices of ε = ε 1 /m and ε = ε 2 /m both seem quite plausible. When the model is updated, the values of ε 1 or ε 2 are recomputed and the test will remain the same.
Assume that either ε = ε_1/m or ε = ε_2/m has been selected. The bin boundary test accepts the current marker values if they satisfy the acceptance criterion; in all other cases it results in rejection. The parameter β is assumed to have values such as β = 0.05 or β = 0.01 (default).

The Minimum Period Length n min (m)
Based on Theorem 1, we define n_min(m, ε) such that 2m e^{−2 n_min(m,ε) δ_ε²} ≤ 1/2. The idea is that the probability estimate of Theorem 1 is not trivial when n ≥ n_min(m, ε). The choice ε = ε_1 of Eq. 30 gives the lower bound (32) to δ_ε,j, which does not depend on ε or j and, therefore, gives an upper bound to the probability bound of Theorem 1. We can find a candidate for n_min if we set the requirement 2m e^{−2n/(m+1)²} ≤ 1/2, from which n_min(m) = n_min(m, ε_1) is n_min(m) = (1/2)(m + 1)² log(4m).
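As a quick numerical illustration of the closed-form minimum period length, the sketch below evaluates n_min(m) = (1/2)(m + 1)² log(4m) for the percentile counts used later in the performance study; rounding up to a whole number of observations is our own convention.

```python
import math

def n_min(m):
    """Minimum period length n_min(m) = (1/2) * (m + 1)**2 * log(4 * m),
    rounded up to an integer number of observations."""
    return math.ceil(0.5 * (m + 1) ** 2 * math.log(4 * m))
```

For m = 3, 7, 15 this gives roughly 20, 107 and 525 observations, so the required period length grows quickly with the number of tracked percentiles.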

The Piecewise Linear CDF Model
The piecewise linear model F_M is used mainly for visualization purposes. However, in the bin boundary test, it is also used to model the source distribution near the bin boundary points x^M_{q_j}, j = 1, . . . , m. With the notations of Eqs. 28 and 29, the formula is the following: F_M(x) = 0 for x < x^M_{q_0}, F_M(x) = q_j + (x − x^M_{q_j}) / ((m + 1)(x^M_{q_{j+1}} − x^M_{q_j})) for x^M_{q_j} ≤ x < x^M_{q_{j+1}}, j = 0, . . . , m, and F_M(x) = 1 for x ≥ x^M_{q_{m+1}}.
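Such a piecewise linear CDF can be evaluated by standard linear interpolation between the model percentile points. This is a sketch with our own function and variable names; the knot list is assumed to hold the (x^M_{q_j}, q_j) pairs in increasing order, including the period minimum (q_0 = 0) and maximum (q_{m+1} = 1).

```python
def make_cdf(knots):
    """Build a piecewise linear CDF through the points (x_qj, q_j),
    j = 0..m+1, with q_0 = 0 at the period minimum and q_{m+1} = 1 at
    the period maximum.  Standard linear interpolation between knots."""
    xs = [x for x, _ in knots]
    qs = [q for _, q in knots]
    def F(x):
        if x <= xs[0]:
            return 0.0
        if x >= xs[-1]:
            return 1.0
        # locate the bin containing x and interpolate linearly
        for j in range(len(xs) - 1):
            if x <= xs[j + 1]:
                t = (x - xs[j]) / (xs[j + 1] - xs[j])
                return qs[j] + t * (qs[j + 1] - qs[j])
    return F
```

For example, with quartile knots at 0, 1, 2, 3, 4, the model gives F(2) = 0.5 and F(0.5) = 0.125.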

The Variance of F M
Let X ∼ F_M. Since Var(X) = E(X²) − (E X)² and since the formula E(X) = x̄^M is already given in Eq. 15, only the second moment remains to be computed. With the notations of Eqs. 28 and 29, the formula is E(X²) = (1/(3(m + 1))) Σ_{j=0}^{m} ((x^M_{q_j})² + x^M_{q_j} x^M_{q_{j+1}} + (x^M_{q_{j+1}})²).
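A piecewise linear CDF is equivalent to a mixture of uniform distributions, one per bin, which makes the moments easy to compute. The sketch below uses this mixture view with our own function name; the knot list is assumed to hold (x^M_{q_j}, q_j) pairs as before, so it also works for unequally spaced percentiles.

```python
def moments(knots):
    """Mean and variance of X ~ F_M, treating the piecewise linear CDF
    as a mixture of uniforms: bin j = [a, b] carries probability mass
    q_{j+1} - q_j, with uniform mean (a+b)/2 and second moment
    (a^2 + a*b + b^2)/3."""
    mean = 0.0
    second = 0.0
    for (a, qa), (b, qb) in zip(knots, knots[1:]):
        w = qb - qa                                  # bin probability mass
        mean += w * (a + b) / 2.0                    # uniform mean on [a, b]
        second += w * (a * a + a * b + b * b) / 3.0  # uniform 2nd moment
    return mean, second - mean ** 2
```

As a sanity check, two knots (0, 0) and (1, 1) describe the Uniform(0, 1) distribution, whose mean is 1/2 and variance 1/12.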

Testing of OPE with the PoC
We have implemented a version of OPE using the programming language C [11]. It contains all the features described here except for the time stamps and the FIFO input queue, which is not part of OPE itself. In this publication, this implementation is called the proof of concept (PoC). In this Section, we present a collection of tests based on this PoC. Even though OPE is designed to be an online algorithm, only offline testing is discussed here. Offline testing allows running the PoC with exactly the same input but with different values for the parameters. Also, the focus is on the correspondence between the source percentiles and the model percentiles of OPE. Our testing framework is to consider OPE as a deterministic one-pass (single-pass) algorithm, because this framework already has some theoretical results about space complexity, see [13]. The space complexity of a deterministic one-pass algorithm which computes the (population) median of a file of N numbers is Ω(N), that is, at least of order N asymptotically.
If the global estimates of the source percentiles are computed from the temporally local estimates of all the 32-records (e.g., as the mean of the percentiles), this whole offline process can be considered as a deterministic, one-pass algorithm with a large input file as a source. This is achieved when the offline data source file is an output of a simulated, stationary time series. The abstract source percentiles are then practically the same as the population percentiles of the simulated file. In this special case, the size of all the output records (16) corresponding to a given input file is considered as a measure of the space complexity of OPE, and it is Ω(N).
We mention just briefly that the PoC has been tested in internal projects at VTT with light sensor data, wireless mesh network delay measurement data, and vibration sensor data from a revolving machine.

Performance Metrics
First, we present the performance metrics which are considered within the above described framework of a deterministic, single-pass algorithm. These metrics are accuracy, bandwidth savings and the buffer size.

Accuracy
Measuring accuracy is reasonable only when the true source percentiles x^src_{q_j}, j = 1, . . . , m, exist and are known. Also, only 32-records can be assumed to contain accurate information about the source percentiles. For brevity, we will only report an overall accuracy, which we define next. First, let X̄^32_{q_j} = (1/n_32) Σ_{l=1}^{n_32} X^M_{q_j,l}, (37) in which n_32 is the number of 32-records and X^M_{q_j,l} is the q_j:th model percentile value reported in the l:th 32-record. The averaging in Eq. 37 is reasonable in the test framework, since all the 32-records model the same stationary state. The overall accuracy can then be defined as the average over the m differences between the estimates X̄^32_{q_j} and the true values: (1/m) Σ_{j=1}^{m} (X̄^32_{q_j} − x^src_{q_j}). (38) This is a single number, and the average over m allows a comparison between different m.
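The overall accuracy metric can be sketched as two nested averages: first over the 32-records per percentile, then over the m percentiles. The function name and data layout (one list of m model percentile values per 32-record) are our own, and we read "differences" as signed deviations.

```python
def overall_accuracy(records, true_percentiles):
    """Overall accuracy sketch: average each model percentile over all
    32-records (Eq. 37 style), then average the m signed deviations from
    the true source percentiles into a single number."""
    n32 = len(records)
    m = len(true_percentiles)
    # per-percentile mean over all 32-records
    avg = [sum(rec[j] for rec in records) / n32 for j in range(m)]
    # mean signed deviation over the m percentiles
    return sum(avg[j] - true_percentiles[j] for j in range(m)) / m
```

A value near zero indicates that the 32-record estimates are, on average, centered on the true source percentiles.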

Bandwidth Savings
The IoT application idea of OPE is that the output records (16) are not stored where they are computed, but transmitted to a cloud server. Therefore, bandwidth savings, defined as (1 − Output(Bytes)/Input(Bytes)) · 100%, is a plausible metric. The output is formed according to the provision of the PoC, which includes the phase code, the model percentiles, the period lengths and some other metadata on failure records. The output size depends on the parameters of the PoC. All the output records are included in Output(Bytes).
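The metric can be sketched as a one-line helper; the function name is our own, and it assumes the savings are expressed as a percentage of the input bytes.

```python
def bandwidth_savings(input_bytes, output_bytes):
    """Bandwidth savings as a percentage: the fraction of input bytes
    that never needs to be transmitted.  All output records count."""
    return 100.0 * (1.0 - output_bytes / input_bytes)
```

For example, reducing a 1000-byte input to 30 bytes of output records corresponds to 97% savings.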

The Size of the Buffer M b
As the content of the buffer is sorted frequently and sorting is computationally expensive, the buffer size is a performance metric.

Study Case: Buffer Size vs. The Learning Parameter
As we discussed in Section 5.2, the relationship between the buffer size M b and the learning parameter α is of interest. The effect of α is included in the performance case study which is described here.

Input Files
The input files were simulated realizations from a certain moving average (MA) model of order r, which we specify here. See Chapter 3.5.5 of [14], Section 3.4.3 of [5] or Chapter 3 of [4] for general technical details of MA(r) time-series models. The orders r = 2, 4, 8, 16, 32 and 64 are considered. For each r, a separate input file was generated with N = 500 000 values. The specific MA(r) model used here is the following. Assume r is chosen, let (Z_t) be an infinite sequence of independent N(0, 1) distributed random variables and define X_t as a moving average of Z_t, . . . , Z_{t−r} with suitably chosen parameters β_i. With these parameters, the corresponding MA(r) model is invertible. Invertibility ensures that there is a unique MA(r) model for a given autocorrelation function (ACF) [5]. With these choices, E(X_t) = 0 for all t and, due to the independence of the Z_t, the variance of X_t is given by Eq. 41. The ACF at lag k ≥ 0 is given by Eq. 42. For lags k < 0, the ACF is defined as ρ(−k) and, therefore, ρ(k) ≥ 0 for all lags k. The time series (X_t) is strictly stationary and, since the Z_t are N(0, 1) distributed, by Eq. 41 the X_t are N(0, 2) distributed. The source percentiles x^src_{q_j} of the distributions of the X_t are the same for all the input files and their numerical values are known with high precision.
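A stationary MA(r) input stream of this kind can be simulated in a few lines. Note the loud caveat: the paper's exact coefficients β_i are not reproduced here; β_i = 1/√r is our own illustrative choice, which does give Var(X_t) = 2 and a nonnegative ACF as in the study, but it is an assumption, not the authors' specification.

```python
import math
import random

def simulate_ma(r, n, seed=1, beta=None):
    """Simulate an MA(r) series X_t = Z_t + sum_i beta_i * Z_{t-i} with
    i.i.d. N(0, 1) noise Z_t.  beta_i = 1/sqrt(r) is an ASSUMED,
    illustrative choice (the paper's exact beta_i are not given here);
    it yields Var(X_t) = 1 + r * (1/r) = 2 and a nonnegative ACF."""
    rng = random.Random(seed)
    if beta is None:
        beta = [1.0 / math.sqrt(r)] * r  # assumed coefficients
    z = [rng.gauss(0.0, 1.0) for _ in range(n + r)]
    return [z[t] + sum(b * z[t - i - 1] for i, b in enumerate(beta))
            for t in range(r, n + r)]
```

Feeding such a series to the PoC offline, as described above, lets the model percentiles be compared against the known N(0, 2) source percentiles.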
The previously defined MA(r) models with the ACF (42) were chosen because, with larger orders r, their realizations tend to have long sequences of consecutive values which are either above or below zero. Zero is also the source median for all of the specified MA(r) models. Therefore, the property ρ(k) > 0 for all lags |k| ≤ r challenges the initialization and model building phases of the control loop, which are the novel features. This is visualized in Fig. 4. If ρ(k) < 0 held for some small lag k, then the initialization and the model building would benefit from it.

Parameter Ranges
For each of the six input files, the values m = 3, 7, 15 of the parameter m are considered. For each m, the buffer sizes M_b(k) = k(2m + 3), k = 1, . . . , 12, are considered. Thus, the buffer sizes are multiples of 2m + 3, which is the minimal possible buffer size. For each pair (m, M_b(k)), the discretized learning parameter values α(h) are considered. The accuracy and the bandwidth savings are then considered as functions of (m, M_b, α). The other parameters were fixed to their defaults shown in Table 1.

Results
First, a study of the applicability of OPE for the specified MA(r) input files with different orders r was made. The proportion of 32-records in the output is the criterion of applicability, as defined in Section 6. In Fig. 4, an exhaustive search was made over all pairs (k, h); as the order r grows, the representativeness of the available data in the initialization and model building phases becomes poorer. In Fig. 5, the discretizing parameter pairs (k, h) which produced the maximum applicability in Fig. 4, with the given m and r, are visualized in the space of (k, h). The pairs (k, h) are not necessarily unique; in the non-unique case, the pair with the smallest k is shown. The cases m = 3, 7, 15 are considered, and for each m the points corresponding to input data of orders r = 2, 4, 8, 16, 32 and 64 are plotted and joined. The end points with r = 64 are marked. The horizontal axis is in the scale k = M_b(k)/(2m + 3) only. The vertical scale is given both in h and in the α(h) scale. There is not much structure in Fig. 5, but it appears that two simple policies may be behind the best applicability: either the buffer size M_b is large and α is small, or a smaller buffer size is accompanied by a larger α. The notable points are that the choice of the parameters M_b and α can significantly affect the performance when the applicability is low (r = 64), and that the maximum considered buffer size M_b(12) = 12(2m + 3) is not automatically needed. Figure 6 shows how the accuracy depends on the applicability. The gray area is a rough estimate of the 99% confidence interval in the case m = 15, obtained from Eq. 38 by assuming in Eq. 37 that (see Chapter 2.1.1 of [19]) X̄^32_{q_j} ∼ N(x^src_{q_j}, q_j(1 − q_j)/n_32), j = 1, . . . , m, which is considered as the null hypothesis of accuracy in our test framework. Finally, the bandwidth savings are presented in Fig. 7. They are computed for the maximizing pairs (k, h).
If the applicability of OPE is high and the data source is stationary, then the bandwidth savings with m = 15 can be more than 97%. This indicates that there is not much information in the simulated input data with r = 2; it can be interpreted as almost white noise with a minor memory effect. Recall that all the output records are included in the bandwidth savings.
If the buffer size M b is limited beforehand in an application, but some time-series samples of the application data are available in the design phase, one can try to estimate a sufficient value for the α of OPE in a similar way as presented in this section. This would involve estimating the sample autocorrelation function of the application data, simulating data with a similar autocorrelation structure, feeding it to OPE and then choosing the α with the best applicability, that is, the largest proportion of 32-records.

Complexity Issues
Our control loop alternates between sorting a fixed-size buffer of M_b observations and the extended P²-algorithm. Even though the general complexity of sorting is of the order O(N log N), since the buffer size is constant, the complexity of the sorting phases is at most of constant order, O(M_b log M_b) = O(1). The control loop adds complexity to the whole, and our main effort has been to guarantee that this complexity of the control loop is also constant. Thus, the overall complexity of OPE is of constant order, but this constant is not the minimal possible. Indeed, there is room for further research on developing an algorithm with lower complexity. For example, replacing the P²-algorithm with the other algorithms that were discussed in Section 2 may provide a controlled algorithm with lower complexity.

Conclusion
We have described a sequential, online algorithm, OPE, which either computes or estimates the temporal percentiles of the observed, empirical distribution of the data. This is done without storing all of the observations. OPE works with a parameter-defined, fixed-size small amount of memory and limited CPU power. It can be used for the statistical reduction of a numerical data stream. It transforms the univariate input stream into an output stream of vectors, in which each vector can be interpreted as an empirical distribution of the data starting immediately after the previous vector. For IoT applications, there is a significant reduction in the amount of data (bytes) that needs to be transmitted to a cloud server. The output data is structured in a (cumulative) histogram format, which allows quick recognition of trends or anomalies that are visible in the histogram scale. We provided requirements that should be satisfied when OPE is applied, and we illustrated that OPE contains a built-in applicability metric, the proportion of 32-records. We also presented a performance study, which was specially designed to challenge the control loop part of OPE. A lesson learned from this performance study is that 32-records are self-informative in the sense that their interpretation does not require more metadata than the knowledge that they are 32-records, and they remain sufficiently accurate even when the applicability of OPE decreases.