# Competitive Detector of Changes with a Statistical Test


## Abstract

The detector of jumps or changes in the function value and its derivative designed with the use of the concept of competing approximators is revisited. The previously defined condition for the existence of a jump in the function value is extended by introducing a statistical test of significance. This extension makes it possible to eliminate some false positive detections which appeared in the previously obtained results. The features of the extended detector are demonstrated on some artificial and real-life data.

## Keywords

Competitive detector, Function change, Statistical test

## 1 Introduction

The detector of changes in a one-dimensional signal which is our object of interest originated from the concept of a filter for two-dimensional images denoted as the *competitive filter* in [10, 11]. The change detection ability of this filter was later noticed in [3]. The detector did not work well in two-dimensional images [4], but for one-dimensional signals it proved useful and made it possible to detect changes in the function value as well as in its first derivative [5]. In the present paper we put the filtering effect aside and pay attention to the detection of changes in the function value only. One of the problems noticed in [5] was that the detector made some false positive errors. In this paper we complement the basic detector with a statistical test which, in our opinion, reduces the number of false positives.

The question of detecting changes in signals is a domain of intensive research. Within image processing it was surveyed in [1, 2, 9], and in motion detection in [7, 12]. However, it seems that the concept of competitiveness as understood in [3] was absent from the research reported there. Because the detector of our interest has its origin in the domain of image processing, a change will sometimes be called a jump or an edge; a change in the value of the function will be called a step, and a change in the derivative of the function can also be called a roof.

The concept of the method needs no assumption on the nature of the data analyzed. The only operation on the data is approximation with polynomial functions, one at each side of the considered point, without the condition of continuity at this point. In the present paper we use a simple approximation with a linear function, which can be treated as the first step of a potentially more developed approach. The assumptions are introduced only in the statistical test of the significance of the detected edge. At present we assume a Gaussian distribution of the noise, but this too is only the first, simplest choice, which can easily be replaced with more advanced ones.

This paper is organized as follows. In Sect. 2.1 the main concept of the detector, already described in [5], is recalled in its basic form. As a complement to the heuristic criterion of edge existence, described in Sect. 2.2, the statistical criterion proposed in this paper is introduced in Sect. 2.3. In Sect. 2.4 the functioning of the detector with both criteria is explained on simple data. Finally, the ability to reject false positives in noisy data is shown in Sect. 3.1 and the detection of changes in some real-life data is presented in Sect. 3.2. The results are discussed in Sect. 4 and the paper is closed in Sect. 5.

## 2 The Method

### 2.1 General Concept

A sequence of measurements \(z(x)=y(x)+n(x)\) will be considered, where the independent variable *x* is discrete and *n*(*x*) is noise. The filtering and detection are performed at the point \(x_0\), called the *central point*. If *x* is time, the past measurements are considered known up to the point \(x_0\), and the future measurements are considered known up to \(x_0+D\). The competitive structure of the detector lies in that two approximators, referred to as the *Left* and the *Right* one, are used to find \(y(x_0)\). The first one operates on the *past* data at the left side of \(x_0\), using \(z(x), x\in [x_0-s-\varDelta ,x_0-\varDelta ]\), to find \(\hat{y}_L(x_0)\). The second one operates on the *future* data at the right side of \(x_0\), using \(z(x), x\in [x_0+\varDelta , x_0+s+\varDelta ]\), to find \(\hat{y}_R(x_0)\). The parameter *s* is the scale of the filter. The parameter \(\varDelta \) is the gap between the central point \(x_0\) and the approximators. Each approximator makes an error; the errors \(e_L(x_0)\) and \(e_R(x_0)\), respectively, can be estimated from the differences between the data and the approximated values. The filtered value at the central point, \(\hat{y}(x_0)\), is taken as the output of the approximator which has the smaller error; in this the competitiveness of the filter can be seen. As the output value, the value at \(x_0-\varDelta \) from the left approximator, or at \(x_0+\varDelta \) from the right one, is used, to avoid extrapolation. This gives more stable results than using values extrapolated to \(x_0\). As in [3], linear least-squares approximators are used and their mean square errors serve as the approximation errors.
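The competition between the two approximators can be sketched in code. This is only an illustration of the description above, not the authors' software: the window conventions, the RMS error measure, and the parameter values `s` and `delta` are assumptions.

```python
import numpy as np

def side_fit(xs, zs):
    """Linear least-squares fit; returns the fitted values and the RMS error."""
    coeffs = np.polyfit(xs, zs, 1)            # slope and intercept
    fitted = np.polyval(coeffs, xs)
    return fitted, np.sqrt(np.mean((zs - fitted) ** 2))

def competitive_filter(z, s=5, delta=1):
    """Filtered value at each admissible central point x0.

    The left approximator uses z(x), x in [x0-s-delta, x0-delta]; the right
    one uses z(x), x in [x0+delta, x0+s+delta].  The output is taken from
    the approximator with the smaller error, at x0-delta (left) or
    x0+delta (right), to avoid extrapolating to x0 itself."""
    n = len(z)
    out = np.full(n, np.nan)                  # NaN where no full support fits
    for x0 in range(s + delta, n - s - delta):
        xl = np.arange(x0 - s - delta, x0 - delta + 1)   # left support
        xr = np.arange(x0 + delta, x0 + s + delta + 1)   # right support
        fl, el = side_fit(xl, z[xl])
        fr, er = side_fit(xr, z[xr])
        out[x0] = fl[-1] if el <= er else fr[0]          # the better one wins
    return out
```

Near a step, the approximator whose support does not straddle the discontinuity has the smaller error and wins, which is what preserves the edge in the filtered output.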

### 2.2 Heuristic Criterion of Jump

Near a step, for increasing *x* the error from the past increases, and for decreasing *x* the error for the future increases. In the present paper it is assumed that at least one of the errors increases in this case. These conditions can be expressed formally\(^{1}\). Let us imagine the process of filtering and edge detection in such a way that the central point, with the two approximators at its left and right side, moves along the data from left to right. When a step is encountered, first the right approximator moves over it: the step enters the right approximator's support, so the error of the right approximator goes up, as in Fig. 1a. As the analyzed point moves forward, the step leaves the support of the right approximator, so its error goes down, and enters the support of the left one, as in Fig. 1b; hence, the error of the left approximator increases. When both approximators leave the region of the step, both errors go down. It can be noticed that there are no separate conditions for the two edge types detected. In the case of detection, if one of the edge types is missing, its intensity is zero (examples will be shown further in Fig. 2, where the roof edge is zero for \(x=10,11\) or the step edge is zero for \(x=20\)).
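The rise and fall of the two errors around a step, as described above, can be reproduced numerically. The sketch below is only an illustration under assumed parameters (s = 5, Δ = 1, a clean step of height 10); the exact formal conditions of the criterion are not reproduced.

```python
import numpy as np

def side_errors(z, s=5, delta=1):
    """RMS errors e_L(x0), e_R(x0) of the left and right linear
    approximators; their local increase signals a step."""
    n = len(z)
    eL = np.full(n, np.nan)
    eR = np.full(n, np.nan)
    for x0 in range(s + delta, n - s - delta):
        xl = np.arange(x0 - s - delta, x0 - delta + 1)   # left support
        xr = np.arange(x0 + delta, x0 + s + delta + 1)   # right support
        for xs, store in ((xl, eL), (xr, eR)):
            fit = np.polyval(np.polyfit(xs, z[xs], 1), xs)
            store[x0] = np.sqrt(np.mean((z[xs] - fit) ** 2))
    return eL, eR

# A clean step of height 10 at x = 20: e_R rises while the step is inside
# the right support, then e_L rises as the step enters the left support.
z = np.concatenate([np.zeros(20), np.full(20, 10.0)])
eL, eR = side_errors(z)
```

Far from the step both errors are near zero; plotting `eL` and `eR` reproduces the two humps sketched in Fig. 1.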

### 2.3 Statistical Criterion of Significance of a Jump

In the ideal case the errors behave as described above near an edge at the assumed scale *s*. However, it is not always so; therefore false positive detections (as well as false negative ones) can sometimes occur. This is why we introduce a simple mechanism of additionally testing the edge significance in a statistical way, to exclude false positive detections. In the present Subsection some notations will contain a superscript *s*, as in *statistical*, to underline the differences between these notations and those from the previous text; finally, the mutual relations of the relevant notations will be explained.

Let us assume that the sequence of measurements forms a piecewise linear signal, not necessarily continuous, with additive Gaussian noise. For an isolated point \(x_0\) it is observed that \(y(x)=a_L+b_Lx+\epsilon _x\) for \(x<x_0\) and \(y(x)=a_R+b_Rx+\epsilon _x\) for \(x\ge {}x_0\), where the noise \(\epsilon _x\) has a zero-mean normal distribution. There is a jump at \(x_0\) if \(\theta =a_R-a_L\ne {}0\). Let us verify the hypothesis \(H_0: \theta =0\) (the jump is absent) against the alternative \(H_1: \theta \ne {}0\) (the jump is present). To verify this, the test statistic \(|\hat{y}^s_R(x_0)-\hat{y}^s_L(x_0)|\) is used, where \(\hat{y}^s_L(x_0)\) is the linear regression prediction from the *s* points on the left of \(x_0\), without this point, that is, from the set \(X_L=\{x\in [x_0-s,x_0-1]\}\), and \(\hat{y}^s_R(x_0)\) is the linear regression prediction from the *s* points on the right of \(x_0\), with this point, that is, from the set \(X_R=\{x\in [x_0,x_0+s-1]\}\). An isolated jump is detected if the test statistic exceeds a threshold proportional to the standard deviation of the difference of the two predictions under \(H_0\), at the assumed significance level (condition (4)).
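Under the Gaussian model above, the test can be sketched as a standard comparison of the two regression predictions at \(x_0\). The exact threshold formula of condition (4) is not reproduced here; the sketch assumes a known noise level `sigma` and uses the two-sided 5% critical value of the standard normal distribution.

```python
import numpy as np

def jump_test(z, x0, s=5, sigma=1.0):
    """Test of H0: theta = a_R - a_L = 0 (no jump at x0).

    Returns (detected, statistic, threshold).  The statistic is
    |yhat_R(x0) - yhat_L(x0)| from linear regressions on
    X_L = [x0-s, x0-1] (x0 excluded) and X_R = [x0, x0+s-1] (x0 included)."""
    def predict(xs, x):
        coeffs = np.polyfit(xs, z[xs], 1)
        yhat = np.polyval(coeffs, x)
        xbar = xs.mean()
        # variance factor of a linear regression prediction at point x
        v = 1.0 / len(xs) + (x - xbar) ** 2 / np.sum((xs - xbar) ** 2)
        return yhat, v

    xL = np.arange(x0 - s, x0)
    xR = np.arange(x0, x0 + s)
    yL, vL = predict(xL, x0)
    yR, vR = predict(xR, x0)
    stat = abs(yR - yL)
    # under H0, yR - yL is N(0, sigma^2 (vL + vR)); 1.96 is the two-sided
    # 5% critical value of N(0, 1)
    threshold = 1.959964 * sigma * np.sqrt(vL + vR)
    return stat > threshold, stat, threshold
```

With nearly noiseless data any nonzero difference becomes significant, which matches the observation in Sect. 2.4 that the statistical criterion alone tends to accept very small changes when the error measures are small.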

### 2.4 Results for Both Criteria

The detector can indicate a step edge, a roof edge, or both, in the function *y*(*x*). If both are present, the one having the larger modulus is taken. If there is an edge according to (2) and (3), then a statistically significant edge exists. If (2) holds and (3) does not, then the edge is statistically insignificant and it is dismissed. If (2) is false, there is no need to check (3), although in the present paper both conditions are calculated independently to show the results in a detailed way.

In Fig. 2 the result is shown for data in which all the changes detectable by the heuristic algorithm are present: a step edge, a roof edge, and a combined step and roof edge. The data are synthetic and clean. What is apparent is that the roof edge is detected at a single point, like the one at \(x=20\), while the step edge, like the one at \(x=10,11\), is found at two points. This is correct, since a jump of a discrete function appears between two points. It can be noted that the statistical criterion tends to detect very small changes of the value if the error measures are small, since in (4) the threshold depends on the variance. This gave rise to a continuous edge between \(x=20\) and 26. This edge was not accepted by the heuristic condition, though.
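The combined decision described above is a simple conjunction: the heuristic condition selects candidate edges and the statistical test confirms them. A minimal sketch (the function name is illustrative, not from the paper):

```python
def accept_step(heuristic_holds: bool, statistically_significant: bool) -> bool:
    """An edge is reported only when the heuristic condition (2) holds and
    the statistical condition is also satisfied; if the heuristic fails,
    the statistical test need not be evaluated at all."""
    return heuristic_holds and statistically_significant
```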

## 3 Examples

### 3.1 Rejection of False Positives in Noise

The ability to reject the less significant step changes was tested on data with zero-mean Gaussian noise with \(\sigma =10\). The results are shown in Fig. 3. It can be seen that some, though not all, false positive detections were successfully rejected, while the most significant, strong jump at \(x=30,31\) was consistently retained.

### 3.2 Real-Life Example

## 4 Discussion

The use of the statistical test reduced the number of false positive detections. The number of false negatives sacrificed seems to be small, but this needs further analysis. The considered algorithm has a set of advantages and drawbacks. As advantages, the following features can be named: two approximators are used, so the jump can be modelled directly, and the complexity of the algorithm with respect to the size of the data is linear, because only a local neighborhood of a data point, of a fixed size, is considered. Coming to the drawbacks, some data concerning the future with respect to the considered data point must be known to perform the analysis, and the method has some free parameters which have to be selected, while the criteria for such selection are not self-explanatory.

## 5 Summary and Prospects

The concept of the competitive filter was extended by adding a statistical test used to check the significance of a jump of the function value. The test uses results available from calculations already performed, so the computing load is small and the complexity of the algorithm remains linear with respect to the data size. The introduction of the statistical test reduced the number of false positive detections. The test can be used as a post-processor of the detection results. The design of the test can be extended to the derivatives of the function, and its form can be improved. Also, the assumptions on the distribution of noise in the data can be changed and the criterion reformulated accordingly. This stage of research can be treated as a proof of concept only, but the idea of combining statistical testing with heuristics seems to be one of the promising directions of development of the concept of competitive filtering and detection.

## Footnotes

- 1.
The graphs used in this paper as well as the software were developed in Matlab.

## References

- 1. Basu, M.: Gaussian-based edge-detection methods: a survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) **32**(3), 252–260 (2002). http://dx.doi.org/10.1109/TSMCC.2002.804448
- 2. Bhardwaj, S., Mittal, A.: A survey on various edge detector techniques. Procedia Technol. **4**, 220–226 (2012). http://dx.doi.org/10.1016/j.protcy.2012.05.033
- 3. Chmielewski, L.: The concept of a competitive step and roof edge detector. Mach. Graph. Vis. **5**(1–2), 147–156 (1996)
- 4. Chmielewski, L.: Failure of the 2D version of the step and roof edge detector derived from a competitive filter. Report of the Division of Optical and Computer Methods in Mechanics, IFTR PAS, December 1997
- 5. Chmielewski, L.J., Orłowski, A.: Detecting changes with the robust competitive detector. In: Alexandre, L.A., Sánchez, J.S., Rodrigues, J.M.F. (eds.) Proceedings of the 8th Iberian Conference on Pattern Recognition and Image Analysis IbPRIA 2017. LNCS, vol. 10255. Springer, Faro, Portugal, 20–23 June 2017. doi: 10.1007/978-3-319-58838-4_39
- 6. Furmańczyk, K., Jaworski, S.: Large parametric change-point detection by a V-box control chart. Sequential Anal. **35**(2), 254–264 (2016). http://dx.doi.org/10.1080/07474946.2016.1165548
- 7. Hu, W., Tan, T., Wang, L., Maybank, S.: A survey on visual surveillance of object motion and behaviors. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) **34**(3), 334–352 (2004). http://dx.doi.org/10.1109/TSMCC.2004.829274
- 8. Jaworski, S., Furmańczyk, K.: On the choice of parameters of change-point detection with application to stock exchange data. Quant. Methods Econ. **12**(1), 87–96 (2011)
- 9. Maini, R., Aggarwal, H.: Study and comparison of various image edge detection techniques. Int. J. Image Process. (IJIP) **3**(1), 1–11 (2009)
- 10. Niedźwiecki, M., Sethares, W.: New filtering algorithms based on the concept of competitive smoothing. In: Proceedings of the 23rd International Symposium on Stochastic Systems and their Applications, pp. 129–132. Osaka (1991)
- 11. Niedźwiecki, M., Suchomski, P.: On a new class of edge-preserving filters for noise rejection from images. Mach. Graph. Vis. **1–2**(3), 385–392 (1994)
- 12. Räty, T.D.: Survey on contemporary remote surveillance systems for public safety. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) **40**(5), 493–515 (2010). http://dx.doi.org/10.1109/TSMCC.2010.2042446