Improving state estimation through projection post-processing for activity recognition with application to football

The past decade has seen an increased interest in human activity recognition based on sensor data. Most often, the sensor data come unannotated, creating the need for fast labelling methods. For assessing the quality of the labelling, an appropriate performance measure has to be chosen. Our main contribution is a novel post-processing method for activity recognition. It improves the accuracy of the classification methods by correcting for unrealistic short activities in the estimate. We also propose a new performance measure, the Locally Time-Shifted Measure (LTS measure), which addresses uncertainty in the times of state changes. The effectiveness of the post-processing method is evaluated, using the novel LTS measure, on the basis of a simulated dataset and a real application on sensor data from football. The simulation study is also used to discuss the choice of the parameters of the post-processing method and the LTS measure.


Introduction
In almost all areas of science and technology, sensors are becoming more prevalent. In recent years we have seen applications of sensor technology in fields as diverse as energy saving in smart home environments (Lima et al., 2015), performance assessment in archery (Eckelt et al., 2020), detection of mooring ships (Waterbolk et al., 2019), early detection of Alzheimer's disease (Varatharajan et al., 2018) and recognition of emotional states (Kołakowska et al., 2020), to name just a few.
Our main interest lies in the detection of human activities using sensors attached to the body. Sensors generate unannotated raw data, suggesting the use of unsupervised learning methods. If an activity specified in advance is of interest, then supervised learning and labelled data are required. However, the task of labelling activities manually from sensor data is labour-intensive and prone to errors, which creates the need for fast and accurate automated methods.
Human activity recognition (HAR) has attracted much attention since its inception in the 1990s. A plethora of methods are currently being used to detect human activities (Lara and Labrador, 2013), with various deep learning techniques leading the charge (Minh Dang et al., 2020; Wang et al., 2019). In many studies (Ronao and Cho, 2017; Capela et al., 2015; Aviles-Cruz et al., 2019) only sensors embedded in a smartphone are used to classify user activities. Physical sensors, such as accelerometers or gyroscopes attached directly to the body, and video recordings (from a camera) are the most popular sources of data for activity recognition (Rednic et al., 2012; Zhu and Sheng, 2011; Cornacchia et al., 2016). Cameras can either be placed on the subject (Li et al., 2011; Ryoo and Matthies, 2013; Watanabe et al., 2011) or observe the subject (Song and Chen, 2011; Laptev et al., 2008; Ke et al., 2005). Rarely, both camera and inertial sensor data are captured at the same time (Chen et al., 2015).
The temporal structure of the time series should be taken into account when choosing a method for activity recognition. Simple classification techniques (such as logistic regression or decision trees) ignore time dependencies, so their output needs to be corrected afterwards. Alternatively, methods which are more complicated and more difficult to train have to be deployed. Another challenge lies in the reliability of manual labelling (in the case of supervised learning). Quite often it is unreasonable to assume that labels annotating the observed data are exact with regard to the timings of transitions from one activity to another (Ward et al., 2006). Timing uncertainty can be caused by a deficiency of the manual labelling or by the inability to objectively detect boundaries between different activities. This issue is well known in the literature; for instance, Yeh et al. (2017) introduced a scalable, parameter-free and domain-agnostic algorithm that deals with this problem in the case of one-dimensional time series.
The main contribution of this paper is the introduction of a post-processing procedure which improves the result of an activity classification by eliminating activities that are too short. The method requires a single parameter which can be interpreted as the minimum duration of the activities (hence the choice of this parameter is driven by domain knowledge). It allows us to mitigate the problem of activities being fragmented in cases where some domain-specific information about state durations is available. Based on empirical evidence, the performance of classical machine learning classifiers is improved significantly by our post-processing method. This enables simple and fast but less accurate classification methods to be upgraded to accurate and fast classifiers.
In order to compare the quality of competing activity recognition methods, an appropriate criterion for evaluating the performance is needed (also to demonstrate the performance of the post-processing procedure we introduce). Below are some commonly used performance measures:
• accuracy, precision and the F-measure (Lara and Labrador, 2013; Lima et al., 2019),
• similarity measures for time series classification (Serrà and Arcos, 2014), such as Dynamic Time Warping or Minimum Jump Costs Dissimilarity,
• a custom vector-valued performance metric (Ward et al., 2011).
Our objective is to design a performance measure that satisfies problem-specific conditions, which will be specified later.
The outline of the paper is as follows. Section 2 provides a method for improving classification with a post-processing scheme that uses background knowledge on the specific context. In particular, it validates the state durations and provides an improved classification that satisfies the physical constraints on the state durations imposed by the context. Section 3 introduces specialized performance measures for assessing the quality of classification in general and in activity recognition in particular. The new performance measure is also designed to show the advantages of the post-processing fairly. Section 4 presents an application of the techniques in a simulated setting. The post-processing method was able to improve the estimates significantly. The method achieves similar results in an application to football data.

2 Improving classification by imposing physical restrictions

2.1 Post-processing by projection
When recognizing human activities, it is often the case that the result of the classification contains events (time intervals in which the classification result is constant) that are too short. Usually ad hoc methods are used to discard those events, e.g. removing any short event and replacing it with the next state in the classification whose length is above a fixed threshold. The goal of this section is to introduce a formalized approach to correcting the classifier's mistakes regarding the distribution of durations, by means of a novel post-processing procedure.

Consider the set of states S = {1, ..., M} and a metric d on S. Let ρ denote the discrete metric on S, i.e. ρ(x, y) = 1 if x ≠ y and 0 otherwise. Any S-valued function of time will be called a state sequence. In reality we are only able to obtain a discrete-time signal; however, the relevant information contained in such a signal is the list of all state transitions, which can more easily be encoded in a function with a continuous argument. Hence, we define T, the set of all càdlàg (right-continuous with left limits) functions f : R → S with a finite number of discontinuities. We define the standard distance induced by a metric d between two state sequences f, g ∈ T as

dist(f, g) = ∫_R d(f(t), g(t)) dt.    (1)

If d is a metric on S, then dist is a metric on T. The standard distance induced by the discrete metric is the time spent by f in a state different from g.

Now, we define a measure of closeness between functions in T, as our goal is to find a function close enough to a given function in T while reducing the number of jumps it has (which in turn will eliminate short events in the state sequence). Let f, g ∈ T. Then we introduce the notation

E_γ(f, g) = dist(f, g) + γ|J(g)|,    (2)

where J(g) is the set of all discontinuities of g, |J(g)| is the number of all discontinuities of g and γ is a penalty for a single jump of g. Given f ∈ T, our goal is to find any solution f̂ ∈ T of the minimization problem

f̂ ∈ arg min_{g ∈ T} E_γ(f, g).    (3)

As a default, we will use the standard distance induced by the discrete metric.
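To make the definitions above concrete, here is a minimal sketch (our own code, not from the paper) that evaluates the standard distance induced by the discrete metric and the penalized error E_γ for state sequences restricted to a finite window [0, M]. The representation of a state sequence as a sorted list of (start_time, state) pairs and the function names are our own assumptions.

```python
def standard_dist(f, g, M):
    """Standard distance induced by the discrete metric: total time in [0, M)
    where the cadlag step functions f and g disagree."""
    def value(seq, t):
        # state of a cadlag step function at time t
        return [s for t0, s in seq if t0 <= t][-1]
    cuts = sorted({0.0, M} | {t for t, _ in f} | {t for t, _ in g})
    return sum(b - a for a, b in zip(cuts, cuts[1:])
               if value(f, a) != value(g, a))

def penalized_error(f, g, M, gamma):
    """E_gamma(f, g) = dist(f, g) + gamma * |J(g)| on the window [0, M)."""
    return standard_dist(f, g, M) + gamma * (len(g) - 1)
```

For instance, with f represented as [(0.0, 0), (0.35, 1), (0.45, 0), (0.55, 1)] on [0, 1], removing the two middle jumps increases the distance term by only 0.1 while saving two jump penalties.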
In order to characterize a solution f̂ of problem (3) we present the following lemma.
Lemma 2.1. Let γ > 0 and f ∈ T. Let J denote the set of all discontinuities of the function f. There exists a solution f̂ of problem (3) that does not contain jumps outside of J.
Lemma 2.1 leads to the conclusion that in the search for a solution of the minimization problem we can limit ourselves to a finite set of functions, namely the subset of T with jumps only allowed at the jump locations of f. The proof of lemma 2.1 can be found in the appendix.
In this minimization problem the choice of the parameter γ plays a crucial role. We will now give an interpretation of the penalty parameter that eases the process of choosing it. It will also allow us to reformulate problem (3). First, we define a new set of functions.
Definition 2.1 (Function with bounded minimum duration of states). Given a parameter γ > 0, we define G_γ ⊂ T, the set of functions with bounded minimum duration of states, such that g ∈ G_γ if and only if

g = Σ_{i=1}^{n−1} s_i 1_{[t_i, t_{i+1})}

for some constant n ∈ N, a sequence of states {s_1, ..., s_{n−1}} such that s_i ≠ s_{i+1} for i = 1, ..., n − 2, and an increasing sequence t_1 < t_2 < ... < t_n with t_{i+1} − t_i ≥ γ for i = 1, ..., n − 1 (we allow t_1 = −∞ and t_n = ∞).

Lemma 2.2 yields a connection between the penalty γ and the minimum duration of states that we impose on the solution of our minimization problem.
This lemma can be used in practice to select the size of the penalty.The proof of lemma 2.2 can be found in the appendix.
Given f ∈ T, the minimization problem (3) is equivalent, by lemma 2.2, to the minimization problem

f̂ ∈ arg min_{g ∈ G_γ} E_γ(f, g).    (4)

f̂ will be called a projection of f onto G_γ.
As mentioned before, the regularization by penalizing a high number of jumps narrows down the set of possible solutions to a finite nonempty subset of G_γ (thanks to lemma 2.1), which guarantees the existence of f̂. However, the solution need not be unique, as illustrated by the following example.
Consider S = {0, 1}, f = 1_{[0.35,0.45)} + 1_{[0.55,+∞)} and γ = 0.2. Both f̂_1 = 1_{[0.35,+∞)} and f̂_2 = 1_{[0.55,+∞)} are projections of f. One could regard this as an issue; however, it reflects our understanding of the original problem well. The assumption is that f has impossibly short events because it is uncertain which activity is actually performed in the interval [0.35, 0.55). Looking only at f we are unable to decide which solution is more suitable, hence it is only natural that the method returns two possible options.
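The non-uniqueness in this example can be checked by brute force. The sketch below (our own code, with f truncated to the window [0, 1]) enumerates every binary càdlàg step function whose jumps lie among the jump locations of f and minimizes E_γ; exactly the two projections named above attain the minimum.

```python
from itertools import combinations

def value(seq, t):
    return [s for t0, s in seq if t0 <= t][-1]

def dist(f, g, M):
    cuts = sorted({0.0, M} | {t for t, _ in f} | {t for t, _ in g})
    return sum(b - a for a, b in zip(cuts, cuts[1:])
               if value(f, a) != value(g, a))

def E(f, g, M, gamma):
    return dist(f, g, M) + gamma * (len(g) - 1)

# f = 1_[0.35,0.45) + 1_[0.55,inf), truncated to [0, 1] for this sketch
f = [(0.0, 0), (0.35, 1), (0.45, 0), (0.55, 1)]
gamma, M = 0.2, 1.0
jump_times = [0.35, 0.45, 0.55]

best, argmins = float("inf"), []
for start in (0, 1):
    for r in range(len(jump_times) + 1):
        for times in combinations(jump_times, r):
            g, s = [(0.0, start)], start
            for t in times:          # binary: each jump flips the state
                s = 1 - s
                g.append((t, s))
            e = E(f, g, M, gamma)
            if e < best - 1e-12:
                best, argmins = e, [g]
            elif abs(e - best) <= 1e-12:
                argmins.append(g)
```

Running this yields two minimizers, the truncated versions of 1_{[0.35,+∞)} and 1_{[0.55,+∞)}, both with penalized error 0.3.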
We close with a remark regarding the influence of extreme values of γ on the projection f̂.

Remark 2.1. Let f ∈ T. If γ = 0, then f̂ = f is the only projection of f. If γ = ∞ and E_γ(f, g) < ∞ for some function g ∈ T, then g is constant; consequently the projection f̂ is constant and equal everywhere to the most common state of f.

Connection with the shortest path problem
In this section we devise a method for finding a projection in an efficient manner. It will be shown that the problem of finding the shortest path in a particular graph is equivalent to the minimization problem (4). This is possible thanks to lemmas 2.1 and 2.2, which narrow down the set of possible solutions to a finite set.
First, we present a lemma which further characterizes a projection of f .
The proof of lemma 2.3 can be found in the appendix.

Remark 2.2. If n > 2 and all states are shorter than 2γ (with the exception of the first and the last state), there exists a projection such that the second and the second-to-last jump of the original function are not present in it.
Remark 2.2 allows us to ignore the second and the penultimate jump of the original function when searching for jump locations in the projection. The proof of this remark can be found in the appendix.
Without loss of generality we will assume that f has n ≥ 2 jumps at time points t_i for i = 1, ..., n:

f = Σ_{i=0}^{n} s_i 1_{[t_i, t_{i+1})},

where s_i ∈ S for i = 0, ..., n and s_i ≠ s_{i+1} for i = 0, ..., n − 1. We use the following notation: t_0 = −∞, t_{n+1} = ∞. In light of lemma 2.3 we assume that

t_{i+1} − t_i < 2γ    (5)

for i = 1, ..., n − 1. If this is not the case, then the function needs to be split up into several parts in such a way that equation (5) holds in each part.
We will now define a graph for the purpose of showing the connection between the problem of finding a projection f̂ and the problem of finding a shortest path in a directed graph. Let G = (V, A) be a directed graph such that the set of vertices V is given by

V = {t_0, t_1, ..., t_n, t_{n+1}}    (6)

and the set of directed arcs is given by

A = {(t_k, t_l) : 0 ≤ k < l ≤ n + 1, t_l − t_k ≥ γ}.    (7)

There is a correspondence between each path from t_0 to t_{n+1} and a sequence of jumps in the interval (t_1 − γ, t_n + γ). A path (t_0, t_{l_1}, ..., t_{l_m}, t_{n+1}) can be associated with a function g with jumps at t_{l_1}, ..., t_{l_m}, such that g(t_{l_k}) is the most common value of f in the interval [t_{l_k}, t_{l_{k+1}}). The definition (7) of the set of directed arcs ensures that every path in the graph G corresponds to at least one function in G_γ.
We now introduce a weight function W : A → R_+ ensuring that the cost of a path coincides with the error E_γ(f, ·) of the corresponding function in the interval (t_1 − γ, t_n + γ). Let I_k = t_{k+1} − t_k for k = 0, ..., n. It is noteworthy that I_0 = I_n = ∞, while I_k < 2γ for k = 1, ..., n − 1. We introduce the penalty for a jump φ_k = γ for k = 1, ..., n and φ_{n+1} = 0. Now we define the weight function W:

W((t_k, t_l)) = ∫_{t_k}^{t_l} d(f(t), s_{kl}) dt + φ_l    (8)

for (t_k, t_l) ∈ A, where s_{kl} represents the most common state of the original function f in the interval [t_k, t_l). The first term equals the contribution of the interval [t_k, t_l) to dist(f, g). The second term adds a penalty for the jump at t_l if t_l is finite (the penalty for the jump at t_k was added on a previous arc in the path, if k > 0).
Theorem 2.1 (Problem equivalence). Let γ > 0 and let (t_1, ..., t_n) be the only discontinuities of a function f ∈ T. Let G = (V, A, W) be the weighted, directed graph defined in (6), (7) and (8) above. The task of finding a projection of f onto G_γ, as defined in (4), is equivalent to finding a shortest path from t_0 to t_{n+1} in the graph G.
The proof of the theorem can be found in the appendix. We will now illustrate the method with an example.
Given γ = 0.2 and S = {0, 1, 2, 3}, consider a function f ∈ T together with the graph G, as defined in (6), (7) and (8); the shortest path from t_0 to t_{n+1} then yields the projection f̂ (in this case, it can be shown that f̂ is the only projection of f).
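Since the graph is a DAG ordered by time, the shortest path can be found by plain dynamic programming. The sketch below is our own adaptation to a finite observation window [0, M]: candidate jump locations are restricted to the jumps of f (as justified by lemma 2.1), each kept interval receives its most common state, and a penalty γ is charged per retained interior breakpoint. Function names and the (start_time, state) representation are our own.

```python
def project(f, M, gamma):
    """Shortest-path projection of f (a list of (start_time, state) pairs
    beginning at time 0) on [0, M]: minimizes dist(f, g) + gamma * |J(g)|."""
    cuts = [t for t, _ in f] + [M]      # t_0 = 0, breakpoints of f, t_{n+1} = M

    def value(t):
        return [s for t0, s in f if t0 <= t][-1]

    def majority_and_cost(a, b):
        """Most common state of f on [a, b) and the time f spends away from it."""
        pts = [t for t in cuts if a <= t < b]
        durs = {}
        for p, q in zip(pts, pts[1:] + [b]):
            durs[value(p)] = durs.get(value(p), 0.0) + (q - p)
        s = max(durs, key=durs.get)
        return s, (b - a) - durs[s]

    n1 = len(cuts) - 1
    best = [float("inf")] * (n1 + 1)    # best[k]: optimal cost on [0, cuts[k])
    best[0], back = 0.0, [None] * (n1 + 1)
    for l in range(1, n1 + 1):
        for k in range(l):
            _, c = majority_and_cost(cuts[k], cuts[l])
            cost = best[k] + c + (gamma if k > 0 else 0.0)  # jump at cuts[k]
            if cost < best[l]:
                best[l], back[l] = cost, k
    ks, l = [], n1                      # recover the kept breakpoints
    while l is not None:
        ks.append(l)
        l = back[l]
    ks.reverse()
    g = [(cuts[k], majority_and_cost(cuts[k], cuts[l])[0])
         for k, l in zip(ks, ks[1:])]
    return g, best[n1]
```

Applied to the two-jump example of section 2.1 (truncated to [0, 1]) it recovers one of the two tied projections at cost 0.3.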

Binary case
In case the set of states S consists of only two elements, a stronger result than lemma 2.2 can be achieved. The main advantage of the binary case comes from the fact that we do not need to specify the sequence of states: knowing the starting state, each jump signifies a move to the only other available state. First, we present a supporting result which further strengthens the relation between the jumps of a function from T and its projection.
For the remainder of the section, we will always assume that S = {0, 1}.
Lemma 2.4. Let γ > 0 and f ∈ T. If a function g ∈ G_γ contains a jump j ∈ J(f), but in the opposite direction to the one in f, then g cannot be a projection of f onto G_γ.
Lemma 2.5. Let γ > 0 and f ∈ T. Any solution f̂ of problem (3) belongs to G_{2γ}.

The proofs of lemma 2.4 and lemma 2.5 can be found in the appendix. Lemma 2.5 leads to the equivalence of problem (4) with the minimization problem

f̂ ∈ arg min_{g ∈ G_{2γ}} E_γ(f, g).    (9)

The strengthening of lemma 2.1 by restricting not only the locations of the jumps but also their directions is a favorable change, as it narrows the set of possible solutions. Lemma 2.6 potentially reduces the number of jumps that have to be considered in the post-processing. Moreover, lemma 2.4 reduces the number of arcs when building the graph, making the process of finding the shortest path more efficient.
The directed graph G has a set of directed arcs different from (7):

A = {(t_k, t_l) : 0 ≤ k < l ≤ n + 1, t_l − t_k ≥ 2γ, with l − k odd whenever 1 ≤ k and l ≤ n}.    (10)

Theorem 2.2 (Problem equivalence, binary version). Let γ > 0 and let (t_1, ..., t_n) be the only discontinuities of a function f ∈ T. Let G = (V, A, W) be the weighted, directed graph defined in (6), (10) and (8). The task of finding a projection of f onto G_{2γ}, as defined in (9), is equivalent to finding a shortest path from t_0 to t_{n+1} in the graph G.
The proofs of lemma 2.6 and theorem 2.2 can be found in the appendix.
3 Incorporating domain knowledge into the performance measure of classification

3.1 Problem-specific requirements on the performance measure

In order to choose an appropriate performance measure for a given classification task, it is important to understand the problem-specific demands on the result. The standard distance (1), which can be understood as a continuous analogue of the most common performance metric, the misclassification rate, can often be inadequate for comparing classification results: it is a one-size-fits-all metric, and if more is known about the problem it might not represent the idea of accuracy that users have in mind. On the other hand, there have been other approaches to performance metrics, e.g. Ward et al. (2011). Their approach focuses on characterizing the error in terms of the number of inserted, deleted, merged and fragmented events. Event fragmentation occurs when an event in the true labels is represented by more than one event in the estimated labels, whereas merging refers to several events in the true labels being represented by a single event in the estimated labels. Ward et al.
(2011) provide an overview of different performance metrics used in activity recognition, proposing a solution to the problem of timing uncertainty as well as event fragmentation and merging. Their solution is based on segments, which are intervals in which neither the true labels nor the estimated labels change state. If the state in the estimate and the state in the true labels agree in a given segment, they denote it as correctly classified. Otherwise, the segment is classified accordingly as a fragmenting, inserted, deleted or merged segment. This provides a deeper level of error characterization, which is then used in different metrics of classifier performance. Their vector-valued performance metric is preferable when an in-depth overview of the types of mistakes made by the classifier is needed. We will introduce a novel scalar-valued performance metric, which can be compared easily and includes problem-specific information such as timing uncertainty in the labels.
In this section, we aim at highlighting the main characteristics of the classification of movements based on wearable sensors and at translating them into specific requirements on the performance measure. Our first requirement comes from physical restrictions. The states considered in our application represent human activities, but also in more general contexts events often cannot be arbitrarily short; there is a lower bound on the length of the events in a state sequence. Hence, estimated labels that violate this lower bound indicate bad performance. The lower bound condition requires two parameters: the lower bound itself and the penalty for each violation. The lower bound can either be estimated or determined from domain knowledge, while the penalty can be chosen more freely. Through physical restrictions we can see a deeper connection with the method introduced in section 2. It is clear that standard classification methods cannot ensure that the state sequence contains only events longer than a certain level. The post-processing method addresses this issue directly, and as a consequence we can expect classifiers to benefit from it in the context of the new performance measure.
The issue of timing uncertainty should also be addressed when designing the performance measure. To illustrate its importance more clearly, we present an example. Five people were asked to detect boundaries between activities in different time series using a visualization tool. Given sensor data, the tool outputs an animated stick-figure model.
Three time series were selected, each with one of the following activities: running, jumping and kicking a ball. The start and the end of each activity were recorded by the participants. The experiment indicates that there is indeed uncertainty regarding the state transitions. Granted that the sample size is very small, we notice more variation in the results referring to the ends of activities than to the beginnings. Additionally, we see more variation in the results for the kick than for the jump. So the boundaries of some activities seem to be more difficult to identify than those of others.

Globally Time-Shifted distance
The standard distance (1) is an unsatisfactory measure for comparing two state sequences, since it does not incorporate the requirements posed in the previous section. In order to improve it, we start by modelling the timing uncertainty.
Let f ∈ T be the true labels process and let f have n discontinuities t_1, ..., t_n. The locations of the discontinuities are corrupted by additive noise:

t_i = T_i + X_i for all i = 1, ..., n,

where T_i is the true and unknown location of the i-th jump. In this section we will assume that X_1 = X_2 = ... = X_n (all jumps are moved by the same value; a global time shift), although in general it is more realistic to assume that X_1, ..., X_n are independent random variables. We will relax this condition later.
We define a class of Globally Time-Shifted distances (GTS distances), loosely inspired by the Skorokhod distance on the space of càdlàg functions (Billingsley, 1999, p. 121). The GTS distances are parametrized by two parameters: w controls the weight of misclassification arising from the uncertainty of the true labels, while σ controls the magnitude of the shift of activities.
Definition 3.1 (Globally Time-Shifted distance). Let f, g ∈ T. Given w ≥ 0, σ > 0 and a metric d on S, we define the Globally Time-Shifted distance in terms of time shifts, where for ε > 0 the time shift τ_ε : R → R is defined by τ_ε(t) = t − ε.

Depending on the choice of parameters the GTS distance possesses certain properties. For w > 0 and σ = ∞, the GTS distance is an extended metric (one that may take the value ∞), and a proof of this fact is given in the appendix. If w > 0 and σ < ∞, then it is a semimetric, meaning that it has all the properties required of a metric except the triangle inequality.
The main downside of using the GTS distance is its unrealistic assumption on the timing uncertainty. However, if we know that the true labels preserve the true state durations, then it is a good choice. Consider a function f ∈ T with two state transitions located at t_1 and t_2. Let g ∈ T also feature two state transitions, located at t_1 − τ_1 and t_2 − τ_2. If τ_1 ≠ τ_2, then there is no global time shift that can align the functions f and g. This implies that the true state durations need to be preserved in the estimate in order to align the functions using a global time shift.

Locally Time-Shifted distance and the Duration Penalty Term
The global time shift stresses the state durations, which is not always desirable: for instance, if the true labels do not preserve the real state durations, or if the additive noise terms in the locations of the jumps are independent. Here is an example: figure 2 shows f and its approximations g_i for i = 1, 2, 3. It is impossible to align f with any of the g_i with a single time shift; however, it would be possible if each state transition could be shifted 'locally'. Naturally, to accommodate this issue, a suitable modification is to replace the one global time shift with multiple local time shifts. We now introduce a measure of closeness between state sequences which conceptually can be seen as derived from the GTS distance. We will be working with sequences of jumps; more specifically, given two sequences of state boundaries, we combine them and sort the resulting joint sequence in increasing order. Subsequent pairs of values in this sequence determine segments, understood as in Ward et al. (2011). We weigh different types of segments, and the result is a weighted average of segment lengths, which is intended to reflect the error magnitude of the classifier.
We define segments formally and introduce a new distance on T .
Definition 3.2 (Segments). Let f, g ∈ T. The elements of the smallest partition of R such that in each element of the partition neither f nor g changes state will be called segments.
Since functions from T are piecewise constant and have a finite number of discontinuities, there is always a finite number of segments. The general form of the segments that we will use is as follows:

(−∞, a_1), [a_1, a_2), ..., [a_{l−1}, a_l), [a_l, +∞),    (11)

where a_1 < a_2 < ... < a_l, provided f and g are not both constant on the real line.
Otherwise there is only one segment, consisting of the whole real line. By convention, a_0 = −∞ and a_{l+1} = ∞.

Definition 3.3 (Locally Time-Shifted distance). Let w ≥ 0, σ > 0 and let d be a metric on S. Let f, g ∈ T and let their set of segments be denoted as in (11). We define the Locally Time-Shifted distance (LTS distance) segment by segment, weighting the contribution of each segment.

Similarly to the GTS distance, the parameter w controls the weight of misclassification arising from the uncertainty of the true labels. The case w < 1 is the more interesting to us, since it corresponds to timing uncertainty in the labels. If w ≥ 1, then we put more importance on the timings of the jumps (the opposite of timing uncertainty). The LTS distance is an extended semimetric for w > 0 (for a proof, see the appendix); the triangle inequality does not hold in general.
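On a finite window [0, M] the segments of Definition 3.2 are determined by the merged jump locations of the two functions. A minimal sketch (our own code, using the (start_time, state) pair representation assumed throughout our examples):

```python
def segments(f, g, M):
    """Return the segments [a_i, a_{i+1}) induced on [0, M) by the jumps of
    the step functions f and g (each a list of (start_time, state) pairs)."""
    bounds = sorted({0.0, M}
                    | {t for t, _ in f if 0.0 < t < M}
                    | {t for t, _ in g if 0.0 < t < M})
    return list(zip(bounds, bounds[1:]))
```

For example, if f jumps at 0.35, 0.45 and 0.55 and g jumps at 0.4, the window [0, 1) splits into five segments.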
The LTS distance addresses the issue of timing uncertainty in the true labels. Let ζ > 0 be the lower bound on the lengths of the events, as determined by domain knowledge (or through estimation if possible), and let λ > 0 be the penalty for each violation of the lower bound condition. For f ∈ T with discontinuities t_1, ..., t_n, we introduce a duration penalty term:

DP(f) = λ · |{i ∈ {1, ..., n − 1} : t_{i+1} − t_i < ζ}|.

This term will allow us to lower the performance of classifications with unrealistically short events.
In practice, we need to extend the functions to the real line in order to use the LTS distance, as it is defined for functions whose domain is the whole of R. One natural extension would be to extend the first and the last state of each function indefinitely. However, this solution leads to a problem. Let M > 0 and consider two functions f : [0, M] → S and g : [0, M] → S such that f(t) ≠ g(t) on [0, a) for some 0 < a < M. No matter how small a is, the distance between the extended f and g will always be infinite under this extension, since the extended functions are in different states on the whole half-line (−∞, a). Both functions need to be extended by the same state for the distance to be finite. We therefore extend any function f defined on the interval [0, M] to the real line by setting its value to an arbitrary, common state outside of [0, M). The distance is independent of the chosen state, as the extensions of f and g agree on the infinite segments this introduces. Without loss of generality, we choose state 1.
Notice that this extension does not suffer from the problem stated above, as f* and g* are equal on the segments that it introduces, and it does not change the values on the original segments regardless of the choice of the state outside of [0, M]. We combine the LTS distance and the duration penalty term to define the LTS measure of closeness of two state sequences.

Definition 3.4. Let f be the function of true labels and g its estimate, both defined on [0, M]. The LTS measure is defined as

exp(−(LTS(f*, g*)/M + DP(g))),

where DP denotes the duration penalty term. The scaling through the division by M normalizes the LTS distance to the interval [0, 1]. The transformation [0, +∞) ∋ x ↦ exp(−x) ∈ (0, 1] maps the sum of the LTS distance and the duration penalty term to the interval (0, 1], while reversing the order: g is closer to f if the LTS measure is closer to 1.
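Our reading of this construction can be sketched as follows: the duration penalty adds λ for every event shorter than ζ, and the measure maps the normalized LTS distance plus the penalty through x ↦ exp(−x). The LTS distance itself is passed in as a precomputed value, and the function names are our own.

```python
import math

def duration_penalty(g, M, zeta, lam):
    """lambda times the number of events of g on [0, M] shorter than zeta
    (our reading of the duration penalty term; boundary events included)."""
    bounds = [t for t, _ in g] + [M]
    return lam * sum(1 for a, b in zip(bounds, bounds[1:]) if b - a < zeta)

def lts_measure(lts_dist_value, g, M, zeta, lam):
    """Map the normalized LTS distance plus the duration penalty of the
    estimate g into (0, 1]; values closer to 1 mean a better estimate."""
    return math.exp(-(lts_dist_value / M + duration_penalty(g, M, zeta, lam)))
```

A perfect estimate with no short events scores exactly 1; extra distance or short events both push the measure towards 0.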
Simulation study

f will be referred to as the correct labels. We introduce noise into f in the following manner:
• two sequences of i.i.d. random variables {Y_k} and {Z_k} are considered, with Y_k ∼ Exp(µ_1) and Z_k ∼ Exp(µ_2) for some parameters µ_1, µ_2 > 0,
• {Y_k} represents the time spent in the correct state, while {Z_k} represents the time spent in an incorrect state,
• we use the sequence Y_1, Z_1, Y_2, Z_2, ... to generate noisy labels, where the sequence ends when the sum of all drawn numbers exceeds 60 seconds,
• for each variable Z_i an incorrect state is chosen randomly out of the remaining two, and f is changed to that state on the interval [Σ_{k=1}^{i} Y_k + Σ_{k=1}^{i−1} Z_k, Σ_{k=1}^{i} (Y_k + Z_k)),
• µ_1 and µ_2 control the durations of the states.
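The noise mechanism above can be sketched as follows, assuming (as in our reading) that µ_1 and µ_2 are the mean durations, that there are three states, and, for simplicity, that the correct label is constant over the window; the names and the (start_time, state) representation are our own.

```python
import random

def noisy_labels(correct_state, mu1, mu2, horizon=60.0, rng=random):
    """Corrupt a constant correct label with exponentially long incorrect
    bouts, returned as (start_time, state) pairs on [0, horizon]."""
    states = {0, 1, 2}
    labels, t = [(0.0, correct_state)], 0.0
    while True:
        t += rng.expovariate(1.0 / mu1)       # Y_k: time in the correct state
        if t >= horizon:
            break
        wrong = rng.choice(sorted(states - {correct_state}))
        labels.append((t, wrong))             # switch to a random wrong state
        t += rng.expovariate(1.0 / mu2)       # Z_k: time in the wrong state
        if t >= horizon:
            break
        labels.append((t, correct_state))     # switch back
    return labels
```

Passing a seeded `random.Random` instance as `rng` makes a simulation run reproducible.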
As our performance measure we choose the LTS measure with parameters w = 0.6, σ = 0.35, λ = 0.0001, ζ = 0.5 and d = ρ. The post-processing is performed on the noisy labels with parameter γ = 0.5 s. To demonstrate the utility of the post-processing procedure, we draw the noisy functions 1000 times for a given set of parameters (µ_1, µ_2) and compare the accuracy of the noisy labels, the LTS measure of the noisy labels and the LTS measure of the post-processed labels.
In the second setting, we fix µ 1 = 1s.The procedure is repeated for µ 2 ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. Figure 4 shows the mean accuracy of the noisy labels and the mean LTS measure of both the noisy labels and the post-processed labels.
Both experiments show an improvement in the LTS measure thanks to the use of post-processing. We also conclude that the post-processing method behaves better when dealing with multiple shorter incorrect intervals rather than fewer longer ones.
We conclude that the parameter γ can influence the LTS measure of the post-processed functions ĝ_i. Careful consideration is needed: too low a value of γ will lead to accepting unrealistically short events, while too high a value of γ will eliminate events that are present in the true labels. In our case a value of γ between 0.5 and 1 is the most favourable. In practice, the minimal length of the events in the true labels will inform the choice of γ.
We finish the simulation study with a look at the parameters of the LTS measure. We investigate the weight w first. Let all the other parameters of the LTS measure be set to σ = 0.35, λ = 0.0001, ζ = 0.5, and fix µ_1 = 0.1, µ_2 = 0.08, γ = 0.5. The procedure is repeated for 13 different values of w. Figure 6 shows the mean LTS measure of the post-processed labels.
We conclude that the choice of w is of lesser importance, as its effect on the LTS measure is minimal: all the values on the y-axis of figure 6 are quite close together.
Parameters σ and ζ are not subjected to the same procedure, as they have a clear interpretation and can be chosen based on domain knowledge. Hence, the only parameter left to investigate is λ. As before, we fix µ_1 = 1, µ_2 = 0.8, γ = 0.5, and we choose w = 0.6. The procedure is repeated for values of λ between 0 and 0.1. Figure 7 shows the mean LTS measure of the post-processed labels. We can see that high values of λ can influence the LTS measure significantly; hence choices below 0.01 are preferable, as we do not want the penalty term to overshadow the LTS distance.

Application to football dataset
We will now demonstrate the benefits of post-processing by projection in a real-life setting, using the LTS measure to compare different classification methods. Wilmes et al. (2020) give an extensive description of the football dataset, of which we give a short summary below.
Eleven amateur football players participated in a coordinated experiment at a training facility of the Royal Dutch Football Association of The Netherlands. Five Inertial Measurement Units (IMUs) were attached to 5 different body parts: left shank (LS), right shank (RS), left thigh (LT), right thigh (RT) and pelvis (P). Each IMU sensor contains a 3-axis accelerometer (Acc) and a 3-axis gyroscope (Gyro). Athletes were asked to perform exercises on command, e.g. 'jog for 10 meters' or 'long pass'. For each athlete and exercise this resulted in a 30-dimensional time series (5 body parts times 6 features per IMU) of length varying from 4 to 14 seconds. Each athlete performed 70-100 exercises, which amounts to nearly 900 time series, each with a sampling frequency of 500 Hz. Time series are labelled with the command given to the athlete, but other activities are still performed in each of the time series, for example standing still. This causes a problem: ignoring standing periods and treating them as part of the main signal pollutes the data and lowers the quality of the classification. To show the advantages of post-processing by projection, we select only two states, 'standing' and 'other activity', encoded as 0 and 1, respectively. 15 time series (representative of all possible actions performed by the athletes) were manually labelled time point by time point in order to be able to train classifiers, and these will form our sample.

Fig. 5: The line shows the LTS measure of the post-processed labels drawn for 8 different values of γ. The mean accuracy of the noisy labels was 0.555 and the mean LTS measure of the noisy labels was 0.602.
In pre-processing we use the sliding window technique on the sensor signals (Dietterich, 2002). This method transforms the original raw data using windows of fixed length d and a statistic of choice T: given a time point t, its neighbourhood of size d is fed to the statistic T for each variable separately. Performing this procedure for each time point results in a time series of the same dimension as the original one, but every observation is equipped with some knowledge about the past and the future through the statistic T and through the neighbourhoods of size d. The statistic T needs to be chosen with care, since the sensors are highly correlated with each other. The information about standing contained in one variable is comparable to that in another; namely, the variance of the signal is low when the person is standing (differences can occur between legs: a low variance on one leg might be misleading, since the other leg might already be transitioning into another position). 10-fold cross-validation will be performed in order to select the best performing classification method out of the 7 standard machine learning methods mentioned later in the paper. The typical approach to k-fold cross-validation, training on k − 1 folds, cannot be applied here, since a single time series is not a representative sample of the different types of events. Instead, all 15 time series will be used: in each iteration 10 time series will be randomly chosen for training and 5 for testing. The results are shown for post-processed classifiers, unless specified otherwise. Before cross-validation can be performed, we need to fix the parameters of the performance measure introduced in section 2.
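As an illustration, the window transformation described above might be sketched as follows. This is a minimal sketch: the fixed window length d and the choice of T as the standard deviation follow the text, while the function name and the truncation at the series boundaries are our own assumptions.

```python
from statistics import pstdev

def sliding_window(x, d):
    """Summarise each time point by a statistic T (here the standard
    deviation) of its length-d neighbourhood, truncated at the series
    boundaries, so the output has the same length as the input."""
    half = d // 2
    out = []
    for t in range(len(x)):
        lo, hi = max(0, t - half), min(len(x), t + half + 1)
        out.append(pstdev(x[lo:hi]))
    return out
```

Applied to each of the 30 variables separately, a low output value flags a segment where the signal barely varies, such as a standing period.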
The parameters of the LTS measure are chosen as follows:
• We have limited information regarding how uncertain the locations of state transitions are, but based on the small experiment described in section 3.1 we select σ = 0.35 (the largest deviation between different true labels).
• The parameter w is chosen as 0.6; as shown in section 4.1, its exact choice is not critical.
• The lower bound γ on the duration of activities is set to the length of the shortest activity in the learning dataset, which is 0.8 s in our case.
• The penalty λ represents the cost of additional or missing jumps in a state sequence compared to the true labels. We choose λ = 0.01 so that the penalty term does not overshadow the LTS distance (more details in section 4.1, specifically figure 7).
Before assessing classifiers on the training set, one needs to consider an appropriate feature set. Our variables are highly dependent on one another, so we start with feature selection. We perform feature ranking using the Relieff algorithm and select the 6 most relevant features based on the Relieff weights (more details on the method in Kononenko et al. (1997)). Then we test all possible combinations of these features, which is now computationally feasible, in order to find the best set for each of the classifiers. The features selected by the Relieff algorithm are RTGyroX, RTGyroY, RTAccX, RTAccZ, LTAccY and PAccY, where the naming convention is as follows: RTGyroX refers to the x-axis of the gyroscope located on the right thigh.
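The exhaustive search over combinations of a handful of pre-ranked features could be sketched as below. The `score` callback, standing in for a cross-validated classifier evaluation, is an assumption; with 6 features there are only 63 non-empty subsets to test.

```python
from itertools import combinations

def best_feature_subset(features, score):
    """Exhaustively evaluate every non-empty subset of a small,
    pre-ranked feature list and return the best-scoring one."""
    best_subset, best_score = None, float("-inf")
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            s = score(subset)
            if s > best_score:
                best_subset, best_score = subset, s
    return best_subset, best_score
```

This brute-force step is only feasible because a ranking method such as Relieff has already reduced the candidate set to a few features.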
Proceeding with the cross-validation, we select the following classifiers (with their abbreviations) to be assessed: DT - Decision Tree, kNN - k-Nearest Neighbors, LR - Logistic Regression, MLP - Multi-layer Perceptron, NB - Naive Bayes, RF - Random Forest, SVM - Support Vector Machine. The results of the 10-fold cross-validation are shown in table 2. It is striking that the test scores of the post-processed classifiers are at most 0.028 apart. This is due to post-processing by projection: the correction it provides brings all classifiers closer together. This result extends even further. The test score of a decision tree ranges from 59% to 86% for different sensor sets before post-processing, while the post-processing results in a range of test scores from 93% to 96.5%, and this is not specific to decision trees.
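The series-level splitting used above (10 whole time series for training, 5 for testing, repeated over 10 folds) could be sketched like this; the `seed` parameter is an assumption added for reproducibility. Splitting by whole series rather than by time point keeps samples from one exercise from appearing in both sets at once.

```python
import random

def series_cv_splits(n_series=15, n_train=10, n_folds=10, seed=0):
    """Yield (train, test) index lists: in each fold, n_train whole
    time series are drawn at random and the rest are held out."""
    rng = random.Random(seed)
    indices = list(range(n_series))
    for _ in range(n_folds):
        train = sorted(rng.sample(indices, n_train))
        test = [i for i in indices if i not in train]
        yield train, test
```

Each fold then trains every candidate classifier on the 10 selected series and evaluates it on the remaining 5.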
The example shows that the post-processing is crucial. Firstly, it increases the accuracy of a given estimator on a given feature set by up to 35 percentage points. Secondly, it diminishes the impact of feature selection, as the difference in accuracy between different feature subsets decreases substantially. Feature selection is of course still important, as it decreases the computational complexity of the problem and removes redundancy from the feature set. However, with methods that only rank features, such as Relieff, the choice of the threshold used to classify a feature as significant becomes less important. Finally and most importantly, the post-processing by projection makes it possible to select a method according to criteria other than performance, such as computational speed.

Conclusion
In this paper we have introduced a post-processing scheme that improves state estimates. It finds estimated activities that are too short and eliminates them in an optimal way by finding the shortest path in a directed acyclic graph.
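For a binary label sequence, the elimination of too-short activities can also be sketched as a simple dynamic program. This is our own illustrative formulation (the function name, the O(n²) recursion and the assumption that the sequence is at least m samples long are ours), not the paper's shortest-path implementation, although the two compute equivalent projections.

```python
def project_min_run(labels, m):
    """Return the sequence closest to `labels` (fewest changed points)
    in which every run of 0s or 1s lasts at least m samples (len >= m)."""
    n = len(labels)
    prefix = [0]
    for v in labels:
        prefix.append(prefix[-1] + v)            # running count of ones

    def seg_cost(state, j, i):                   # mismatches if labels[j:i] := state
        ones = prefix[i] - prefix[j]
        return ones if state == 0 else (i - j) - ones

    INF = float("inf")
    # dp[i][s]: min cost to relabel labels[:i] with all runs >= m,
    # the last run having state s; parent stores where that run starts
    dp = [[INF, INF] for _ in range(n + 1)]
    parent = [[None, None] for _ in range(n + 1)]
    dp[0] = [0, 0]
    for i in range(m, n + 1):
        for s in (0, 1):
            for j in range(0, i - m + 1):
                if dp[j][1 - s] == INF:
                    continue
                c = dp[j][1 - s] + seg_cost(s, j, i)
                if c < dp[i][s]:
                    dp[i][s], parent[i][s] = c, j
    # backtrack the optimal segmentation into a label sequence
    s = 0 if dp[n][0] <= dp[n][1] else 1
    out, i = [], n
    while i > 0:
        j = parent[i][s]
        out[:0] = [s] * (i - j)
        i, s = j, 1 - s
    return out
```

A lone spike shorter than m samples is relabelled, while runs that already satisfy the minimum duration are left untouched.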
A simulation study was conducted to assess the benefits brought by the post-processing method. Generated noisy labels are improved by the post-processing. The positive effect on the LTS measure is more pronounced when the noisy sequence contains a larger number of short misclassified intervals.
Real-life football sensor data were used to assess the adequacy of the post-processing scheme in a more realistic setting. It significantly improved the performance of the classifiers. At the same time, the post-processed classifiers are closer to each other in performance than the original ones. This allows placing more importance on other criteria, such as the computational speed of a method. It should be noted that post-processing cannot correct for uncertainty in the classification result of the estimators. It can be seen in figures 3 and 4 that, as a rule of thumb, the worse the original estimate, the worse the post-processed one (although there are cases where this is reversed). However, most importantly, the results of the application to the football dataset are promising. The post-processing by projection was able to improve estimators with accuracies ranging from 59% to 86% to scores between 93% and 96.5%. It is notable that the lowest score achieved by the post-processed estimates for a given classification method is still higher than the highest score of the original estimates.
Our second contribution is a set of novel measures of classifier performance in the task of activity recognition using wearable sensors. They address the issue of timing offsets as well as unrealistic classifications, while retaining the typical scalar output of a performance measure, allowing for easy comparisons between classifiers.

Let f̂ be a solution of problem 3 for a given function f. Assume that for a certain γ′ < γ we have f̂ ∈ G_γ′ but f̂ ∉ G_γ. Hence, by lemma 2.1, there exist two jumps t_k and t_l of f̂ and f such that γ′ < t_l − t_k < γ. Since the state lasts less than γ, it can be removed (in the sense that one of the jumps is removed and either the previous or the following state becomes longer by t_l − t_k) with a gain in error of less than γ and a decrease in error of exactly γ, which means we have found a function with a lower error than f̂. This contradiction ends the proof.

Proof of lemma 2.3
Let f be a function with two neighbouring jumps t_1, t_2 and the state s_1 between them. Assume t_2 − t_1 ≥ 2γ. Since the interval is longer than or equal to 2γ, it satisfies the condition of the class G_γ. Let us assume that the projection f̂ of f contains two neighbouring jumps t_a and t_b such that t_a ≤ t_1 < t_2 ≤ t_b and the state in the interval [t_a, t_b) is s_2 ≠ s_1. We introduce the notation α := t_1 − t_a and β := t_b − t_2. If α, β ≥ γ, then introducing the jumps at t_1 and t_2 with the state s_1 between them is possible, because the condition of the class G_γ is satisfied. Moreover, the error is decreased if t_2 − t_1 > 2γ and is not increased if t_2 − t_1 = 2γ. If α ≥ γ and β < γ, then introducing a jump at t_1 such that the state following it is s_1 is possible, and the error is decreased. The case α < γ and β ≥ γ is analogous. If α, β < γ, then changing the state s_2 to s_1 reduces the error.
In all cases we have shown that there exists a projection that does not change a state lasting longer than 2γ.

Proof of remark 2.2
Let f̂ be a projection of f onto G_γ. Let t_1 and t_2 be the first two jumps of the original function f, and let s_1 and s_2 be its first two states. Assume that t_2 − t_1 < 2γ. If f̂ had a jump at t_2 from the state s_1, then a function g equal to f̂ outside of the interval [t_1, t_2), but with the jump from state s_1 moved to the location t_1, has the same or lower error than f̂. If f̂ had a jump at t_2 from a state s_i ≠ s_1, then the error is infinite (since the value of f̂ differs from f on the interval (−∞, t_1)) and f̂ cannot be a projection.
The argument is analogous for the penultimate jump.
Proof of theorem 2.1
We use lemma 2.1 to prove that a projection of a function from T onto G_γ can only have jumps at the same positions as the jumps of the original function. This implies that finding the shortest path in the graph is equivalent to finding f̂.

Proof of lemma 2.4
Let f̂ be a projection of f onto G_γ. Let t_k and t_{k+1} be two consecutive jumps of f. Assume that f̂ contains a jump at t_k, but in the opposite direction than in f. From lemma 2.1 we know that the next jump of f̂ can occur at the earliest at t_{k+1}. This means that on the interval [t_k, t_{k+1}) the projection f̂ equals 1 − f. In this case, moving the jump at t_k to t_{k+1} (or, in the case t_{k+1} ∈ J(f̂), removing both jumps) reduces the error by t_{k+1} − t_k. Hence we conclude that a jump of f can only be present in its projection if it is in the same direction as in f.

Fig. 1: Graph G constructed for the function f

Lemma 2.6. Let f ∈ T. Suppose f ≡ c on an interval [a, b] for some constant c ∈ R. If b − a > γ, then f̂ ≡ c on [a, b]. If b − a = γ, then there exists a projection f̂ such that f̂ ≡ c on [a, b].

Fig. 2: The function f represents the true labels with an uncertainty around state boundaries; the g_i are the approximations of f

Fig. 3: The dashed line shows the mean accuracy of the noisy labels, the dotted line shows the LTS measure of the noisy labels and the solid line shows the LTS measure of the post-processed labels. All lines are drawn for 9 different values of µ_2.

Fig. 4: The dashed line shows the mean accuracy of the noisy labels, the dotted line shows the LTS measure of the noisy labels and the solid line shows the LTS measure of the post-processed labels. All lines are drawn for 9 different values of µ_2.

Fig. 6: The line shows the LTS measure of the post-processed labels, drawn for 13 different values of w. The mean accuracy of the noisy labels was equal to 0.555.

Fig. 7: The line shows the LTS measure of the post-processed labels, drawn for different values of λ. The mean accuracy of the noisy labels was equal to 0.56.

Table 1: The results of the labelling experiment; all times are in seconds. The last two rows show the average and the sample standard deviation for each boundary.

Table 2: Average of the 10-fold cross-validation scores for all classifiers, using the best sensor set for each of them. The pre-processing consisted of the sliding window technique in combination with summarizing by the standard deviation. The OG Test column averages the LTS measure on the test set for the original classifier, while the PP Test column is the same value for the post-processed classifier.