Abstract
The Hodrick–Prescott (HP) filter is a popular trend filtering method for univariate macroeconomic time series such as real gross domestic product. This paper considers a quantile regression version of the HP filter (the qHP filter), a filtering method defined by replacing the quadratic loss function of the HP filter with the quantile regression loss function. One of the essential properties of quantile regression is that, if the regression includes an intercept, the ratio of negative residuals can be approximately controlled. Does the proposed qHP filter also have this property? This paper answers this question. In addition to the main result, we provide an empirical illustration.
1 Introduction
The Hodrick and Prescott (1997) filter is a popular trend filtering method for univariate macroeconomic time series such as real gross domestic product (GDP). For example, it has been used for calculating the composite leading indicators of the Organisation for Economic Co-operation and Development since December 2008 (OECD 2012). Recent studies of the filter include de Jong and Sakarya (2016), Cornea-Madeira (2017), Hamilton (2018), Sakarya and de Jong (2020), Phillips and Jin (2021), Phillips and Shi (2021), Yamada (2015, 2018a, b, 2020, 2022), and Yamada and Jahra (2019). It is defined by
$$\begin{aligned} \min _{x_{1},\ldots ,x_{T}}\sum _{t=1}^{T}(y_{t}-x_{t})^{2}+\psi \sum _{t=3}^{T}(\Delta ^{2}x_{t})^{2}, \end{aligned}$$
(1)
where \(y_{1},\ldots ,y_{T}\) denote T observations of an economic time series, \(\psi \) is a positive smoothing parameter that controls fidelity and smoothness, and \(\Delta \) denotes a difference operator such that \(\Delta x_{t}=x_{t}-x_{t-1}\) and accordingly \(\Delta ^{2} x_{t}=\Delta x_{t}-\Delta x_{t-1}=x_{t}-2x_{t-1}+x_{t-2}\).
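To make the definition concrete: setting the gradient of the objective to zero shows that the HP trend solves the linear system \((\varvec{I}_{T}+\psi \varvec{D}'\varvec{D})\widehat{\varvec{x}}=\varvec{y}\), where \(\varvec{D}\) is the second-order difference matrix introduced below. The following sketch (ours, not from the paper, which provides MATLAB/R code in the Appendix) implements this in Python with NumPy:

```python
import numpy as np

def second_diff_matrix(T):
    """(T-2) x T second-order difference matrix: row i is [1, -2, 1] at cols i..i+2."""
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def hp_filter(y, psi):
    """HP trend: minimizer of ||y - x||_2^2 + psi * ||D x||_2^2,
    obtained by solving (I + psi * D'D) x = y."""
    y = np.asarray(y, dtype=float)
    T = y.size
    D = second_diff_matrix(T)
    return np.linalg.solve(np.eye(T) + psi * (D.T @ D), y)

# A linear series is its own HP trend, since D y = 0 makes the objective zero at x = y.
t = np.arange(1.0, 21.0)
y_lin = 2.0 + 0.5 * t
assert np.allclose(hp_filter(y_lin, 1600.0), y_lin)
```

The dense solve is only for exposition; \(\varvec{I}_{T}+\psi \varvec{D}'\varvec{D}\) is pentadiagonal, so a banded solver is preferable for long series.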
Quantile regression was developed by Koenker and Bassett (1978). It is the regression defined by
$$\begin{aligned} \min _{\varvec{\beta }}\sum _{t=1}^{T}\rho _{\tau }(y_{t}-\varvec{x}_{t}'\varvec{\beta }) \end{aligned}$$
(2)
for observations \(y_{t}\) and regressor vectors \(\varvec{x}_{t}\), \(t=1,\ldots ,T\),
where \(\rho _{\tau }(u)\) denotes the check function defined by
$$\begin{aligned} \rho _{\tau }(u)= {\left\{ \begin{array}{ll} \tau u &{} \hbox { if}\ u\ge 0,\\ (\tau -1)u &{} \hbox { if}\ u<0, \end{array}\right. } \end{aligned}$$
(3)
where \(\tau \in (0,1)\). See Fig. 1, which depicts \(\rho _{\tau }(u)\) for \(\tau =0.1,0.5,0.9\). Thus, it includes least absolute deviations as a special case. One of the essential properties of the \(\tau \) quantile regression above is that, if the regression includes an intercept, the ratio of negative residuals, denoted by N/T, is less than or equal to \(\tau \) and is greater than or equal to \(\tau \) minus the ratio of zero residuals, denoted by Z/T:
$$\begin{aligned} \tau -\frac{Z}{T}\le \frac{N}{T}\le \tau . \end{aligned}$$
(4)
See Theorem 3.4 of Koenker and Bassett (1978) and Theorem 2.2 of Koenker (2005). From (4), for example, if \(\tau =0.1\), then the ratio of negative residuals is at most 10%.
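Property (4) is easy to verify numerically in the simplest, intercept-only case, where the \(\tau \) quantile regression reduces to computing a sample \(\tau \)-quantile. A small self-contained Python sketch (ours; the brute-force minimization over sample values is purely illustrative):

```python
def check_loss(u, tau):
    """Check function: tau * u for u >= 0 and (tau - 1) * u for u < 0."""
    return tau * u if u >= 0 else (tau - 1.0) * u

def sample_quantile(ys, tau):
    """Intercept-only quantile regression: minimize sum_t rho_tau(y_t - q) over q.
    A minimizer can always be found among the sample values themselves."""
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

ys = list(range(1, 101))  # 100 distinct observations
T = len(ys)
for tau in (0.1, 0.5, 0.9):
    q = sample_quantile(ys, tau)
    N = sum(y - q < 0 for y in ys)   # number of negative residuals
    Z = sum(y - q == 0 for y in ys)  # number of zero residuals
    assert N / T <= tau <= (N + Z) / T  # the property in (4)
```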
In this paper, we consider a quantile regression version of the HP filter. It is the filtering method defined by replacing the quadratic loss function of the HP filter with the quantile regression loss function. More precisely, it is:
$$\begin{aligned} \min _{x_{1},\ldots ,x_{T}}\sum _{t=1}^{T}\rho _{\tau }(y_{t}-x_{t})+\lambda \sum _{t=3}^{T}(\Delta ^{2}x_{t})^{2}, \end{aligned}$$
(5)
where \(\lambda \) is a positive smoothing parameter. In this paper, we refer to (5) as the quantile Hodrick–Prescott (qHP) filter.
Does the suggested qHP filter also have the property given by (4)? This is not a trivial question, and it is essential for applying the filter. In this paper, we answer this question. For this purpose, we apply the reparameterization used by Paige and Trindade (2010). In addition to the main result, we present an empirical example.
This paper is organized as follows. In Sect. 2, we present some preliminary results. In Sect. 3, we present the main results of the paper. In Sect. 4, we present an empirical example. Section 5 concludes the paper. In the Appendix, we provide MATLAB and R user-defined functions for solving the minimization problem (5).
2 Preliminaries
In this section, after fixing notation, we provide some preliminary results.
2.1 Notations
Let \(\varvec{x}=[x_{1},\ldots ,x_{T}]'\), \(\varvec{\iota }=[1,\ldots ,1]'\in \mathbb {R}^{T}\), \(\varvec{z}=[1,\ldots ,T]'\), \(\varvec{I}_{r}\) denote the \(r\times r\) identity matrix, \(\varvec{J}=[\varvec{0},\varvec{I}_{T-2}]\in \mathbb {R}^{(T-2)\times T}\), and \(\varvec{D}\in \mathbb {R}^{(T-2)\times T}\) be the second-order difference matrix such that \(\varvec{D}\varvec{x}=[\Delta ^{2}x_{3},\ldots ,\Delta ^{2}x_{T}]'\). In addition, let
$$\begin{aligned} \varvec{A}=[\varvec{\iota },\varvec{z},\varvec{\Gamma }]\in \mathbb {R}^{T\times T}, \end{aligned}$$
(6)
and accordingly \(\varvec{\Gamma }\) is a \(T\times (T-2)\) matrix. Finally, for a vector \(\varvec{u}=[u_{1},\ldots ,u_{T}]'\), we denote the \(\ell _{p}\)-norm of \(\varvec{u}\) by \(\Vert \varvec{u}\Vert _{p}\), i.e., \(\Vert \varvec{u}\Vert _{p}=(|u_{1}|^{p}+\cdots +|u_{T}|^{p})^{1/p}\).
2.2 Matrix representations of HP and qHP filters
In matrix notation, (1) can be represented as
$$\begin{aligned} \min _{\varvec{x}\in \mathbb {R}^{T}}\Vert \varvec{y}-\varvec{x}\Vert _{2}^{2}+\psi \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}, \end{aligned}$$
(7)
where \(\varvec{y}=[y_{1},\ldots ,y_{T}]'\).
Likewise, given that \(\rho _{\tau }(u)\) in (3) can be represented as \(0.5|u|+(\tau -0.5)u\), we have
$$\begin{aligned} \sum _{t=1}^{T}\rho _{\tau }(y_{t}-x_{t})=0.5\Vert \varvec{y}-\varvec{x}\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x}), \end{aligned}$$
(8)
from which (5) can be represented as follows:
$$\begin{aligned} \min _{\varvec{x}\in \mathbb {R}^{T}}f(\varvec{x})=0.5\Vert \varvec{y}-\varvec{x}\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x})+\lambda \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}. \end{aligned}$$
(9)
Note that when \(\tau =0.5\), (9) reduces to
$$\begin{aligned} \min _{\varvec{x}\in \mathbb {R}^{T}}0.5\Vert \varvec{y}-\varvec{x}\Vert _{1}+\lambda \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}, \end{aligned}$$
(10)
which is the filtering method defined by replacing the quadratic loss function of the HP filter with the absolute loss function; it is thus more robust to outliers than the HP filter.
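The representation \(\rho _{\tau }(u)=0.5|u|+(\tau -0.5)u\) used in this subsection can be spot-checked numerically (a quick Python sketch of ours):

```python
def rho(u, tau):
    """Check function: tau * u if u >= 0, (tau - 1) * u if u < 0."""
    return tau * u if u >= 0 else (tau - 1.0) * u

def rho_alt(u, tau):
    """Equivalent representation 0.5|u| + (tau - 0.5)u."""
    return 0.5 * abs(u) + (tau - 0.5) * u

# The two expressions agree for all u: for u >= 0, 0.5u + tau*u - 0.5u = tau*u;
# for u < 0, -0.5u + tau*u - 0.5u = (tau - 1)u.
for tau in (0.1, 0.5, 0.9):
    for u in (-2.0, -0.3, 0.0, 0.7, 3.0):
        assert abs(rho(u, tau) - rho_alt(u, tau)) < 1e-12
```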
2.3 Matrices \(\varvec{A}\) and \(\varvec{D}\)
\(\varvec{A}\) in (6) was used by Paige and Trindade (2010) to derive a ridge regression representation of the HP filter. Given that \(|\varvec{A}|=1\), \(\varvec{A}\) is nonsingular. Moreover, from Paige and Trindade (2010), it follows that the last \(T-2\) rows of \(\varvec{A}^{-1}\) coincide with \(\varvec{D}\).
Thus, given that \(\varvec{J}\in \mathbb {R}^{(T-2)\times T}\) is a selection matrix, we have \(\varvec{J}\varvec{A}^{-1}=\varvec{D}\), and it immediately follows that
$$\begin{aligned} \varvec{D}\varvec{A}=\varvec{J}. \end{aligned}$$
(12)
Note that \(\varvec{D}\varvec{\Gamma }=\varvec{I}_{T-2}\), i.e., \(\varvec{\Gamma }\) is a right-inverse matrix of \(\varvec{D}\).
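The identities \(\varvec{D}\varvec{A}=\varvec{J}\), \(\varvec{D}\varvec{\Gamma }=\varvec{I}_{T-2}\), and \(|\varvec{A}|=1\) can be verified numerically. In the sketch below (ours), \(\varvec{\Gamma }\) is taken to have \((t,j)\) entry \(\max (t-j-1,0)\) — one explicit choice whose second differences yield the identity matrix, consistent with the construction of Paige and Trindade (2010), though the exact form is our assumption:

```python
import numpy as np

T = 10
# Second-order difference matrix D: rows [1, -2, 1]
D = np.zeros((T - 2, T))
for i in range(T - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

iota = np.ones(T)                      # column of ones
z = np.arange(1, T + 1, dtype=float)   # [1, ..., T]'
# Hypothetical explicit Gamma: (t, j) entry max(t - j - 1, 0), with t, j 1-based
Gamma = np.maximum(
    np.subtract.outer(np.arange(1, T + 1), np.arange(1, T - 1)) - 1, 0
).astype(float)
A = np.column_stack([iota, z, Gamma])
J = np.hstack([np.zeros((T - 2, 2)), np.eye(T - 2)])

assert np.allclose(D @ Gamma, np.eye(T - 2))  # Gamma is a right inverse of D
assert np.allclose(D @ A, J)                  # DA = J, since D iota = D z = 0
assert abs(np.linalg.det(A) - 1.0) < 1e-6     # |A| = 1, so A is nonsingular
```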
2.4 Solution set of qHP filter
Concerning \(f(\varvec{x})\), we have the following results:
Proposition 2.1
(i) There exists \(\widehat{\varvec{x}}\) such that \(f(\widehat{\varvec{x}})\le f(\varvec{x})\) for any \(\varvec{x}\in \mathbb {R}^{T}\) and (ii) the corresponding solution set is convex.
Proof
See the Appendix. \(\square \)
Remark 2.2
(i) Proposition 2.1(i) indicates that \(\widehat{\varvec{x}}\) is a global minimizer of the qHP filter objective \(f(\varvec{x})\). (ii) MATLAB and R user-defined functions for obtaining \(\widehat{\varvec{x}}\) are given in the Appendix.
2.5 Reparameterization of qHP filter
Let \(\varvec{\theta }=[\theta _{1},\ldots ,\theta _{T}]'\) and \(\widehat{\varvec{\theta }}=[{\widehat{\theta }}_{1},\ldots ,{\widehat{\theta }}_{T}]'\) be T-dimensional column vectors such that \(\varvec{x}=\varvec{A}\varvec{\theta }\) and \(\widehat{\varvec{x}}=\varvec{A}\widehat{\varvec{\theta }}\), respectively. Recall that \(\varvec{A}\) is nonsingular and its first (second) column is \(\varvec{\iota }\) (\(\varvec{z}\)). From (12), we have \(\varvec{D}\varvec{x}=\varvec{D}\varvec{A}\varvec{\theta }=\varvec{J}\varvec{\theta }=[\theta _{3},\ldots ,\theta _{T}]'\in \mathbb {R}^{T-2}\), which leads to
$$\begin{aligned} \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}=\Vert \varvec{J}\varvec{\theta }\Vert _{2}^{2}=\sum _{t=3}^{T}\theta _{t}^{2}. \end{aligned}$$
(13)
Accordingly, the objective function of the qHP filter can be represented with \(\varvec{\theta }\) as follows:
$$\begin{aligned} f(\varvec{A}\varvec{\theta })=0.5\Vert \varvec{y}-\varvec{A}\varvec{\theta }\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{A}\varvec{\theta })+\lambda \Vert \varvec{J}\varvec{\theta }\Vert _{2}^{2}. \end{aligned}$$
(14)
Likewise, we obtain \(f(\widehat{\varvec{x}}) =0.5\Vert \varvec{y}-\varvec{A}\widehat{\varvec{\theta }}\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{A}\widehat{\varvec{\theta }}) +\lambda \Vert \varvec{J}\widehat{\varvec{\theta }}\Vert _{2}^{2}\). Given that \(\widehat{\varvec{x}}\) is a global minimizer, combining the above equations yields the inequality
$$\begin{aligned} 0.5\Vert \varvec{y}-\varvec{A}\widehat{\varvec{\theta }}\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{A}\widehat{\varvec{\theta }})+\lambda \Vert \varvec{J}\widehat{\varvec{\theta }}\Vert _{2}^{2} \le 0.5\Vert \varvec{y}-\varvec{A}\varvec{\theta }\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{A}\varvec{\theta })+\lambda \Vert \varvec{J}\varvec{\theta }\Vert _{2}^{2} \end{aligned}$$
for any \(\varvec{\theta }\in \mathbb {R}^{T}\),
which shows that \(\widehat{\varvec{\theta }}=\varvec{A}^{-1}\widehat{\varvec{x}}\) is a solution of the penalized quantile regression defined by
$$\begin{aligned} \min _{\varvec{\theta }\in \mathbb {R}^{T}}g(\varvec{\theta })=0.5\Vert \varvec{y}-\varvec{A}\varvec{\theta }\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{A}\varvec{\theta })+\lambda \Vert \varvec{J}\varvec{\theta }\Vert _{2}^{2}. \end{aligned}$$
(15)
This is an alternative representation of the qHP filter.
3 The main results
The following result answers the question posed in Sect. 1.
Theorem 3.1
Let N and Z denote the numbers of negative and zero entries of the residual vector \(\varvec{y}-\widehat{\varvec{x}}\,(=\varvec{y}-\varvec{A}\widehat{\varvec{\theta }})\), respectively. Then, it follows that
$$\begin{aligned} \tau -\frac{Z}{T}\le \frac{N}{T}\le \tau . \end{aligned}$$
(16)
Proof
Given that \(\widehat{\varvec{\theta }}\) is a global minimizer of the convex function \(g(\varvec{\theta })\) in (15), \(\varvec{0}\) must belong to the subdifferential of \(g(\varvec{\theta })\) at \(\widehat{\varvec{\theta }}\), that is, there must exist \(\widehat{\varvec{v}}=[{\widehat{v}}_{1},\ldots ,{\widehat{v}}_{T}]'\) such that
$$\begin{aligned} -0.5\varvec{A}'\widehat{\varvec{v}}-(\tau -0.5)\varvec{A}'\varvec{\iota }+2\lambda \varvec{J}'\varvec{J}\widehat{\varvec{\theta }}=\varvec{0}, \end{aligned}$$
(17)
where
$$\begin{aligned} {\widehat{v}}_{t} {\left\{ \begin{array}{ll} =1 &{} \hbox { if}\ y_{t}-{\widehat{x}}_{t}>0,\\ \in [-1,1] &{} \hbox { if}\ y_{t}-{\widehat{x}}_{t}=0,\\ =-1 &{} \hbox { if}\ y_{t}-{\widehat{x}}_{t}<0, \end{array}\right. } \end{aligned}$$
(18)
for \(t=1,\ldots ,T\). Then, given that the first column of \(\varvec{A}\) is \(\varvec{\iota }\) and \(\varvec{J}'\varvec{J}=\mathsf {diag}(0,0,1,\ldots ,1)\in \mathbb {R}^{T\times T}\), the first row of (17) is \(-0.5\varvec{\iota }'\widehat{\varvec{v}}-(\tau -0.5)\varvec{\iota }'\varvec{\iota }=0\), from which we have
$$\begin{aligned} \varvec{\iota }'\widehat{\varvec{v}}=(1-2\tau )T. \end{aligned}$$
(19)
In addition, by definition of \(\widehat{\varvec{v}}\) in (18), it follows that \(P-N-Z\le \varvec{\iota }'\widehat{\varvec{v}}\le P-N+Z\), where \(P=T-N-Z\). By eliminating P from these inequalities, we obtain
$$\begin{aligned} T-2N-2Z\le \varvec{\iota }'\widehat{\varvec{v}}\le T-2N. \end{aligned}$$
(20)
Substituting (19) into (20) yields \(N\le T\tau \le N+Z\). By dividing the inequalities by \(T>0\) and rearranging them, we finally obtain (16). \(\square \)
Remark 3.2
It is notable that the property derives from the fact that \(\theta _{1}\) is not penalized in (15); recall that \(\varvec{J}\varvec{\theta }=[\theta _{3},\ldots ,\theta _{T}]'\), so \(\theta _{1}\), which plays the role of the intercept, does not appear in the penalty term.
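Theorem 3.1 can also be checked numerically. The paper solves (5)/(9) via CVX/CVXR (see the Appendix); the sketch below instead uses a simple ADMM iteration in Python with NumPy — an alternative solver of our own, not the paper's — splitting the check-loss term and the smoothness penalty. The residuals come out of a proximal step, so their zeros are exact and N and Z can be counted directly; since the iteration is run for finitely many steps, the assertions allow one observation of numerical slack.

```python
import numpy as np

def qhp_filter(y, tau, lam, rho=1.0, iters=5000):
    """qHP filter via ADMM: minimize sum_t rho_tau(y_t - x_t) + lam * ||D x||_2^2.

    Splitting: x carries the smoothness penalty, z carries the check loss,
    subject to x = z. Returns the trend and the residual vector."""
    y = np.asarray(y, dtype=float)
    T = y.size
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    Minv = np.linalg.inv(2.0 * lam * (D.T @ D) + rho * np.eye(T))
    z = y.copy()
    u = np.zeros(T)
    for _ in range(iters):
        x = Minv @ (rho * (z - u))               # ridge (smoothness) step
        w = y - (x + u)                          # proximal step on residuals:
        r = np.where(w > tau / rho, w - tau / rho,
                     np.where(w < (tau - 1.0) / rho, w + (1.0 - tau) / rho, 0.0))
        z = y - r                                # implied fit
        u += x - z                               # scaled dual update
    return z, y - z

t = np.arange(1.0, 61.0)
y = 0.05 * t + np.sin(0.7 * t)                   # deterministic test series
tau = 0.3
trend, res = qhp_filter(y, tau, lam=50.0)
T = y.size
N = int(np.sum(res < 0))                         # negative residuals
Z = int(np.sum(res == 0))                        # zero residuals (exact zeros)
# Theorem 3.1, up to one observation of ADMM slack: tau - Z/T <= N/T <= tau
assert N / T <= tau + 1.0 / T
assert (N + Z) / T >= tau - 1.0 / T
```

For exact computations, the CVX/CVXR code in the Appendix should be preferred; the ADMM sketch is only meant to make Theorem 3.1 tangible.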
Next, we provide another property of the qHP filter.
Proposition 3.3
Denote the convex solution set of the qHP filter by S. If \(\varvec{y}\notin S\), then it follows that
$$\begin{aligned} \Vert \varvec{D}\widehat{\varvec{x}}\Vert _{2}^{2}<\Vert \varvec{D}\varvec{y}\Vert _{2}^{2}. \end{aligned}$$
(21)
Proof
From Proposition 2.1, \(\widehat{\varvec{x}}\) is a global minimizer of \(f(\varvec{x})\) and it thus follows that \(f(\varvec{y})>f(\widehat{\varvec{x}})\) if \(\varvec{y}\notin S\). In addition, given (9), we have \(f(\varvec{y})=\lambda \Vert \varvec{D}\varvec{y}\Vert _{2}^{2}\). Moreover, given that \(\rho _{\tau }(y_{t}-{\widehat{x}}_{t})\ge 0\) for \(t=1,\ldots ,T\), we have \(f(\widehat{\varvec{x}})\ge \lambda \Vert \varvec{D}\widehat{\varvec{x}}\Vert _{2}^{2}\). By combining these results, we obtain
$$\begin{aligned} \lambda \Vert \varvec{D}\widehat{\varvec{x}}\Vert _{2}^{2}\le f(\widehat{\varvec{x}})<f(\varvec{y})=\lambda \Vert \varvec{D}\varvec{y}\Vert _{2}^{2} \end{aligned}$$
if \(\varvec{y}\notin S\). Finally, dividing the above inequalities by \(\lambda >0\) leads to (21). \(\square \)
Remark 3.4
(21) corresponds to the property of the HP filter presented in Weinert (2007, p. 961) and indicates that \(\widehat{\varvec{x}}\) is smoother than \(\varvec{y}\).
4 An empirical illustration
As an empirical illustration of the qHP filter, we estimate several qHP trends of the growth rates of US real GDP and real consumption, which were used in the empirical analysis of Müller and Watson (2018).Footnote 1 To apply the qHP filter, we must specify its smoothing parameter \(\lambda \) in (5)/(9). We selected the values of \(\lambda \) so that \(\widehat{\varvec{x}}\) satisfies
For obtaining \(\widehat{\varvec{x}}_{\mathrm {HP}}\) in (22), we must specify \(\psi \) in (1)/(7). We used the following three values of \(\psi \):
They are obtained from
Note that, given the data frequency is quarterly, \(p=80\), for example, corresponds to 20 years. For details about (24), see Yamada (2022, Sec. 4).Footnote 2 The values of \(\lambda \) corresponding to the values of \(\psi \) in (23) are tabulated in Table 1.
Figures 2, 3, 4, and 5 show the results. Figure 2 (resp. 3) shows empirical cumulative distribution function plots of the residuals, \(\varvec{y}-\widehat{\varvec{x}}\), where \(\varvec{y}\) denotes the growth rates of US real GDP (resp. consumption) and \(\widehat{\varvec{x}}\) denotes the trends estimated by the qHP filter. From the figures, we observe that in all cases the ratio of negative residuals is nearly equal to the corresponding value of \(\tau \); these results are consistent with Theorem 3.1. For example, from the top left panel of Fig. 2, the ratios of negative residuals corresponding to \(\tau =0.9,0.5,0.1\) are nearly equal to 90%, 50%, and 10%, respectively; see the three bullets in the panel. Figures 4 and 5 show the trends estimated by the qHP filter. From these figures, we observe that the estimated 90% (resp. 10%) qHP trends are roughly decreasing (resp. increasing) over the sample period, so the bands between these quantile trends become narrower. These results are consistent with a stylized fact known as the Great Moderation in the US economy, the decline in macroeconomic volatility starting in the mid-1980s. We stress that such results are not obtainable by applying the HP filter.
5 Conclusion
Theorem 3.1 is the main result of the paper. It states that even though the qHP filter is a penalized quantile regression, it satisfies one of the essential properties of quantile regression, given by (4). The key point is that \(\theta _{1}\) in (15) is not penalized. The empirical results illustrate Theorem 3.1 and suggest that the qHP filter can be a useful macroeconometric tool.
Notes
We appreciate the kindness of Ulrich K. Müller and Mark W. Watson in providing their dataset and m files.
In addition, an R package mFilter employs (24).
References
Bertsekas DP (1999) Nonlinear programming, 2nd edn. Athena Scientific, Belmont
Brinkhuis J, Tikhomirov V (2005) Optimization: insights and applications. Princeton University Press, Princeton
Cornea-Madeira A (2017) The explicit formula for the Hodrick-Prescott filter in a finite sample. Rev Econ Stat 99(2):314–318
CVX Research, Inc (2011) CVX: Matlab software for disciplined convex programming, version 2.0. http://cvxr.com/cvx
de Jong RM, Sakarya N (2016) The econometrics of the Hodrick-Prescott filter. Rev Econ Stat 98(2):310–317
Fu A, Narasimhan B, Boyd S (2020) CVXR: an R package for disciplined convex optimization. J Stat Softw 94(14):1–34
Grant M, Boyd S (2008) Graph implementations for nonsmooth convex programs. In: Blondel V, Boyd S, Kimura H (eds) Recent Advances in Learning and Control. Springer, London, pp 95–110
Hamilton JD (2018) Why you should never use the Hodrick-Prescott filter. Rev Econ Stat 100(5):831–843
Hodrick RJ, Prescott EC (1997) Postwar U.S. business cycles: an empirical investigation. J Money Credit Bank 29(1):1–16
Koenker R (2005) Quantile regression. Cambridge University Press, Cambridge
Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46(1):33–50
Müller U, Watson M (2018) Long-run covariability. Econometrica 86(3):775–804
OECD (2012) OECD system of composite leading indicators. http://www.oecd.org/std/leading-indicators/41629509.pdf
Paige RL, Trindade AA (2010) The Hodrick-Prescott filter: a special case of penalized spline smoothing. Electron J Stat 4:856–874
Phillips PCB, Jin S (2021) Business cycles, trend elimination, and the HP filter. Int Econ Rev 62(2):469–520
Phillips PCB, Shi Z (2021) Boosting: Why you can use the HP filter. Int Econ Rev 62(2):521–570
Sakarya N, de Jong RM (2020) A property of the Hodrick-Prescott filter and its application. Economet Theor 36(5):840–870
Weinert HL (2007) Efficient computation for Whittaker-Henderson smoothing. Comput Stat Data Anal 52(2):959–974
Yamada H (2015) Ridge regression representations of the generalized Hodrick-Prescott filter. J Jpn Stat Soc 45(2):121–128
Yamada H (2018) Several least squares problems related to the Hodrick-Prescott filtering. Commun Stat-Theory Methods 47(5):1022–1027
Yamada H (2018) Why does the trend extracted by the Hodrick-Prescott filtering seem to be more plausible than the linear trend? Appl Econ Lett 25(2):102–105
Yamada H (2020) A smoothing method that looks like the Hodrick-Prescott filter. Economet Theor 36(5):961–981
Yamada H (2022) Trend extraction from economic time series with missing observations by generalized Hodrick-Prescott filters. Economet Theor 38(3):419–453
Yamada H, Jahra FT (2019) An explicit formula for the smoother weights of the Hodrick-Prescott filter. Stud Nonlinear Dyn Econom 23(5):1–10
Funding
This study was funded by the Japan Society for the Promotion of Science (KAKENHI Grant Numbers 16H03606 and 20K20759).
Ethics declarations
Conflict of interest
Hiroshi Yamada declares that he has no conflict of interest.
Additional information
The author is grateful to an anonymous referee and the coordinating editor, Robert M. Kunst, for their valuable comments and suggestions. He also thanks the participants of the 3rd International Conference on Econometrics and Statistics (EcoSta 2019) held at National Chung Hsing University in Taiwan (June 25–27, 2019). The usual caveat applies.
A Appendix
1.1 A.1 Some results for the proof of Proposition 2.1
Lemma A.1
\(f(\varvec{x})\) is (i) coercive, (ii) convex, and (iii) continuous.
Proof
(i)
\(\Vert \varvec{D}\varvec{x}\Vert _{2}^{2}\) in \(f(\varvec{x})\) is not coercive. This is because, if \(\varvec{x}\) belongs to the space spanned by \(\varvec{\iota }\) and \(\varvec{z}\), then \(\Vert \varvec{D}\varvec{x}\Vert _{2}^{2}=0\) even when \(\Vert \varvec{x}\Vert _{2}\rightarrow \infty \). On the other hand, \(0.5\Vert \varvec{y}-\varvec{x}\Vert _{1}+(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x}) =\sum _{t=1}^{T}\rho _{\tau }(y_{t}-x_{t})\) in \(f(\varvec{x})\), where
$$\begin{aligned} \rho _{\tau }(y_{t}-x_{t}) = {\left\{ \begin{array}{ll} \tau |y_{t}-x_{t}| &{} \hbox { if}\ y_{t}\ge x_{t},\\ (1-\tau )|y_{t}-x_{t}| &{} \hbox { if}\ y_{t}<x_{t}, \end{array}\right. } \end{aligned}$$is coercive. It can be proved by combining the following (a) and (b). (a) If \(\Vert \varvec{x}\Vert _{2}=(|x_{1}|^{2}+\cdots +|x_{T}|^{2})^{1/2}\rightarrow \infty \), then at least one of \(|x_{1}|,\ldots ,|x_{T}|\) goes to infinity and (b) from the reverse triangle inequality, we have \(||x_{t}|-|y_{t}||\le |x_{t}-y_{t}|=|y_{t}-x_{t}|\), which leads to \(|y_{t}-x_{t}|\rightarrow \infty \) as \(|x_{t}|\rightarrow \infty \). Consequently, given that \(\Vert \varvec{D}\varvec{x}\Vert _{2}^{2}\ge 0\), we obtain
$$\begin{aligned} f(\varvec{x})\rightarrow \infty \quad (\Vert \varvec{x}\Vert _{2}\rightarrow \infty ). \end{aligned}$$

(ii)
Let \(\alpha \in [0,1]\) and \(\varvec{x}_{1},\varvec{x}_{2}\in \mathbb {R}^{T}\) and denote \((1-\alpha )\) by \(\beta \). In addition, let \(f_{1}(\varvec{x})=0.5\Vert \varvec{y}-\varvec{x}\Vert _{1}\), \(f_{2}(\varvec{x})=(\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x})\), and \(f_{3}(\varvec{x})=\lambda \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}\). Accordingly, \(f(\varvec{x})=\sum _{i=1}^{3}f_{i}(\varvec{x})\). (a) From the triangle inequality, it follows that
$$\begin{aligned}&f_{1}(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2}) =0.5\Vert \varvec{y}-(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})\Vert _{1} =0.5\Vert \alpha (\varvec{y}-\varvec{x}_{1})+\beta (\varvec{y}-\varvec{x}_{2})\Vert _{1}\\&\quad \le \alpha \times 0.5\Vert \varvec{y}-\varvec{x}_{1}\Vert _{1}+\beta \times 0.5\Vert \varvec{y}-\varvec{x}_{2}\Vert _{1} =\alpha f_{1}(\varvec{x}_{1})+\beta f_{1}(\varvec{x}_{2}), \end{aligned}$$which indicates that \(f_{1}(\varvec{x})\) is convex. (b) Given that
$$\begin{aligned}&f_{2}(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2}) =(\tau -0.5)\varvec{\iota }'\{\varvec{y}-(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})\}\\&\quad =\alpha \times (\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x}_{1})+\beta \times (\tau -0.5)\varvec{\iota }'(\varvec{y}-\varvec{x}_{2}) =\alpha f_{2}(\varvec{x}_{1})+\beta f_{2}(\varvec{x}_{2}), \end{aligned}$$\(f_{2}(\varvec{x})\) is convex. (c) Given that \(\lambda \Vert \varvec{D}\varvec{x}\Vert _{2}^{2}=\varvec{x}'(\lambda \varvec{D}'\varvec{D})\varvec{x}\), where \(\lambda \varvec{D}'\varvec{D}\) is a positive semidefinite matrix, \(f_{3}(\varvec{x})\) is convex (Bertsekas 1999, Proposition B.4(d)), and thus we have \(f_{3}(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})\le \alpha f_{3}(\varvec{x}_{1})+\beta f_{3}(\varvec{x}_{2})\). Combining (a)–(c), it follows that
$$\begin{aligned} f(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})&=\sum _{i=1}^{3}f_{i}(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})\\&\le \sum _{i=1}^{3}\left\{ \alpha f_{i}(\varvec{x}_{1})+\beta f_{i}(\varvec{x}_{2})\right\} =\alpha f(\varvec{x}_{1})+\beta f(\varvec{x}_{2}), \end{aligned}$$which shows that \(f(\varvec{x})\) is convex. (iii) Given that \(f(\varvec{x})\) is convex, it is continuous (Bertsekas 1999, Proposition B.9(a)).
\(\square \)
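The convexity established in Lemma A.1(ii) can also be spot-checked numerically by sampling points and mixing weights (a quick sketch of ours with NumPy; the data and parameters are arbitrary):

```python
import numpy as np

def qhp_objective(x, y, tau, lam, D):
    """f(x) = 0.5||y - x||_1 + (tau - 0.5) iota'(y - x) + lam ||D x||_2^2."""
    r = y - x
    return 0.5 * np.sum(np.abs(r)) + (tau - 0.5) * np.sum(r) + lam * np.sum((D @ x) ** 2)

rng = np.random.default_rng(0)
T, tau, lam = 15, 0.3, 5.0
D = np.zeros((T - 2, T))
for i in range(T - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]
y = rng.normal(size=T)

for _ in range(100):
    x1, x2 = rng.normal(size=T), rng.normal(size=T)
    a = rng.uniform()
    lhs = qhp_objective(a * x1 + (1 - a) * x2, y, tau, lam, D)
    rhs = a * qhp_objective(x1, y, tau, lam, D) + (1 - a) * qhp_objective(x2, y, tau, lam, D)
    # Jensen-type inequality: f(a x1 + (1-a) x2) <= a f(x1) + (1-a) f(x2)
    assert lhs <= rhs + 1e-9
```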
1.2 A.2 Proof of Proposition 2.1
(i) From Lemma A.1, \(f(\varvec{x})\) is a continuous and coercive function. Thus, from Brinkhuis and Tikhomirov (2005, Corollary 2.7), there exists \(\widehat{\varvec{x}}\) such that \(f(\widehat{\varvec{x}})\le f(\varvec{x})\) for any \(\varvec{x}\in \mathbb {R}^{T}\). (ii) Denote the solution set, \(\{\widehat{\varvec{x}}\in \mathbb {R}^{T}|\,f(\widehat{\varvec{x}})\le f(\varvec{x})\}\), by \(S^{*}\). In addition, let \(\alpha \in [0,1]\) and \(\varvec{x}_{1},\varvec{x}_{2}\in S^{*}\) and denote \((1-\alpha )\) by \(\beta \). Then, given that \(f(\varvec{x})\) is convex and \(f(\varvec{x}_{i})\le f(\varvec{x})\) for \(i=1,2\), it follows that
$$\begin{aligned} f(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2})\le \alpha f(\varvec{x}_{1})+\beta f(\varvec{x}_{2})\le \alpha f(\varvec{x})+\beta f(\varvec{x})=f(\varvec{x}), \end{aligned}$$
which indicates that \(\alpha \varvec{x}_{1}+\beta \varvec{x}_{2}\in S^{*}\) and thus \(S^{*}\) is convex.
1.3 A.3 MATLAB user-defined function
We provide a MATLAB user-defined function for obtaining \(\widehat{\varvec{x}}\). We note that it depends on CVX, a package for specifying and solving convex programs (CVX Research, Inc 2011; Grant and Boyd 2008).
1.4 A.4 R user-defined function
We provide an R user-defined function for obtaining \(\widehat{\varvec{x}}\). We note that it depends on CVXR, an R package for specifying and solving convex programs (Fu et al. 2020).
Yamada, H. Quantile regression version of Hodrick–Prescott filter. Empir Econ 64, 1631–1645 (2023). https://doi.org/10.1007/s00181-022-02292-8