An MCMC based Bayesian inference approach to parameter estimation of distributed lag models for forecasting used product returns for remanufacturing

Abstract

Forecasting the quantity and timing of used product returns is one of the major challenges faced by remanufacturers. Distributed lag models (DLMs) have been proposed in recent years to forecast used product returns from past sales data. However, the forecasting accuracy of a DLM depends heavily on how well the parameters of its lag function are estimated. The Bayesian inference approach proposed in previous studies requires evaluating the marginal likelihood, which is often difficult even for a slightly complex lag function. In this research, a Markov chain Monte Carlo (MCMC) based Bayesian inference approach is proposed to estimate the parameters of a lag function; it provides an efficient way to sample parameter values from the posterior distribution regardless of the complexity of the lag function. A case study on forecasting used product returns illustrates the proposed parameter estimation approach, which was validated by comparing it with the maximum likelihood estimation (MLE) method. The validation results show that forecasts of the number of used product returns based on parameters estimated with the proposed MCMC based Bayesian approach outperform those based on the MLE method in terms of mean absolute percentage error and error variance.
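The sampling idea behind the abstract can be illustrated with a minimal random-walk Metropolis–Hastings sketch. This is not the paper's model: the two-parameter Bass-style lag function, the Gaussian likelihood, the flat priors, and the illustrative "observed" return fractions below are all assumptions chosen only to show why no marginal likelihood is needed (it cancels in the acceptance ratio).

```python
import math
import random

random.seed(42)

# Hypothetical observed return fractions by lag period (illustrative data only).
observed = [0.05, 0.20, 0.30, 0.25, 0.12]

def lag_fraction(p, q, t):
    # Hypothetical Bass-style lag function: fraction of sales returned in
    # period t; assumed form, not the paper's exact lag function.
    def F(s):
        return (1 - math.exp(-(p + q) * s)) / (1 + (q / p) * math.exp(-(p + q) * s))
    return F(t) - F(t - 1)

def log_posterior(p, q, sigma=0.05):
    # Flat prior on (0,1)x(0,1); Gaussian likelihood around model fractions.
    # Only the *unnormalized* posterior is needed for MCMC.
    if not (0 < p < 1 and 0 < q < 1):
        return -math.inf
    return sum(-0.5 * ((y - lag_fraction(p, q, t)) / sigma) ** 2
               for t, y in enumerate(observed, start=1))

def metropolis(n_iter=5000, step=0.05):
    p, q = 0.1, 0.4                      # initial guess
    lp = log_posterior(p, q)
    samples = []
    for _ in range(n_iter):
        p_new = p + random.gauss(0, step)
        q_new = q + random.gauss(0, step)
        lp_new = log_posterior(p_new, q_new)
        # Accept with probability min(1, posterior ratio); the marginal
        # likelihood cancels in this ratio, so it is never computed.
        if math.log(random.random() + 1e-300) < lp_new - lp:
            p, q, lp = p_new, q_new, lp_new
        samples.append((p, q))
    return samples

samples = metropolis()
burned = samples[1000:]                  # discard burn-in
p_hat = sum(s[0] for s in burned) / len(burned)
q_hat = sum(s[1] for s in burned) / len(burned)
```

Posterior means `p_hat` and `q_hat` then serve as point estimates; in practice one would also inspect trace plots and acceptance rates before trusting the chain.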




Acknowledgments

The work described in this paper was supported by a PhD studentship (Project account code: RUNJ) from The Hong Kong Polytechnic University.

Author information

Corresponding author

Correspondence to Mohammed Geda.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

APPENDIX: Derivation of covariance matrix of the error vector of a DLM


In this section, the derivation of the covariances of the error vector, u = (u3, u4, u5, …, uT)′, where ut = εt − 2(1 − q)εt − 1 + (1 − q)2εt − 2 for t = 3, 4, 5, …, T, is presented. We assume the error terms εt, εt − 1, and εt − 2 are normally distributed with zero mean and variance σ2. Hence, the covariance of any error term with itself equals its variance, i.e. Cov(εt, εt) = Cov(εt − 1, εt − 1) = Cov(εt − 2, εt − 2) = σ2, whereas the covariance between any two distinct error terms is zero.

The covariance matrix of the error vector, denoted Σu, is the (T − 2) × (T − 2) matrix shown below.

$$ \Sigma_u=\left[\begin{array}{cccccc}\operatorname{Cov}(u_t,u_t) & \operatorname{Cov}(u_t,u_{t+1}) & \operatorname{Cov}(u_t,u_{t+2}) & \cdots & \cdots & \operatorname{Cov}(u_t,u_T)\\ \operatorname{Cov}(u_{t+1},u_t) & \operatorname{Cov}(u_{t+1},u_{t+1}) & \operatorname{Cov}(u_{t+1},u_{t+2}) & \cdots & \cdots & \operatorname{Cov}(u_{t+1},u_T)\\ \operatorname{Cov}(u_{t+2},u_t) & \operatorname{Cov}(u_{t+2},u_{t+1}) & \operatorname{Cov}(u_{t+2},u_{t+2}) & \cdots & \cdots & \operatorname{Cov}(u_{t+2},u_T)\\ \vdots & \vdots & \vdots & \ddots & & \vdots\\ \operatorname{Cov}(u_T,u_t) & \operatorname{Cov}(u_T,u_{t+1}) & \operatorname{Cov}(u_T,u_{t+2}) & \cdots & \cdots & \operatorname{Cov}(u_T,u_T)\end{array}\right],\qquad t=3 $$

Detailed computation of the elements of Σu is presented below.

$$ \begin{aligned}
\operatorname{Cov}(u_t,u_t) &= \mathrm{E}\left[\left(u_t-\mathrm{E}(u_t)\right)\left(u_t-\mathrm{E}(u_t)\right)\right]=\mathrm{E}\left[u_t^2\right]\\
&=\mathrm{E}\left[\left(\varepsilon_t-2(1-q)\varepsilon_{t-1}+(1-q)^2\varepsilon_{t-2}\right)^2\right]\qquad\text{for } t=3,4,5,\dots,T\\
&=\mathrm{E}\left[\varepsilon_t^2-4(1-q)\varepsilon_t\varepsilon_{t-1}+2(1-q)^2\varepsilon_t\varepsilon_{t-2}+4(1-q)^2\varepsilon_{t-1}^2-4(1-q)^3\varepsilon_{t-1}\varepsilon_{t-2}+(1-q)^4\varepsilon_{t-2}^2\right]\\
&=\mathrm{E}\left(\varepsilon_t^2\right)-4(1-q)\mathrm{E}\left(\varepsilon_t\varepsilon_{t-1}\right)+2(1-q)^2\mathrm{E}\left(\varepsilon_t\varepsilon_{t-2}\right)+4(1-q)^2\mathrm{E}\left(\varepsilon_{t-1}^2\right)\\
&\quad-4(1-q)^3\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t-2}\right)+(1-q)^4\mathrm{E}\left(\varepsilon_{t-2}^2\right)\\
&=\sigma^2+4(1-q)^2\sigma^2+(1-q)^4\sigma^2=\sigma^2\left(1+4(1-q)^2+(1-q)^4\right)
\end{aligned} $$
$$ \begin{aligned}
\operatorname{Cov}(u_t,u_{t+1}) &= \mathrm{E}\left[\left(u_t-\mathrm{E}(u_t)\right)\left(u_{t+1}-\mathrm{E}(u_{t+1})\right)\right]=\mathrm{E}\left[u_t u_{t+1}\right]\\
&=\mathrm{E}\left[\left(\varepsilon_t-2(1-q)\varepsilon_{t-1}+(1-q)^2\varepsilon_{t-2}\right)\left(\varepsilon_{t+1}-2(1-q)\varepsilon_t+(1-q)^2\varepsilon_{t-1}\right)\right]\\
&=\mathrm{E}\left(\varepsilon_t\varepsilon_{t+1}\right)-2(1-q)\mathrm{E}\left(\varepsilon_t^2\right)+(1-q)^2\mathrm{E}\left(\varepsilon_t\varepsilon_{t-1}\right)-2(1-q)\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+1}\right)+4(1-q)^2\mathrm{E}\left(\varepsilon_t\varepsilon_{t-1}\right)\\
&\quad-2(1-q)^3\mathrm{E}\left(\varepsilon_{t-1}^2\right)+(1-q)^2\mathrm{E}\left(\varepsilon_{t+1}\varepsilon_{t-2}\right)-2(1-q)^3\mathrm{E}\left(\varepsilon_t\varepsilon_{t-2}\right)+(1-q)^4\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t-2}\right)\\
&=-2(1-q)\sigma^2-2(1-q)^3\sigma^2=-2\sigma^2\left((1-q)+(1-q)^3\right)=-2\sigma^2(1-q)\left(1+(1-q)^2\right)
\end{aligned} $$
$$ \begin{aligned}
\operatorname{Cov}(u_t,u_{t+2}) &=\mathrm{E}\left[\left(\varepsilon_t-2(1-q)\varepsilon_{t-1}+(1-q)^2\varepsilon_{t-2}\right)\left(\varepsilon_{t+2}-2(1-q)\varepsilon_{t+1}+(1-q)^2\varepsilon_t\right)\right]\\
&=\mathrm{E}\left(\varepsilon_t\varepsilon_{t+2}\right)-2(1-q)\mathrm{E}\left(\varepsilon_t\varepsilon_{t+1}\right)+(1-q)^2\mathrm{E}\left(\varepsilon_t^2\right)-2(1-q)\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+2}\right)+4(1-q)^2\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+1}\right)\\
&\quad-2(1-q)^3\mathrm{E}\left(\varepsilon_t\varepsilon_{t-1}\right)+(1-q)^2\mathrm{E}\left(\varepsilon_{t-2}\varepsilon_{t+2}\right)-2(1-q)^3\mathrm{E}\left(\varepsilon_{t-2}\varepsilon_{t+1}\right)+(1-q)^4\mathrm{E}\left(\varepsilon_t\varepsilon_{t-2}\right)\\
&=(1-q)^2\sigma^2
\end{aligned} $$
$$ \begin{aligned}
\operatorname{Cov}(u_t,u_{t+3}) &=\mathrm{E}\left[\left(\varepsilon_t-2(1-q)\varepsilon_{t-1}+(1-q)^2\varepsilon_{t-2}\right)\left(\varepsilon_{t+3}-2(1-q)\varepsilon_{t+2}+(1-q)^2\varepsilon_{t+1}\right)\right]\\
&=\mathrm{E}\left(\varepsilon_t\varepsilon_{t+3}\right)-2(1-q)\mathrm{E}\left(\varepsilon_t\varepsilon_{t+2}\right)+(1-q)^2\mathrm{E}\left(\varepsilon_t\varepsilon_{t+1}\right)-2(1-q)\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+3}\right)+4(1-q)^2\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+2}\right)\\
&\quad-2(1-q)^3\mathrm{E}\left(\varepsilon_{t-1}\varepsilon_{t+1}\right)+(1-q)^2\mathrm{E}\left(\varepsilon_{t-2}\varepsilon_{t+3}\right)-2(1-q)^3\mathrm{E}\left(\varepsilon_{t-2}\varepsilon_{t+2}\right)+(1-q)^4\mathrm{E}\left(\varepsilon_{t-2}\varepsilon_{t+1}\right)\\
&=0
\end{aligned} $$

From the above computation, the remaining off-diagonal entries of the matrix, i.e. Cov(ut, ut + k) for k ≥ 3 and t = 3, 4, 5, …, T, are all zero. Therefore, the covariance matrix Σu can be expressed in terms of the parameters q and σ2 as shown below.

$$ \Sigma_u=\sigma^2\left[\begin{array}{cccccc}1+4(1-q)^2+(1-q)^4 & -2(1-q)\left(1+(1-q)^2\right) & (1-q)^2 & 0 & \cdots & 0\\ -2(1-q)\left(1+(1-q)^2\right) & 1+4(1-q)^2+(1-q)^4 & -2(1-q)\left(1+(1-q)^2\right) & (1-q)^2 & \cdots & 0\\ (1-q)^2 & -2(1-q)\left(1+(1-q)^2\right) & 1+4(1-q)^2+(1-q)^4 & -2(1-q)\left(1+(1-q)^2\right) & \cdots & 0\\ 0 & (1-q)^2 & -2(1-q)\left(1+(1-q)^2\right) & 1+4(1-q)^2+(1-q)^4 & \cdots & \vdots\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & 1+4(1-q)^2+(1-q)^4\end{array}\right] $$
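The banded structure of Σu can be checked numerically. The sketch below builds the matrix from the closed-form entries and compares its first row against Monte Carlo estimates of Cov(u3, u3), Cov(u3, u4), Cov(u3, u5); the values of q, σ, and T are illustrative assumptions, not taken from the case study.

```python
import random

random.seed(0)

def build_sigma_u(q, sigma2, T):
    """Covariance matrix of u_t = e_t - 2(1-q)e_{t-1} + (1-q)^2 e_{t-2},
    t = 3..T, using the closed-form entries derived in the appendix."""
    a = sigma2 * (1 + 4 * (1 - q) ** 2 + (1 - q) ** 4)   # diagonal
    b = -2 * sigma2 * (1 - q) * (1 + (1 - q) ** 2)       # first off-diagonal
    c = sigma2 * (1 - q) ** 2                            # second off-diagonal
    n = T - 2
    S = [[0.0] * n for _ in range(n)]                    # entries beyond the band stay 0
    for i in range(n):
        S[i][i] = a
        if i + 1 < n:
            S[i][i + 1] = S[i + 1][i] = b
        if i + 2 < n:
            S[i][i + 2] = S[i + 2][i] = c
    return S

def simulate_cov(q, sigma, T, n_rep=100000):
    """Monte Carlo estimates of Cov(u_3,u_3), Cov(u_3,u_4), Cov(u_3,u_5)."""
    sums = [0.0, 0.0, 0.0]
    for _ in range(n_rep):
        e = [random.gauss(0, sigma) for _ in range(T)]   # e[0] plays eps_1, etc.
        u = [e[t] - 2 * (1 - q) * e[t - 1] + (1 - q) ** 2 * e[t - 2]
             for t in range(2, T)]                       # u_3 .. u_T
        sums[0] += u[0] * u[0]
        sums[1] += u[0] * u[1]
        sums[2] += u[0] * u[2]
    return [s / n_rep for s in sums]                     # zero-mean, so E[uu'] = Cov

q, sigma, T = 0.6, 1.0, 8        # illustrative parameter values
S = build_sigma_u(q, sigma ** 2, T)
est = simulate_cov(q, sigma, T)
```

With these values the simulated covariances agree with the closed-form entries to within Monte Carlo error, and every entry with |i − j| ≥ 3 is exactly zero, matching the band structure above.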


About this article


Cite this article

Geda, M., Kwong, C.K. An MCMC based Bayesian inference approach to parameter estimation of distributed lag models for forecasting used product returns for remanufacturing. Jnl Remanufactur (2021). https://doi.org/10.1007/s13243-020-00099-3


Keywords

  • Forecasting
  • Product returns
  • Uncertainty
  • Remanufacturing
  • Bayesian inference
  • Markov chain Monte Carlo