The Journal of Supercomputing, Volume 72, Issue 10, pp 3993–4020

Towards implementation of residual-feedback GMDH neural network on parallel GPU memory guided by a regression curve

  • Ricardo Brito
  • Simon Fong
  • Kyungeun Cho
  • Wei Song
  • Raymond Wong
  • Sabah Mohammed
  • Jinan Fiaidhi


Abstract

GMDH, which stands for Group Method of Data Handling, is an evolutionary type of neural network. It has received much attention in the supercomputing research community because of its ability to optimize its internal structure for maximum prediction accuracy. GMDH works by evolving itself from a basic network, expanding its number of neurons and hidden layers until no further performance gain can be obtained. The authors earlier proposed a novel strategy that extends existing GMDH neural network techniques. The new strategy, called residual-feedback, retains past prediction errors and reuses them as additional multivariate inputs to the GMDH neural network. This matters because the strength of GMDH, like that of any neural network, lies in predicting outcomes from multivariate data, and it is highly noise-tolerant. GMDH is a well-known ensemble type of prediction method capable of modeling highly non-linear relations, and maximum accuracy is often achieved with only a minimal number of neurons and the simplest layered structure. This paper contributes the technical design of implementing GMDH on GPU, where all the weight computations run in parallel on GPU memory blocks. It is a first step towards developing a complex neural network architecture on GPU that can evolve and expand its structure to the minimal form sufficient for maximum prediction accuracy on the given input data.
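The growth procedure summarized above can be sketched as follows. This is an illustrative NumPy stand-in, not the authors' CUDA implementation: the function names, the top-k neuron selection rule, and the residual-lagging scheme are all assumptions made for exposition. Each pairwise neuron fit is independent of the others, which is the property that lets the weight computations map onto separate parallel GPU blocks in the paper's design.

```python
# Illustrative GMDH sketch (NumPy stand-in for the paper's CUDA version).
# Names and the selection rule are assumptions, not the authors' code.
import numpy as np

def fit_neuron(xi, xj, y):
    """Fit one Ivakhnenko polynomial neuron by least squares:
    y ~ a + b*xi + c*xj + d*xi*xj + e*xi^2 + f*xj^2."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ w

def grow_layer(X, y, keep=4):
    """Fit a neuron for every pair of inputs; keep the lowest-MSE ones.
    Each pairwise fit is independent, so on a GPU each could run in its
    own block, as the paper proposes for the weight computations."""
    cands = []
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            pred = fit_neuron(X[:, i], X[:, j], y)
            cands.append((float(np.mean((pred - y) ** 2)), pred))
    cands.sort(key=lambda c: c[0])
    best = cands[:keep]
    # Surviving neurons' outputs become the next layer's inputs.
    return np.column_stack([p for _, p in best]), best[0][0]

def gmdh_fit(X, y, max_layers=5):
    """Evolve the network layer by layer; stop when the best neuron's
    error no longer improves (the self-organizing stopping rule)."""
    err = np.inf
    for _ in range(max_layers):
        X_next, best_err = grow_layer(X, y)
        if best_err >= err:
            break
        err, X = best_err, X_next
    return err

def add_residual_inputs(X, residuals, lag=1):
    """Residual-feedback idea: lagged past prediction errors are fed
    back in as extra input columns for the next round of modeling."""
    r = np.roll(residuals, lag)
    r[:lag] = 0.0  # no residual is available before the series starts
    return np.column_stack([X, r])
```

In this sketch the stopping rule compares only the best neuron's training MSE between layers; the GMDH literature more commonly uses an external (validation) criterion, which would slot in at the same point in `gmdh_fit`.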


Keywords

Artificial neural networks · GMDH · Parallel execution · NVIDIA CUDA · GPU



Acknowledgements

The authors are thankful for the financial support from the research grant “Rare Event Forecasting and Monitoring in Spatial Wireless Sensor Network Data,” Grant No. MYRG2014-00065-FST, and Grant No. FDCT/126/2014/A3, offered by the University of Macau and the Macau SAR Government.



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Ricardo Brito (1)
  • Simon Fong (1)
  • Kyungeun Cho (2)
  • Wei Song (3)
  • Raymond Wong (4)
  • Sabah Mohammed (5)
  • Jinan Fiaidhi (5)

  1. Department of Computer and Information Science, University of Macau, Taipa, Macau SAR, People’s Republic of China
  2. Department of Computer and Multimedia Engineering, Dongguk University, Seoul, Korea
  3. College of Information Engineering, North China University of Technology, Beijing, China
  4. School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
  5. Department of Computer Science, Lakehead University, Thunder Bay, Canada
