Machine Vision and Applications, Volume 21, Issue 5, pp 613–626

Assessment of the influence of adaptive components in trainable surface inspection systems

  • Christian Eitzinger
  • W. Heidl
  • E. Lughofer
  • S. Raiser
  • J.E. Smith
  • M.A. Tahir
  • D. Sannen
  • H. Van Brussel
Special Issue


In this paper, we present a framework for the classification of images in surface inspection tasks and address several key aspects of the processing chain from the original image to the final classification result. A major contribution of this paper is a quantitative assessment of how incorporating adaptivity into the feature calculation, the feature pre-processing, and the classifiers themselves influences the final image classification performance. Results achieved on a range of artificial and real-world test data from applications in printing, die-casting, metal processing, and food production are presented.
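The processing chain summarized above (image → feature calculation → feature pre-processing → trainable classifier, with adaptivity at each stage) can be illustrated with a minimal sketch. All names here (`calc_features`, `RunningScaler`, `CentroidClassifier`) are hypothetical stand-ins, not components of the paper's actual system; the adaptive elements are reduced to incrementally updated scaling ranges and class centroids.

```python
def calc_features(image):
    """Feature calculation: mean intensity, spread, and region size
    for a patch given as a list of pixel rows. Illustrative only."""
    flat = [p for row in image for p in row]
    n = len(flat)
    mean = sum(flat) / n
    var = sum((p - mean) ** 2 for p in flat) / n
    return [mean, var ** 0.5, float(n)]

class RunningScaler:
    """Adaptive pre-processing: per-feature min-max scaling whose
    ranges are widened as new samples arrive."""
    def __init__(self):
        self.lo, self.hi = None, None

    def update(self, x):
        if self.lo is None:
            self.lo, self.hi = list(x), list(x)
        else:
            self.lo = [min(a, b) for a, b in zip(self.lo, x)]
            self.hi = [max(a, b) for a, b in zip(self.hi, x)]

    def transform(self, x):
        # Features with a degenerate range map to 0.0.
        return [(v - l) / (h - l) if h > l else 0.0
                for v, l, h in zip(x, self.lo, self.hi)]

class CentroidClassifier:
    """Trainable classifier: per-class centroids updated incrementally,
    prediction by nearest centroid (squared Euclidean distance)."""
    def __init__(self):
        self.centroids, self.counts = {}, {}

    def update(self, x, label):
        if label not in self.centroids:
            self.centroids[label], self.counts[label] = list(x), 1
        else:
            c, n = self.centroids[label], self.counts[label]
            self.centroids[label] = [(ci * n + xi) / (n + 1)
                                     for ci, xi in zip(c, x)]
            self.counts[label] = n + 1

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lb: sum((a - b) ** 2
                                      for a, b in zip(self.centroids[lb], x)))
```

In use, the scaler's ranges are fitted (and can later be re-updated) on the training features before the classifier's centroids are built on the scaled vectors; both stages can then absorb further samples online, which is the sense in which the chain is "adaptive".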







Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Christian Eitzinger (1)
  • W. Heidl (1)
  • E. Lughofer (2)
  • S. Raiser (2)
  • J.E. Smith (3)
  • M.A. Tahir (3)
  • D. Sannen (4)
  • H. Van Brussel (4)

  1. Profactor GmbH, Steyr, Austria
  2. Johannes Kepler University, Linz, Austria
  3. University of the West of England, Bristol, UK
  4. Katholieke Universiteit Leuven, Leuven, Belgium
