Fuzzy Choquet Integration of Deep Convolutional Neural Networks for Remote Sensing

  • Chapter
  • First Online:
Computational Intelligence for Pattern Recognition

Part of the book series: Studies in Computational Intelligence (SCI, volume 777)

Abstract

What deep learning currently lacks are the heterogeneous and dynamic capabilities of the human system. In part, this is because no single architecture is currently capable of modeling and representing the complexity of the human system. Therefore, a heterogeneous set of pathways from sensory stimulus to cognitive function needs to be developed in a richer computational model. Herein, we explore the learning of multiple pathways, realized as different deep neural network architectures, coupled with appropriate data/information fusion. Specifically, we explore the advantage of data-driven optimization of fusing different deep networks (GoogLeNet, CaffeNet and ResNet) in either a per-class (per output neuron) or shared-weight (a single fusion across all classes) fashion. In addition, we explore indices that tell us the importance of each network, how the networks interact, and what aggregation was learned. Experiments are provided in the context of remote sensing on the UC Merced and WHU-RS19 data sets. In particular, we show that fusion is the top performer, that each network is needed across the various target classes, and that unique aggregations (i.e., not common operators) are learned.
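To make the fusion step concrete, the following is a minimal sketch, not the chapter's implementation, of fusing the per-class outputs of the three networks with a discrete Choquet integral. The helper name choquet_integral and the fuzzy measure values are hypothetical and chosen only for illustration; in the chapter the measure is learned from data, either one measure per class or one shared measure.

    # Hypothetical sketch of per-class Choquet integral (ChI) fusion of three DCNN outputs.
    # The fuzzy measure 'mu' below is illustrative only; the chapter learns it from data.
    import numpy as np

    def choquet_integral(h, mu):
        """Discrete ChI of the scores h (one per network) w.r.t. the fuzzy measure mu,
        where mu is a dict keyed by frozensets of network indices."""
        order = np.argsort(h)[::-1]            # sort networks by descending score
        fused, prev_mu, chain = 0.0, 0.0, frozenset()
        for k in order:
            k = int(k)
            chain = chain | {k}                # grow the set of top-ranked networks
            fused += h[k] * (mu[chain] - prev_mu)
            prev_mu = mu[chain]
        return fused

    # Example measure over {0: GoogLeNet, 1: CaffeNet, 2: ResNet} for one class
    mu = {frozenset(): 0.0,
          frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.5,
          frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.8, frozenset({1, 2}): 0.7,
          frozenset({0, 1, 2}): 1.0}

    scores = np.array([0.7, 0.2, 0.9])         # the three networks' outputs for one class
    print(choquet_integral(scores, mu))        # fused confidence for that class

In the per-class variant one such measure is learned for every output neuron, while the shared-weight variant uses a single measure across all classes.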


Notes

  1. If \(\mu (X)<1\), properties like idempotency and boundedness are not guaranteed.

  2. Due to its use of the maximum (t-conorm) and minimum (t-norm) operators, the Sugeno FI cannot generate arbitrary values between the minimum and maximum of the inputs. Instead, it selects one of the FM or input values, i.e., one of at most \(2^{N}+N\) values (see the formulas after these notes).

  3. The ChI is used frequently for various reasons: e.g., it is differentiable [62]; for an additive (probability) measure it recovers the Lebesgue integral; and it yields a wider spectrum of values between the minimum and maximum (versus the discrete and relatively small number of values that the Sugeno FI selects from). See the formulas after these notes.

  4. Go to www.derektanderson.com/FuzzyLibrary and www.github.com/scottgs/fi_library.
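For reference, the two integrals discussed in notes 2 and 3 can be written in their standard discrete forms. Assuming the inputs are sorted so that \(h(x_{\pi(1)}) \ge \dots \ge h(x_{\pi(N)})\), with \(A_i = \{x_{\pi(1)}, \dots, x_{\pi(i)}\}\) and \(\mu(A_0) = 0\) (notation chosen here for readability, not taken verbatim from the chapter):

    \begin{align}
      S_{\mu}(h) &= \max_{i=1}^{N} \min\big( h(x_{\pi(i)}),\, \mu(A_i) \big), \\
      C_{\mu}(h) &= \sum_{i=1}^{N} h(x_{\pi(i)}) \left[ \mu(A_i) - \mu(A_{i-1}) \right].
    \end{align}

The Sugeno FI \(S_{\mu}\) only ever returns one of the input or measure values, hence the at most \(2^{N}+N\) distinct outputs noted above, whereas the Choquet integral \(C_{\mu}\) is a weighted sum and can take any value between the minimum and maximum input.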

References

  1. W.S. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5(4), 115–133 (1943)

  2. L.A. Zadeh, Fuzzy sets. Inf. Control 8(3), 338–353 (1965)

  3. J.H. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press, Ann Arbor, MI, 1975; reissued 1992)

  4. R. Collobert, J. Weston, A unified architecture for natural language processing: deep neural networks with multitask learning, in Proceedings of the 25th International Conference on Machine Learning (ACM, New York, 2008), pp. 160–167

  5. R. Socher, C.C. Lin, C. Manning, A.Y. Ng, Parsing natural scenes and natural language with recursive neural networks, in Proceedings of the 28th International Conference on Machine Learning (ICML-11) (2011), pp. 129–136

  6. K. Fukushima, S. Miyake, Neocognitron: a self-organizing neural network model for a mechanism of visual pattern recognition, in Competition and Cooperation in Neural Nets (Springer, Berlin, 1982), pp. 267–285

  7. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems (2012), pp. 1097–1105

  8. D. Ciregan, U. Meier, J. Schmidhuber, Multi-column deep neural networks for image classification, in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, New York, 2012), pp. 3642–3649

  9. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for computer vision, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2818–2826

  10. D.C. Ciresan, U. Meier, L.M. Gambardella, J. Schmidhuber, Deep, big, simple neural nets for handwritten digit recognition. Neural Comput. 22(12), 3207–3220 (2010)

  11. C. Bentes, D. Velotto, S. Lehner, Target classification in oceanographic SAR images with deep neural networks: architecture and initial results, in 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (IEEE, New York, 2015), pp. 3703–3706

  12. W. Huang, L. Xiao, Z. Wei, H. Liu, S. Tang, A new pan-sharpening method with deep neural networks. IEEE Geosci. Remote Sens. Lett. 12(5), 1037–1041 (2015)

  13. X. Chen, S. Xiang, C.L. Liu, C.H. Pan, Vehicle detection in satellite images by hybrid deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 11(10), 1797–1801 (2014)

  14. J. Yue, W. Zhao, S. Mao, H. Liu, Spectral-spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 6(6), 468–477 (2015)

  15. A.N. Steinberg, C.L. Bowman, F.E. White, Revisions to the JDL data fusion model, in Handbook of Data Fusion (1999)

  16. M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in European Conference on Computer Vision (Springer, Berlin, 2014), pp. 818–833

  17. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9

  18. G.E. Hinton, R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)

  19. P. Vincent, H. Larochelle, Y. Bengio, P.-A. Manzagol, Extracting and composing robust features with denoising autoencoders, in Proceedings of the 25th International Conference on Machine learning (ACM, New York, 2008), pp. 1096–1103

  20. M. Chen, Z. Xu, K. Weinberger, F. Sha, Marginalized denoising autoencoders for domain adaptation (2012). arXiv preprint arXiv:1206.4683

  21. Q. Fu, X. Yu, X. Wei, Z. Xue, Semi-supervised classification of hyperspectral imagery based on stacked autoencoders, in Eighth International Conference on Digital Image Processing (ICDIP 2016) (International Society for Optics and Photonics, 2016), p. 100332B

  22. J. Geng, J. Fan, H. Wang, X. Ma, B. Li, F. Chen, High-resolution SAR image classification via deep convolutional autoencoders. IEEE Geosci. Remote Sens. Lett. 12(11), 2351–2355 (2015)

  23. G.E. Hinton, Deep belief networks. Scholarpedia 4(5), 5947 (2009)

  24. H. Lee, R. Grosse, R. Ranganath, A.Y. Ng, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, in Proceedings of the 26th Annual International Conference on Machine Learning (ACM, New York, 2009), pp. 609–616

  25. T. Mikolov, M. Karafiát, L. Burget, J. Černocký, S. Khudanpur, Recurrent neural network based language model, in Interspeech, vol. 2 (2010), 3 p

  26. K. Funahashi, Y. Nakamura, Approximation of dynamical systems by continuous time recurrent neural networks. Neural Netw. 6(6), 801–806 (1993)

  27. S. Rajurkar, N.K. Verma, Developing deep fuzzy network with Takagi-Sugeno fuzzy inference system, in 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2017), pp. 1–6. https://doi.org/10.1109/FUZZ-IEEE.2017.8015718

  28. L. Xu, J.S. Ren, C. Liu, J. Jia, Deep convolutional neural network for image deconvolution, in Advances in Neural Information Processing Systems (2014), pp. 1790–1798

  29. M.D. Zeiler, D. Krishnan, G.W. Taylor, R. Fergus, Deconvolutional networks, in 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, New York, 2010), pp. 2528–2535

  30. M.D. Zeiler, G.W. Taylor, R. Fergus, Adaptive deconvolutional networks for mid and high level feature learning, in 2011 IEEE International Conference on Computer Vision (ICCV) (IEEE, New York, 2011), pp. 2018–2025

  31. Y. Won, P.D. Gader, P.C. Coffield, Morphological shared-weight networks with applications to automatic target recognition. IEEE Trans. Neural Netw. 8(5), 1195–1203 (1997)

  32. X. Jin, C.H. Davis, Vehicle detection from high-resolution satellite imagery using morphological shared-weight neural networks. Image Vis. Comput. 25(9), 1422–1431 (2007)

  33. K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-scale Image Recognition (2014). arXiv preprint arXiv:1409.1556

  34. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G.S. Corrado, A. Davis, J. Dean, M. Devin, et al., Tensorflow: Large-scale Machine Learning on Heterogeneous Distributed Systems (2016). arXiv preprint arXiv:1603.04467

  35. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: convolutional architecture for fast feature embedding, in Proceedings of the 22nd ACM International Conference on Multimedia (ACM, New York, 2014), pp. 675–678

  36. A. Vedaldi, K. Lenc, Matconvnet: convolutional neural networks for matlab, in Proceedings of the 23rd ACM International Conference on Multimedia (ACM, New York, 2015), pp. 689–692

  37. J. Yosinski, J. Clune, Y. Bengio, H. Lipson, How transferable are features in deep neural networks? in Advances in Neural Information Processing Systems (2014), pp. 3320–3328

  38. N. Srivastava, G.E. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  39. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016)

  40. S. Ioffe, C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, in International Conference on Machine Learning (2015), pp. 448–456

  41. L. Brown, Deep Learning with GPUs, http://www.nvidia.com/content/events/geoInt2015/

  42. L. Bottou, Stochastic gradient learning in neural networks. Proc. Neuro-Nîmes 91(8) (1991)

  43. B.T. Polyak, Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  44. I. Sutskever, J. Martens, G. Dahl, G. Hinton, On the importance of initialization and momentum in deep learning, in International Conference on Machine Learning (2013), pp. 1139–1147

  45. J. Duchi, E. Hazan, Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12(Jul), 2121–2159 (2011)

  46. T. Tieleman, G. Hinton, Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. Coursera: Neural Netw. Mach. Learn. 4(2), 26–31 (2012)

  47. D. Kingma, J. Ba, Adam: a method for stochastic optimization, in 3rd International Conference for Learning Representations (2015)

  48. J.E. Ball, D.T. Anderson, C.S. Chan, A comprehensive survey of deep learning in remote sensing: theories, tools and challenges for the community. J. Appl. Remote Sens. (2017)

  49. S.K. Pal, S. Mitra, Neuro-fuzzy Pattern Recognition: Methods in Soft Computing (Wiley Inc, New Jersey, 1999)

  50. J.M. Keller, D.J. Hunt, Incorporating fuzzy membership functions into the perceptron algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 6, 693–699 (1985)

  51. R.R. Yager, Applications and extensions of OWA aggregations. Int. J. Man Mach. Stud. 37(1), 103–122 (1992)

  52. R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans. Syst. Man Cybern. 18(1), 183–190 (1988)

  53. S.-B. Cho, Fuzzy aggregation of modular neural networks with ordered weighted averaging operators. Int. J. Approx. Reas. 13(4), 359–375 (1995)

  54. S.B. Cho, J.H. Kim, Combining multiple neural networks by fuzzy integral for robust classification. IEEE Trans. Syst. Man Cybern. 25(2), 380–384 (1995)

  55. G.J. Scott, R.A. Marcum, C.H. Davis, T.W. Nivin, Fusion of deep convolutional neural networks for land cover classification of high-resolution imagery. IEEE Geosci. Remote Sens. Lett. (2017)

  56. S.R. Price, B. Murray, L. Hu, D.T. Anderson, T.C. Havens, R.H. Luke, J.M. Keller, Multiple kernel based feature and decision level fusion of IECO individuals for explosive hazard detection in FLIR imagery, in SPIE, vol. 9823 (2016), pp. 98231G-98231G-11. https://doi.org/10.1117/12.2223297

  57. R.E. Smith, D.T. Anderson, A. Zare, J.E. Ball, B. Alvey, J.R. Fairley, S.E. Howington, Genetic programming based Choquet integral for multi-source fusion, in IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2017)

  58. R.E. Smith, D.T. Anderson, J.E. Ball, A. Zare, B. Alvey, Aggregation of Choquet integrals in GPR and EMI for handheld platform-based explosive hazard detection, in Proceedings of the SPIE 10182, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXII (2017)

  59. H. Tahani, J. Keller, Information fusion in computer vision using the fuzzy integral. IEEE Trans. Syst. Man Cybern. 20, 733–741 (1990)

  60. M. Grabisch, J.-M. Nicolas, Classification by fuzzy integral: performance and tests. Fuzzy Sets Syst. 65(2–3), 255–271 (1994)

  61. M. Grabisch, M. Sugeno, Multi-attribute classification using fuzzy integral, in IEEE International Conference on Fuzzy Systems, 1992 (IEEE, New York, 1992), pp. 47–54

  62. A. Mendez-Vazquez, P. Gader, J.M. Keller, K. Chamberlin, Minimum classification error training for Choquet integrals with applications to landmine detection. IEEE Trans. Fuzzy Syst. 16(1), 225–238 (2008). https://doi.org/10.1109/TFUZZ.2007.902024. ISSN: 1063-6706

  63. J.M. Keller, P. Gader, H. Tahani, J. Chiang, M. Mohamed, Advances in fuzzy integration for pattern recognition. Fuzzy Sets Syst. 65(2–3), 273–283 (1994)

  64. P.D. Gader, J.M. Keller, B.N. Nelson, Recognition technology for the detection of buried land mines. IEEE Trans. Fuzzy Syst. 9(1), 31–43 (2001)

  65. G.J. Scott, D.T. Anderson, Importance-weighted multi-scale texture and shape descriptor for object recognition in satellite imagery, in 2012 IEEE International Geoscience and Remote Sensing Symposium (2012), pp. 79–82. https://doi.org/10.1109/IGARSS.2012.6351632

  66. M. Grabisch, The application of fuzzy integrals in multicriteria decision making. Eur. J. Oper. Res. 89(3), 445–456 (1996)

  67. C. Labreuche, Construction of a Choquet integral and the value functions without any commensurateness assumption in multi-criteria decision making, in EUSFLAT Conference (2011), pp. 90–97

  68. D.T. Anderson, P. Elmore, F. Petry, T.C. Havens, Fuzzy Choquet integration of homogeneous possibility and probability distributions. Inf. Sci. 363, 24–39, (2016). https://doi.org/10.1016/j.ins.2016.04.043. http://www.sciencedirect.com/science/article/pii/S0020025516302961. ISSN: 0020-0255

  69. D.T. Anderson, T.C. Havens, C. Wagner, J.M. Keller, M.F. Anderson, D.J. Wescott, Extension of the fuzzy integral for general fuzzy set-valued information. IEEE Trans. Fuzzy Syst. 22(6), 1625–1639 (2014). https://doi.org/10.1109/TFUZZ.2014.2302479. ISSN: 1063-6706

  70. M. Anderson, D.T. Anderson, D.J. Wescott, Estimation of adult skeletal age-at-death using the Sugeno fuzzy integral. Am. J. Phys. Anthropol. 142(1), 30–41 (2010)

  71. L. Tomlin, D.T. Anderson, C. Wagner, T.C. Havens, J.M. Keller, Fuzzy integral for rule aggregation in fuzzy inference systems (Springer International Publishing, Berlin, 2016), pp. 78–90. https://doi.org/10.1007/978-3-319-40596-4_8

  72. A.J. Pinar, J. Rice, L. Hu, D.T. Anderson, T.C. Havens, Efficient multiple kernel classification using feature and decision level fusion. IEEE Trans. Fuzzy Syst. PP(99), 1 (2016). ISSN: 1063-6706. https://doi.org/10.1109/TFUZZ.2016.2633372

  73. A. Pinar, T.C. Havens, D.T. Anderson, L. Hu, Feature and decision level fusion using multiple kernel learning and fuzzy integrals, in 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2015), pp. 1–7. https://doi.org/10.1109/FUZZ-IEEE.2015.7337934

  74. L. Hu, D.T. Anderson, T.C. Havens, J.M. Keller, Efficient and scalable nonlinear multiple kernel aggregation using the Choquet integral, in Information Processing and Management of Uncertainty in Knowledge-Based Systems: 15th International Conference, IPMU, Montpellier, France, July 15–19, 2014, Proceedings, Part I (Springer International Publishing, Berlin, 2014), pp. 206–215

  75. L. Hu, D.T. Anderson, T.C. Havens, Multiple kernel aggregation using fuzzy integrals, in 2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2013), pp. 1–7. https://doi.org/10.1109/FUZZ-IEEE.2013.6622312

  76. X. Du, A. Zare, J.M. Keller, D.T. Anderson, Multiple instance Choquet integral for classifier fusion, in 2016 IEEE Congress on Evolutionary Computation (CEC) (2016), pp. 1054–1061. https://doi.org/10.1109/CEC.2016.7743905

  77. M. Al Boni, D.T. Anderson, R.L. King, Hybrid measure of agreement and expertise for ontology matching in lieu of a reference ontology. Int. J. Intell. Syst. 31(5), 502–525 (2016). https://doi.org/10.1002/int.21792. ISSN: 1098-111X

  78. M.A. Islam, D.T. Anderson, F. Petry, D. Smith, P. Elmore, The fuzzy integral for missing data, in 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2017), pp. 1–8. https://doi.org/10.1109/FUZZ-IEEE.2017.8015475

  79. M. Sugeno, Theory of fuzzy integrals and its applications. Ph.D. thesis, Tokyo Institute of Technology (1974)

  80. M.A. Islam, D.T. Anderson, A.J. Pinar, T.C. Havens, Data-driven compression and efficient learning of the Choquet Integral. IEEE Trans. Fuzzy Syst. PP(99), 1 (2017). https://doi.org/10.1109/TFUZZ.2017.2755002. ISSN: 1063-6706

  81. J.M. Keller, J. Osborn, Training the fuzzy integral. Int. J. Approx. Reas. 15(1), 1–24 (1996)

  82. D.T. Anderson, S.R. Price, T.C. Havens, Regularization-based learning of the Choquet integral, in 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2014), pp. 2519–2526. https://doi.org/10.1109/FUZZ-IEEE.2014.6891630

  83. A.J. Pinar, D.T. Anderson, T.C. Havens, A. Zare, T. Adeyeba, Measures of the Shapley index for learning lower complexity fuzzy integrals. Granul. Comput. 1–17 (2017)

  84. T. Murofushi, S. Soneda, Techniques for reading fuzzy measures (iii): interaction index, in 9th Fuzzy System Symposium (Sapporo, Japan, 1993)

  85. M. Grabisch, M. Roubens, An axiomatic approach to the concept of interaction among players in cooperative games. Int. J. Game Theory 28(4), 547–565 (1999)

  86. M. Grabisch, An axiomatization of the Shapley value and interaction index for games on lattices, in SCIS-ISIS (2004)

  87. S.R. Price, D.T. Anderson, C. Wagner, T.C. Havens, J.M. Keller, Indices for introspection on the Choquet integral, in Advance Trends in Soft Computing (Springer, Berlin, 2014), pp. 261–271

  88. K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition (2015). arXiv preprint arXiv:1512.03385

  89. G.J. Scott, M.R. England, W.A. Starms, R.A. Marcum, C.H. Davis, Training deep convolutional neural networks for land-cover classification of high-resolution imagery. IEEE Geosci. Remote Sens. Lett. 14(4), 549–553 (2017)

  90. S.D. Newsam, UC Merced Land Use Dataset (2010), http://vision.ucmerced.edu/datasets/landuse.html

  91. Y. Yang, S. Newsam, Bag-of-visual-words and spatial extensions for land-use classification, in ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM GIS) (2010), 666 p

  92. C. Chen, B. Zhang, H. Su, W. Li, L. Wang, Land-use scene classification using multi-scale completed local binary patterns. Signal Image Video Proc. 10(4), 745–752 (2016)

  93. D. Dai, W. Yang, Satellite image classification via two-layer sparse coding with biased image representation. IEEE Geosci. Remote Sens. Lett. 8(1), 173–176 (2011)

  94. D.T. Anderson, M. Islam, R. King, N.H. Younan, J.R. Fairley, S. Howington, F. Petry, P. Elmore, A. Zare, Binary fuzzy measures and Choquet integration for multi-source fusion, in 6th International Conference on Military Technologies (2017)

Author information

Corresponding author

Correspondence to Derek T. Anderson.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Anderson, D.T., Scott, G.J., Islam, M., Murray, B., Marcum, R. (2018). Fuzzy Choquet Integration of Deep Convolutional Neural Networks for Remote Sensing. In: Pedrycz, W., Chen, SM. (eds) Computational Intelligence for Pattern Recognition. Studies in Computational Intelligence, vol 777. Springer, Cham. https://doi.org/10.1007/978-3-319-89629-8_1

  • DOI: https://doi.org/10.1007/978-3-319-89629-8_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-89628-1

  • Online ISBN: 978-3-319-89629-8

  • eBook Packages: Engineering, Engineering (R0)
