
Methodologies for Continuous, Life-Long Machine Learning for AI Systems

  • James A. Crowder
  • John Carbone
  • Shelli Friess
Chapter

Abstract

Current machine learning architectures, strategies, and methods are typically static and non-interactive, making them incapable of adapting to changing and/or heterogeneous data environments, either in real-time or in near-real-time. Typically, in real-time applications, large amounts of disparate data must be processed and learned from, and actionable intelligence provided in the form of recognition of evolving activities. Applications like Rapid Situational Awareness (RSA) used to support critical systems (e.g., battlefield management and control) require critical analytical assessment and decision support by automatically processing massive and increasing amounts of data to recognize evolving events, generate alerts, and provide actionable intelligence to operators and analysts (Crowder, Machine learning: Intuition (concept learning) in hybrid neural systems. NSA Technical Paper-CON-SP-0014-2002-06, Fort Meade, 2002; and Crowder, Cognitive systems for data fusion. In: Proceedings of the 2005 PSTN Processing Technology Conference, Ft. Wayne, 2005).

In this chapter, we propose methods and strategies for continuously adapting, life-long machine learning within a self-learning and self-evaluating environment to enhance real-time/near-real-time support for mission-critical systems. We describe the notion of continuous adaptation, which requires an augmented paradigm that enhances traditional probabilistic machine learning. Specifically, such systems must operate in harsh/soft unknown environments without a priori statistically trained neural networks or fully developed learning rules for situations that were never anticipated. This leads to a hypothesis requiring new machine learning processes in which abductive learning is applied. We utilize a variety of unsupervised/self-supervised learning techniques and statistical/fuzzy models for entity, relationship, and descriptor extraction. We also employ topic and group discovery, together with abductive inference algorithms, to expand the system's aperture and envision what outlying factors could also have caused the current observations. Once extended plausible explanations are found, we show how a system uses these mechanisms to learn new or modified causal relationships and to extend, reinterpret, or create new situation-driven memories. One major issue that continuous learning raises, discussed throughout the book, is how to test a system that continually changes and, hopefully, improves over time.
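As a rough illustration of the abductive step described above, the following Python sketch ranks candidate hypotheses by how well they explain a set of observations, penalizing complexity in the spirit of Occam abduction (cf. reference 5). The names (Hypothesis, occam_score, best_explanations), the example events, and the scoring formula are illustrative assumptions, not the chapter's implementation.

    # Illustrative sketch only: rank candidate explanations for a set of
    # observations, Occam-style (favor simple, plausible, high-coverage hypotheses).
    # Names and the scoring formula are assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        name: str
        explains: set        # observations this hypothesis would account for
        prior: float = 0.5   # prior plausibility in [0, 1]
        complexity: int = 1  # number of assumed causes (fewer is simpler)

    def occam_score(h: Hypothesis, observations: set) -> float:
        """Explanatory coverage weighted by prior plausibility, penalized by complexity."""
        coverage = len(h.explains & observations) / max(len(observations), 1)
        return coverage * h.prior / h.complexity

    def best_explanations(observations: set, candidates: list, top_k: int = 3) -> list:
        """Return the highest-scoring hypotheses; these seed further learning."""
        return sorted(candidates, key=lambda h: occam_score(h, observations),
                      reverse=True)[:top_k]

    # Example: two candidate explanations for a pair of observed events.
    obs = {"sensor_dropout", "track_divergence"}
    candidates = [
        Hypothesis("jamming", {"sensor_dropout", "track_divergence"}, prior=0.3, complexity=2),
        Hypothesis("hardware_fault", {"sensor_dropout"}, prior=0.6, complexity=1),
    ]
    for h in best_explanations(obs, candidates):
        print(h.name, round(occam_score(h, obs), 3))

In a continuously learning system, the top-ranked explanations would not simply be reported; they would be used to add or revise causal links and to extend or reinterpret situation-driven memories, as outlined above.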

Keywords

Life-long machine learning • Artificial neurogenesis • Implicit learning • Explicit learning • Knowledgebase

References

  1. Crowder, J. A. (2010). The continuously recombinant genetic, neural fiber network. Proceedings of the AIAA Infotech@Aerospace-2010, Atlanta, GA.
  2. Crowder, J., & Carbone, J. (2011). Recombinant knowledge relativity threads for contextual knowledge storage. Proceedings of the 12th International Conference on Artificial Intelligence, Las Vegas, NV.
  3. Stadler, M. (1997). Distinguishing implicit and explicit learning. Psychonomic Bulletin & Review, 4(1), 5–62.
  4. Crowder, J. A. (2010). Flexible object architectures for hybrid neural processing systems. Proceedings of the 11th International Conference on Artificial Intelligence, Las Vegas, NV.
  5. Crowder, J. (2016). AI inferences utilizing Occam abduction. Proceedings of the 2016 North American Fuzzy Information Processing Symposium, University of Texas, El Paso.
  6. Crowder, J., & Carbone, J. (2017). Abductive artificial intelligence learning models. Proceedings of the 2017 International Conference on Artificial Intelligence, Las Vegas, NV.
  7. Carbone, J. N., & Crowder, J. (2011). Transdisciplinary synthesis and cognition frameworks. Proceedings of the Society for Design and Process Science Conference 2011, Jeju Island, South Korea.
  8. Crowder, J. (2005). Cognitive systems for data fusion. Proceedings of the 2005 PSTN Processing Technology Conference, Ft. Wayne, IN.
  9. Jahanshahi, W. (2007). The striatum and probabilistic implicit sequence learning. Brain Research, 1137, 117–130.
  10. Crowder, J. A. (2002). Machine learning: Intuition (concept learning) in hybrid neural systems. NSA Technical Paper-CON-SP-0014-2002-06, Fort Meade, MD.
  11. Franklin, S. (2005). Cognitive robots: Perceptual associative memory and learning. Proceedings of the 2005 IEEE International Workshop on Robot and Human Interaction.
  12. Crowder, J. (2004). Multi-sensor fusion utilizing dynamic entropy and fuzzy systems. Proceedings of the Processing Systems Technology Conference, Tucson, AZ.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • James A. Crowder (1)
  • John Carbone (2)
  • Shelli Friess (3)
  1. Colorado Engineering Inc., Colorado Springs, USA
  2. Forcepoint, Austin, USA
  3. Walden University, Minneapolis, USA
