Psychometrika, Volume 29, Issue 4, pp 309–333

Markovian processes with identifiable states: General considerations and application to all-or-none learning

  • James G. Greeno
  • Theodore E. Steiner


It often happens that a theory specifies variables or states which cannot be identified completely in an experiment. When this happens, important questions arise as to whether the experiment is relevant to certain assumptions of the theory. Some of these questions are taken up in the present article, where a method is developed for describing the implications of a theory for an experiment. The method consists of constructing a second theory with all of its states identifiable in the outcome-space of the experiment. The method can be applied (i.e., an equivalent identifiable theory exists) whenever a theory specifies a probability function on the sample-space of possible outcomes of the experiment. An interesting relationship between lumpability of states and recurrent events plays an important role in the development of the identifiable theory. An identifiable theory of an experiment can be used to investigate relationships among different theories of the experiment. As an example, an identifiable theory of all-or-none learning is developed, and it is shown that a large class of all-or-none theories is equivalent for experiments in which a task is learned to a strict criterion.
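The abstract gives no implementation, but the all-or-none model it refers to is conventionally stated as a two-state Markov chain: an item in the unlearned state is answered correctly only by guessing and may move to the learned (absorbing) state after each trial, while a learned item is always answered correctly. The sketch below, with assumed parameter names c (learning probability) and g (guessing probability), simulates a response sequence under these standard assumptions and gives the corresponding closed-form error probability; it is illustrative only and is not taken from the article itself.

```python
import random

def simulate_all_or_none(c, g, n_trials, rng):
    """Simulate one subject's correct/incorrect responses under the
    standard all-or-none model: an unlearned item is correct only by
    guessing (prob. g) and moves to the absorbing learned state with
    prob. c after each trial; a learned item is always correct."""
    learned = False
    responses = []
    for _ in range(n_trials):
        responses.append(True if learned else rng.random() < g)
        if not learned and rng.random() < c:
            learned = True
    return responses

def error_prob(c, g, n):
    """Closed-form P(error on trial n) = (1 - g) * (1 - c)**(n - 1):
    an error requires remaining unlearned through n - 1 trials and
    then failing to guess correctly."""
    return (1 - g) * (1 - c) ** (n - 1)
```

Note that only the response sequence is observable, not the underlying state, which is exactly the identifiability problem the article addresses: distinct parameterizations of the hidden chain can induce the same probability function on the observable outcome-space.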







Copyright information

© Psychometric Society 1964

Authors and Affiliations

  • James G. Greeno — Indiana University, USA
  • Theodore E. Steiner — Indiana University, USA
