# Markovian processes with identifiable states: General considerations and application to all-or-none learning


## Abstract

It often happens that a theory specifies some variables or states which cannot be identified completely in an experiment. When this happens, there are important questions as to whether the experiment is relevant to certain assumptions of the theory. Some of these questions are taken up in the present article, where a method is developed for describing the implications of a theory for an experiment. The method consists of constructing a second theory with all of its states identifiable in the outcome-space of the experiment. The method can be applied (i.e., an equivalent identifiable theory exists) whenever a theory specifies a probability function on the sample-space of possible outcomes of the experiment. An interesting relationship between lumpability of states and recurrent events plays an important role in the development of the identifiable theory. An identifiable theory of an experiment can be used to investigate relationships among different theories of the experiment. As an example, an identifiable theory of all-or-none learning is developed, and it is shown that a large class of all-or-none theories is equivalent for experiments in which a task is learned to a strict criterion.
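The distinction between latent and identifiable states can be illustrated with a minimal sketch of a one-element all-or-none model. The latent state (unlearned vs. learned) is not observable; only the trial outcomes (correct vs. error) are. The parameter names `c` (learning rate) and `g` (guessing probability) are illustrative assumptions, not notation from the article:

```python
import random

def simulate_all_or_none(n_trials, c=0.3, g=0.25, rng=None):
    """Simulate one subject under a one-element all-or-none model.

    Latent state: 'U' (unlearned) or 'L' (learned) -- not identifiable
    from the data.  Observable outcome per trial: 1 = correct, 0 = error.
    """
    rng = rng or random.Random()
    state = 'U'
    outcomes = []
    for _ in range(n_trials):
        if state == 'L':
            outcomes.append(1)  # learned: responds correctly with certainty
        else:
            outcomes.append(1 if rng.random() < g else 0)  # guessing
            if rng.random() < c:
                state = 'L'  # all-or-none jump to the learned state
    return outcomes

def p_error(n, c=0.3, g=0.25):
    """Theoretical P(error on trial n): still unlearned after n-1
    opportunities to learn, and the guess is wrong."""
    return (1 - c) ** (n - 1) * (1 - g)
```

The observable error sequence is a probabilistic function of the latent Markov chain; an identifiable theory in the article's sense would be stated directly on the outcome sequence rather than on the latent states.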

## Keywords

Public Policy · Present Article · Statistical Theory · Markovian Process · Large Class
