Abstract
In this paper we introduce a measure, based on mutual information, for the rate at which a system generates information about its failure time. For a system with multiple components, what is actually measured is the interaction between one component of the system and the rest. Our definition is only a slight variation of existing classical definitions in the information theory literature. We study the properties of the proposed measure and compute the information for several hypothetical systems.
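To make the idea concrete, the following is a minimal sketch (not the paper's own construction) of estimating the mutual information between one component's lifetime and the system's failure time. It assumes a hypothetical two-component series system with unit-rate exponential lifetimes, and uses a simple plug-in estimate over equal-width bins; the function and parameter names are illustrative.

```python
import math
import random

def mutual_information(pairs, bins=8, lo=0.0, hi=3.0):
    """Plug-in estimate of I(X; T) from samples, discretized into equal-width bins.

    Values outside [lo, hi] are clipped into the edge bins. The estimate is a
    KL divergence between the empirical joint and the product of empirical
    marginals, so it is always nonnegative.
    """
    def bucket(v):
        return min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))

    n = len(pairs)
    joint, px, pt = {}, {}, {}
    for x, t in pairs:
        bx, bt = bucket(x), bucket(t)
        joint[(bx, bt)] = joint.get((bx, bt), 0) + 1
        px[bx] = px.get(bx, 0) + 1
        pt[bt] = pt.get(bt, 0) + 1
    return sum(
        (c / n) * math.log((c / n) / ((px[bx] / n) * (pt[bt] / n)))
        for (bx, bt), c in joint.items()
    )

random.seed(0)
# Hypothetical series system: it fails at T = min(X1, X2), so the
# lifetime X1 of the first component carries information about T.
samples = []
for _ in range(20000):
    x1, x2 = random.expovariate(1.0), random.expovariate(1.0)
    samples.append((x1, min(x1, x2)))
mi = mutual_information(samples)
print(mi)  # strictly positive: knowing X1 reduces uncertainty about T
```

Because T coincides with X1 whenever X1 < X2, the dependence is strong and the estimated information is well above zero; for independent inputs the same estimator would hover near zero (up to discretization bias).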
Additional information
This research was partially supported by the U.S. Air Force Office of Scientific Research, Grant AFOSR-89-0402.
Cite this article
Ebrahimi, N. Information theory and the failure time of a system. Ann Inst Stat Math 44, 463–477 (1992). https://doi.org/10.1007/BF00050699