Introduction to Ray Solomonoff 85th Memorial Conference

  • David L. Dowe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7070)

Abstract

This piece introduces the proceedings of the Ray Solomonoff 85th Memorial Conference, paying tribute to the life and work of Ray Solomonoff and mentioning the other papers from the conference.

Keywords

Ray Solomonoff · Solomonoff · Solomonoff memorial · Solomonoff theory of prediction · algorithmic probability · ALP · convergence · completeness · algorithmic information theory · AIT · Kolmogorov complexity · non-parametrics · training sequences · technological singularity · realization of artificial intelligence · dangers



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • David L. Dowe
  1. Computer Science and Software Engineering, Clayton School of Information Technology, Monash University, Australia
