Technology, Knowledge and Learning, Volume 22, Issue 3, pp 427–442

Designing for Data with Ask Dr. Discovery: Design Approaches for Facilitating Museum Evaluation with Real-Time Data Mining

  • Brian C. Nelson
  • Cassie Bowman
  • Judd Bowman
Original research


Ask Dr. Discovery is an NSF-funded study addressing the need for ongoing, large-scale museum evaluation while investigating new ways to encourage museum visitors to engage deeply with museum content. To realize these aims, we are developing and implementing a mobile app with two parts: (1) a front-end virtual scientist called Dr. Discovery (Dr. D), used by museum visitors, that doubles as an unobtrusive data gatherer, and (2) a back-end analytics portal to be mined by museum staff, evaluators, and researchers. With the aid of our museum partners, we are developing this app to function as a platform for informal STEM education, research, and data-driven decision-making by museum staff. The Dr. D app has been designed to engage museum visitors while connecting with an analytic system that makes sense, in real time, of the large amounts of data produced by visitors' use of the app. The analytic system helps museum staff access and interpret ongoing evaluation data, regardless of their experience or their museum's resources, informing the practice of professionals at the front lines of informal STEM education in diverse communities. The design of the Dr. D app incorporates open-source analytic tools that make the gathering and interpretation of contextual information from visitors' app use accessible to museum staff and educators, building their capacity for using data in their day-to-day work. The same tools are being used by our research team to probe questions about informal learning and motivation, effective application of large datasets for museum evaluation, and ways to encourage and understand use of mobile virtual experiences. In this paper, we describe our theory-based design of the Dr. D app and its data analytics, and report findings from initial user testing with our museum partners.
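The paper does not publish the app's data schema, but the back-end portal described above is, in essence, a real-time aggregator of visitor interaction events. As a purely illustrative sketch (all names, fields, and exhibit labels here are hypothetical, not taken from the Dr. D implementation), such a pipeline might roll visitor question events up into per-exhibit counts for a staff-facing dashboard:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AppEvent:
    """One visitor interaction captured by the app (hypothetical schema)."""
    visitor_id: str
    exhibit: str
    question: str

def aggregate_by_exhibit(events):
    """Summarize an event stream as question counts per exhibit --
    the kind of rollup an evaluation dashboard might display."""
    return dict(Counter(e.exhibit for e in events))

# Example stream of three visitor questions at two exhibits
events = [
    AppEvent("v1", "Mars Rover", "How fast does it drive?"),
    AppEvent("v2", "Mars Rover", "What powers it?"),
    AppEvent("v3", "Telescope", "How far can it see?"),
]
print(aggregate_by_exhibit(events))  # {'Mars Rover': 2, 'Telescope': 1}
```

A production system would of course stream events continuously and compute richer measures (topics, dwell time, motivation indicators), but the shape of the computation, events in, interpretable summaries out, is the same.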


Keywords: Informal STEM learning · Museum evaluation · Data mining · Instructional design



Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  1. Arizona State University, Tempe, USA
