Accident Compensation Corporation. (2018). Improving the claim registration and approval process. Version 1.0. 4 July 2018.
AI Now. (2018). Litigating algorithms: Challenging government use of algorithmic decision systems. New York: AI Now Institute.
Aletras, N., Tsarapatsanis, D., Preotiuc-Pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science,2(93), 1–19.
Amoroso, N., La Rocca, M., Bruno, S., Maggipinto, T., Monaco, A., Bellotti, R., Tangaro, S., & the Alzheimer’s Disease Neuroimaging Initiative. (2017). Brain structural connectivity atrophy in Alzheimer’s disease. arXiv:1709.02369v1.
Bagheri, N., & Jamieson, G. A. (2004). Considering subjective trust and monitoring behavior in assessing automation-induced “complacency”. In D. A. Vicenzi, M. Mouloua, & O. A. Hancock (Eds.), Human performance, situation awareness, and automation: Current research and trends (pp. 54–59). Mahwah, NJ: Erlbaum.
Bainbridge, L. (1983). Ironies of automation. Automatica,19(6), 775–779.
Banks, V. A., Eriksson, A., O’Donoghue, J., & Stanton, N. A. (2018a). Is partially automated driving a bad idea? Observations from an on-road study. Applied Ergonomics,68, 138–145.
Banks, V. A., Plant, K. L., & Stanton, N. A. (2018b). Driver error or designer error: Using the perceptual cycle model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016. Safety Science,108, 278–285.
Baxter, G., Rooksby, J., Wang, Y., & Khajeh-Hosseini, A. (2012). The ironies of automation…still going strong at 30? In Proceedings of the European Conference on Cognitive Ergonomics (ECCE) (pp. 65–71). Edinburgh, Aug.
Blomberg, T., Bales, W., Mann, K., Meldrum, R., & Nedelec, J. (2010). Validation of the COMPAS risk assessment classification instrument. Center for Criminology and Public Policy Research, College of Criminology and Criminal Justice, Florida State University.
Brynjolfsson, E., & McAfee, A. (2017). Machine platform crowd: Harnessing our digital future. New York: Norton.
Bygrave, L. A. (2017). Hardwiring privacy. In R. Brownsword, E. Scotford, & K. Yeung (Eds.), The Oxford handbook of law, regulation, and technology (pp. 754–775). New York: Oxford University Press.
Cebon, D. (2015). Responses to autonomous vehicles. Ingenia,62, 10.
Cummings, M. L. (2004). Automation bias in intelligent time critical decision support systems. AIAA Intelligent Systems Technical Conference. https://doi.org/10.2514/6.2004-6313.
Cunningham, M., & Regan, M. (2018). Automated vehicles may encourage a new breed of distracted drivers. The Conversation, Sep. 25.
Damaška, M. R. (1997). Evidence law adrift. New Haven: Yale University Press.
Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences,108(17), 6889–6892.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2016). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science,64(3), 1155–1170.
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances,4, 1–5.
Edwards, E., & Lees, F. P. (Eds.). (1974). The human operator in process control. London: Taylor and Francis.
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for. Duke Law and Technology Review,16(1), 18–84.
Endsley, M. R. (2017). From here to autonomy: Lessons learned from human–automation research. Human Factors,59(1), 5–27.
Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.
Fildes, R., Goodwin, P., Lawrence, M., & Nikolopoulos, K. (2009). Effective forecasting and judgmental adjustments: An empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting,25, 3–23.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington D.C.: National Research Council.
Greenlee, E. T., DeLucia, P. R., & Newton, D. C. (2018). Driver vigilance in automated vehicles: Hazard detection failures are a matter of time. Human Factors,60(4), 465–476.
Hatvany, J., & Guedj, R. A. (1982). Man-machine interaction in computer-aided design systems. Proceedings of IFAC/IFIP/IFORS/IEA Conference on Analysis, design and evaluation of man-machine systems. Oxford: Pergamon Press.
House of Lords Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able?
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2017). Ethically aligned design (version 2). https://ethicsinaction.ieee.org.
Johannsen, G. (1982). Man-machine systems: Introduction and background. Proceedings of IFAC/IFIP/IFORS/IEA Conference on Analysis, design and evaluation of man-machine systems, Baden-Baden, Sept. Oxford: Pergamon Press.
Kelley, C. R. (1968). Manual and automatic control. New York: Wiley.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human decisions and machine predictions. Quarterly Journal of Economics,133(1), 237–293.
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, May 23, 2016.
Margulies, F., & Zemanek, H. (1982). Man’s role in man-machine systems. Proceedings of IFAC/IFIP/IFORS/IEA Conference on Analysis, design and evaluation of man-machine systems. Oxford: Pergamon Press.
Marks, A., Bowling, B., & Keenan, C. (2017). Automated justice? Technology, crime, and social control. In R. Brownsword, E. Scotford, & K. Yeung (Eds.), The Oxford handbook of law, regulation, and technology (pp. 705–730). New York: Oxford University Press.
Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors,38, 311–322.
Moray, N. (Ed.). (1979). Mental workload: Its theory and measurement. New York: Plenum Press.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood-Cliffs, NJ: Prentice Hall.
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors,52(3), 381–410.
Pazouki, K., Forbes, N., Norman, R. A., & Woodward, M. D. (2018). Investigation on the impact of human–automation interaction in maritime operations. Ocean Engineering,153, 297–304.
Pohl, J. (2008). Cognitive elements of human decision making. In G. Phillips-Wren, N. Ichalkaranje, & L. C. Jain (Eds.), Intelligent decision making: An AI-based approach (pp. 41–76). Berlin: Springer.
Rouse, W. B. (1981). Human–computer interaction in the control of dynamic systems. ACM Computing Surveys,13, 71–99.
Rouse, W. B. (1982). Models of human problem solving: Detection, diagnosis, and compensation for system failures. Proceedings of IFAC/IFIP/IFORS/IEA Conference on Analysis, design and evaluation of man-machine systems. Oxford: Pergamon Press.
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI,5, 15. https://doi.org/10.3389/frobt.2018.00015.
Sheridan, T. B., & Ferrell, W. R. (1974). Man-machine systems: Information, control, and decision models of human performance. Cambridge, MA: MIT Press.
Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation bias. International Journal of Human–Computer Studies,52, 701–717.
Society of Automotive Engineers. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. J3016_201609. Warrendale: SAE International.
Stanton, N. A. (2015). Responses to autonomous vehicles. Ingenia,62, 9.
Stanton, N. A. (2016). Distributed situation awareness. Theoretical Issues in Ergonomics Science,17(1), 1–7.
Stanton, N. A., & Marsden, P. (1996). From fly-by-wire to drive-by-wire: Safety implications of vehicle automation. Safety Science,24(1), 35–49.
Strauch, B. (2018). Ironies of automation: Still unresolved after all these years. IEEE Transactions on Human–Machine Systems,48(5), 419–433.
Villani, C. (2018). For a meaningful artificial intelligence: Towards a French and European strategy. https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf.
Walker, G. H., Stanton, N. A., & Salmon, P. M. (2015). Human factors in automotive engineering and technology. Surrey: Ashgate.
Wickens, C. D., & Kessel, C. (1979). The effect of participatory mode and task workload on the detection of dynamic system failures. IEEE Transactions on Systems, Man, and Cybernetics,9(1), 24–31.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics,23(10), 995–1011.
Williges, R. C., & Williges, B. H. (1982). Human–computer dialogue design considerations. Proceedings of IFAC/IFIP/IFORS/IEA Conference on Analysis, design and evaluation of man-machine systems. Oxford: Pergamon Press.
Zerilli, J. (2017). Multiple realization and the commensurability of taxonomies. Synthese,196(8), 3337–3353.
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy and Technology,32(4), 661–683.