Abstract
Among the concerns people have about AI are its possible misuse, its effect on employment, and its potential for dehumanisation. Contrary to what most people believe and fear, AI can foster respect for the enormous power and complexity of the human mind. It is potentially very dangerous for users in the public domain to impute far more inferential power to computer systems than they actually have, simply because those systems appear common-sensical. However impressive AI programs may be, we must remain aware of their limitations and must not abdicate human responsibility to such programs.
Additional information
This is an edited transcript of a talk given to the AI for Society Conference, Brighton Polytechnic.
Cite this article
Boden, M. Artificial intelligence: Cannibal or missionary? AI & Soc 1, 17–23 (1987). https://doi.org/10.1007/BF01905886