Abstract
This chapter aims to indicate some of the potential benefits and dangers inherent in the technology of Knowledge-Based Systems (KBSs) and to suggest courses of action that would both minimize risk and promote social benefit in future developments. KBSs offer cheap and ready access to knowledge; efficient management and organization of information; conversion of information into usable knowledge to guide human action; and knowledge and advice relevant to the context in which it is needed. There are, however, potential dangers if a user is unfamiliar with the assumptions underlying KBS structures of reasoning and of representing knowledge: the danger of a program becoming incomprehensible or unreliable in an emergency; the possibility of placing too much faith in the abilities of a KBS and thereby losing the capacity for critical evaluation of its judgments; the inevitable uncertainty or inexactitude programmed into KBSs; the economic implications of an uneven distribution of new technology; and the likelihood of misinterpretation or technical failure when autonomous systems automatically activate machinery (e.g. military/defence systems). The chapter argues for the regulation of KBS construction, particularly for those systems designed for public or “non-expert” use. Suggestions include programming into each system a clear qualification of its answers and requests for data, setting up a workable Code of Conduct for AI practitioners, and constant supervision of the assumptions and activities of each computerized system. Since KBSs work by consulting an internal model of the world (supplied to them by a human programmer), their decisions must be challenged or ignored when necessary and always treated with an appropriate degree of scepticism.
Copyright information
© 1992 Springer-Verlag London Limited
Cite this chapter
Sharples, M. (1992). Controlling the Application of Knowledge-Based Systems. In: Göranzon, B., Florin, M. (eds) Skill and Education: Reflection and Experience. Artificial Intelligence and Society. Springer, London. https://doi.org/10.1007/978-1-4471-1983-8_10
Publisher Name: Springer, London
Print ISBN: 978-3-540-19758-4
Online ISBN: 978-1-4471-1983-8
eBook Packages: Springer Book Archive