Blending Descriptive and Numeric Analysis in Human Reliability Design
Scenario-based design allows early elicitation of requirements and is helpful during the design phase of system development. Cycles of iteration are typically used to refine a design so that it more closely meets its requirements. Such refinements are expressed in terms of the original requirements specification and any new requirements that have been identified. However, not all defined requirements are equally essential. Although descriptive methods for scenario analysis can highlight new requirements, evaluating the impact of those requirements can be difficult.
In this paper, we exemplify this problem and investigate how numeric methods can be used to highlight the impact of consequences identified by descriptive scenario analysis. An example from the context of human reliability analysis is presented.
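To illustrate the kind of numeric method referred to here, the following is a minimal sketch of a two-node Bayesian belief network that marginalises human error probability over a design-context variable. The node names and all probability values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of a discrete Bayesian belief network with two nodes:
# interface quality (context) -> operator error. All numbers below are
# hypothetical, chosen only to show how a numeric model can compare
# design alternatives identified by descriptive scenario analysis.

def marginal_error_probability(p_context, p_error_given_context):
    """Compute P(error) = sum_c P(context=c) * P(error | context=c)."""
    return sum(p_context[c] * p_error_given_context[c] for c in p_context)

# Assumed prior over interface quality before and after a redesign.
baseline = {"good": 0.6, "poor": 0.4}
redesign = {"good": 0.9, "poor": 0.1}

# Assumed conditional probability of operator error given quality.
p_error = {"good": 0.01, "poor": 0.10}

p_before = marginal_error_probability(baseline, p_error)
p_after = marginal_error_probability(redesign, p_error)
print(round(p_before, 3))  # 0.046
print(round(p_after, 3))   # 0.019
```

Under these assumed numbers, the model quantifies the redesign's impact as a drop in marginal error probability, which is the sort of numeric evidence that can rank the consequences a descriptive analysis surfaces.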
Keywords: Human Error, Design Issue, Expert Judgement, Descriptive Method, Bayesian Belief Network