Comparison of MCDA Paradigms
The underlying concepts of MAUT, SMART, AHP, preference cones, ZAPROS, and outranking methods are compared. Learning systems are considered. The learning view is that decision makers initially do not fully understand all of the criteria that are important. Therefore, rather than uncovering an underlying utility function, what must be uncovered are the full ramifications involved in selecting one alternative over another. This paradigm can involve an evolutionary problem, where criteria can be added or discarded during the analysis. Methods are also reviewed with respect to their psychological validity in generating input data. Past experiments conducted by the authors are reviewed, with conclusions drawn relative to subject comfort in using each method. Subjects typically make errors: they rate scores inconsistently across systems and occasionally reverse the relative importance of criteria from one system to another. This emphasizes the need to be careful with the input to decision models, and strengthens the argument for more robust input information. Furthermore, systems based on the same model have been found to yield different results for some users. In one study, groups of US and Russian students were compared. Each group found it more comfortable to use systems developed within its own culture.
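The inconsistency errors described above are exactly what AHP-style consistency checking is designed to surface. The following sketch, not taken from the paper and using an illustrative comparison matrix, shows the common column-normalization approximation of AHP priority weights and Saaty's consistency ratio, which flags pairwise judgments that contradict one another:

```python
# Hedged sketch: AHP priority weights and consistency ratio from a
# pairwise-comparison matrix. The matrix and function names are
# illustrative assumptions, not drawn from the paper.

def ahp_weights(matrix):
    """Approximate priority weights by normalizing each column to sum
    to 1, then averaging across each row."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    # Estimate lambda_max by averaging the components of (A w) / w.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # excerpt of Saaty's random indices
    return ci / ri

# A slightly inconsistent 3x3 matrix comparing three criteria.
A = [[1, 2, 4],
     [1 / 2, 1, 3],
     [1 / 4, 1 / 3, 1]]
w = ahp_weights(A)
cr = consistency_ratio(A, w)
# By convention, CR below 0.10 is treated as acceptably consistent;
# larger values signal the kind of judgment errors discussed above.
```

A subject whose ratings reverse criterion importance between systems would produce matrices whose consistency ratios, or whose resulting weight orderings, disagree, which is one way such input errors can be detected.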
The concept that seems most attractive is that the analysis needs to focus on the decision maker learning about tradeoffs. A major problem with utility based and outranking methods is that decision makers might consider a wide variety of criteria, but both practicality and mathematics show that only a relatively small set of criteria are really going to matter. Learning methods allow decision makers to focus on these critical criteria and their tradeoffs.
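One common way to narrow attention to the alternatives whose tradeoffs actually matter is to discard dominated alternatives first. The sketch below, with illustrative data and names not taken from the paper, filters a set of alternatives down to the nondominated (Pareto) set over a few criteria:

```python
# Hedged sketch: keep only nondominated alternatives, so the decision
# maker's learning effort concentrates on genuine tradeoffs.
# Scores and criteria are illustrative; higher is better on each.

def dominates(a, b):
    """True if a is at least as good as b on every criterion and
    strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(alternatives):
    """Return the alternatives not dominated by any other."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# Scores on three criteria (e.g. quality, reliability, affordability).
options = [(7, 5, 3), (6, 6, 2), (5, 5, 3), (8, 4, 1)]
front = pareto_front(options)
# (5, 5, 3) is dominated by (7, 5, 3) and drops out; choosing among the
# remaining alternatives requires weighing real tradeoffs.
```

Every alternative that survives this filter forces a genuine tradeoff, which is precisely where the learning paradigm says the decision maker's attention belongs.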
Keywords: Decision Maker; Analytic Hierarchy Process; Logical Decision; MCDM Method; Multiattribute Utility
- Belton, V. and Gear, T. (1983) On a Short-Coming of Saaty's Method of Analytic Hierarchies. Omega 11:3, 228–230.
- Dyer, J.S. (1990) Remarks on the Analytic Hierarchy Process. Management Science 36:3, 249–258.
- Dyer, J.S. and Sarin, R.K. (1979) Measurable Value Functions. Operations Research 27, 810–822.
- Keeney, R.L. and Raiffa, H. (1976) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. John Wiley & Sons, New York.
- Korhonen, P., Wallenius, J., and Zionts, S. (1984) Solving the Discrete Multiple Criteria Problem Using Convex Cones. Management Science 30:11, 1336–1345.
- Larichev, O.I. and Moshkovich, H.M. (1991) ZAPROS: A Method and System for Ordering Multiattribute Alternatives on the Base of a Decision-Maker's Preferences. All-Union Research Institute for Systems Studies, Moscow.
- Lotfi, V., Stewart, T.J., and Zionts, S. (1992) An Aspiration-Level Interactive Model for Multiple Criteria Decision Making. Computers and Operations Research 19:7, 671–681.
- Roy, B. (1968) Classement et choix en présence de critères multiples (Ranking and choice in the presence of multiple criteria). RIRO 8, 57–75.
- Saaty, T.L. (1986) Axiomatic Foundations of the Analytic Hierarchy Process. Management Science 32:7, 841–855.
- Watson, S.R. and Freeling, A.N.S. (1982) Assessing Attribute Weights. Omega 10:9, 582–583.