Psychometrika, Volume 70, Issue 2, pp 253–270

Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

Abstract

There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with approximately 26 or fewer objects because of computer memory limitations. We present some new branch-and-bound procedures that improve computational efficiency, and enable guaranteed globally optimal solutions to be obtained for matrices with up to 35 objects. Experimental tests were conducted to compare the relative performances of the new procedures, a previously published branch-and-bound algorithm, and a dynamic programming solution strategy. These experiments, which included both synthetic and empirical dissimilarity matrices, yielded the following findings: (a) the new branch-and-bound procedures were often drastically more efficient than the previously published branch-and-bound algorithm, (b) when computationally feasible, the dynamic programming approach was more efficient than each of the branch-and-bound procedures, and (c) the new branch-and-bound procedures require minimal computer memory and can provide optimal solutions for matrices that are too large for dynamic programming implementation.
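The combinatorial core of the problem described above can be made concrete with a small sketch. For a fixed ordering of the objects, the least-squares-optimal coordinates have a closed form (this reduction is due to Defays, 1978), so the remaining task is a search over permutations; the exhaustive enumeration below stands in for branch-and-bound or dynamic programming, which explore the same permutation space more cleverly. All function names here are illustrative, not taken from the article's implementation, and the brute force is feasible only for very small matrices.

```python
from itertools import permutations

def order_loss(D, perm):
    """Least-squares loss for one object order. For a fixed order, the
    optimal (centered) coordinate of the object in position k is
    (1/n) * (sum of dissimilarities to objects placed before k
             minus sum of dissimilarities to objects placed after k)."""
    n = len(perm)
    x = []
    for k in range(n):
        before = sum(D[perm[k]][perm[i]] for i in range(k))
        after = sum(D[perm[k]][perm[j]] for j in range(k + 1, n))
        x.append((before - after) / n)
    # Least-squares criterion: sum of squared differences between the
    # dissimilarities and the fitted interpoint distances on the line.
    loss = sum((D[perm[i]][perm[j]] - abs(x[j] - x[i])) ** 2
               for i in range(n) for j in range(i + 1, n))
    return loss, x

def brute_force_lsus(D):
    """Globally optimal order by exhaustive enumeration -- the same
    search tree that branch-and-bound prunes with lower bounds."""
    n = len(D)
    return min((order_loss(D, p) + (p,) for p in permutations(range(n))),
               key=lambda t: t[0])

# A dissimilarity matrix generated from points on a line is recovered
# exactly (zero loss), up to reversal of the order.
coords = [0, 1, 3, 7]
D = [[abs(a - b) for b in coords] for a in coords]
loss, x, perm = brute_force_lsus(D)
```

Because the factorial search space grows so quickly, enumeration of this kind is hopeless beyond a dozen or so objects; the bounds and symmetry arguments in the article are what push guaranteed optimality to n = 35.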

Keywords

combinatorial data analysis; least-squares unidimensional scaling; branch-and-bound; dynamic programming

Copyright information

© The Psychometric Society 2005

Authors and Affiliations

  1. Florida State University, Tallahassee, USA
  2. Tallahassee, USA
  3. Department of Marketing, College of Business, Florida State University, Tallahassee, USA