Silence is golden: team problem solving and communication costs

  • Original Paper, Experimental Economics

“Silence is golden …”

—The Four Seasons

Abstract

We conduct experiments comparing the performance of individuals and teams of four subjects in solving two rather different tasks. The first involves nonograms, logic puzzles that require a series of incremental steps to solve. The second task uses CRT-type questions, which require a single, specific insight. Contrary to the existing literature, team performance in both tasks is statistically indistinguishable from that of individuals when communication is costless. If a tiny message cost is imposed, team performance improves and becomes statistically better than that of individuals, although still worse than previous research on teams would have suggested. To the best of our knowledge, our study is the first to find that imposing friction on communication leads to more effective performance in teams. Message costs reduce the quantity of messages but increase the quality, specifically the mix of good and bad suggestions. The improved quality of communication with message costs allows teams to out-perform individuals. Our analysis suggests that organizations would do better by identifying an able individual to perform an intellective task rather than using teams; prediction exercises indicate that this will not harm, and will generally increase, performance relative to a team, and only requires the cost of paying one individual rather than many.

Fig. 1 [figure omitted]

Notes

  1. Note that this is not the same as the truth-wins benchmark (Lorge and Solomon 1955) discussed in Sect. 2.

  2. If p is the probability that an individual solves the puzzle, the probability that an n-person team solves the puzzle under the truth-wins benchmark is 1 − (1 − p)^n. For example, if each person is 50% likely to solve a puzzle and outcomes are uncorrelated across individuals, the likelihood that a team solves the problem is 75% with two people, 87.5% with three people, etc.
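
     The truth-wins formula above is easy to verify numerically. Here is a minimal sketch (the function name is ours, not from the paper):

     ```python
     def truth_wins_probability(p: float, n: int) -> float:
         """Probability that at least one of n team members solves the
         puzzle, given independent individual success probability p."""
         return 1 - (1 - p) ** n

     # Reproducing the footnote's example with p = 0.5:
     print(truth_wins_probability(0.5, 2))  # 0.75
     print(truth_wins_probability(0.5, 3))  # 0.875
     ```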

  3. In this vein, Isopi et al. (2011) find that a “group-discussion” stage leads to worse performance in a task with no demonstrably correct solution, as measured by the number of correct answers to questions about paintings. Casari et al. (2016) report a similar result for one of their treatments.

  4. This is analogous to the well-known example of a weak-link production technology due to Kremer (1993). There is a large experimental literature on the coordination problem faced by groups playing weak-link games. See Cooper and Weber (2019) for a recent survey.

  5. We ran earlier sessions where the lowest-performing participants in a session were assigned the leader role. A summary of the results including the sessions with biased role selection is available from the authors upon request. Including these data complicates the analysis but has little effect on our conclusions.

  6. The subjects were told that they could use the chat box to “advise each other.” The only specific restrictions we put on communication were telling them not to identify themselves and to avoid offensive language.

  7. Due to a software error, data for the final question had to be dropped in the first seven team sessions.

  8. This implicitly allows for free-riding, as v does not eliminate the possibility that other team members will solve the logic problem.

  9. This measure equals zero if the nonogram is solved correctly; otherwise, it equals the negative of the number of cells that are incorrect when time expires. The negative sign ensures that higher values always indicate better Stage 1 performance.
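
     As an illustration, this scoring rule can be sketched as follows, assuming nonogram grids flattened to lists of 0/1 cell values (the representation and function name are our assumptions, not the authors' code):

     ```python
     def stage1_score(solution, attempt):
         """Return 0 if the attempt matches the correct solution exactly;
         otherwise return minus the number of mismatched cells, so that
         higher scores always indicate better Stage 1 performance."""
         incorrect = sum(s != a for s, a in zip(solution, attempt))
         return -incorrect

     print(stage1_score([1, 0, 1, 1], [1, 0, 1, 1]))  # 0  (solved)
     print(stage1_score([1, 0, 1, 1], [1, 1, 0, 1]))  # -2 (two cells wrong)
     ```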

  10. Using the average number of incorrect cells gives similar results to using the number of puzzles correctly solved, but uses more information about performance since it differentiates between subjects who almost solved a puzzle and those who missed badly. If both controls are included, the regression puts all of the weight on the number of incorrect cells (indicating that the other measure adds no useful information).

  11. The estimates for Team-Cost are slightly different for nonograms and CRT questions, but this difference is not statistically significant (p = 0.807).

  12. For nonograms, the estimate for Team-Cost is 0.158 with a standard error of 0.064 (p = 0.022) with low-ability followers and 0.226 with a standard error of 0.083 (p = 0.033) with high-ability followers. For the CRT problems, the equivalent figures are 0.091 (standard error of 0.114, p = 0.408) and 0.208 (standard error of 0.070, p = 0.003).

  13. For nonograms, the estimate for Team-Cost is 0.192 (standard error of 0.075, p = 0.013) with low-ability leaders and 0.052 (standard error of 0.049, p = 0.293) with high-ability leaders. For the CRT problems, the equivalent figures are 0.242 (standard error of 0.117, p = 0.022) and 0.070 (standard error of 0.082, p = 0.396).

  14. We focus on messages sent from followers, since our hypotheses are based on insights (or errors) being passed from followers to leaders; leaders don’t need to communicate their insights since they control what gets entered for the group. We briefly discuss leader messages at the end of this subsection.

  15. For nonograms, a suggestion was correct if it called for filling in a cell that should have been marked in the correct solution or unmarking a cell that should not have been marked in the correct solution. Incorrect suggestions are defined in an analogous fashion.
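
     The classification above can be stated as a simple predicate (the boolean cell representation and function name are our assumptions for illustration):

     ```python
     def classify_suggestion(solution, cell, proposes_filled):
         """A suggestion is correct if it proposes filling a cell that is
         filled in the correct solution, or unmarking a cell that is empty
         in the correct solution; otherwise it is incorrect."""
         return "correct" if solution[cell] == proposes_filled else "incorrect"

     solution = [True, False, True]  # True = cell filled in the correct solution
     print(classify_suggestion(solution, 0, True))   # correct
     print(classify_suggestion(solution, 1, True))   # incorrect
     print(classify_suggestion(solution, 1, False))  # correct (unmark an empty cell)
     ```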

  16. The formula for % decrease is 1 − (Cost/No Cost).
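
     In code, the formula is simply (a trivial sketch; the names are ours):

     ```python
     def percent_decrease(cost: float, no_cost: float) -> float:
         """Fractional decrease in message volume when moving from the
         No Cost to the Cost treatment: 1 - (Cost / No Cost)."""
         return 1 - cost / no_cost

     # e.g. if messages fall from 10 per period (No Cost) to 4 (Cost):
     print(percent_decrease(4, 10))  # 0.6, i.e. a 60% decrease
     ```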

  17. The strength of the estimated congestion effect for the nonogram data varies depending on precisely what specification is used. In some specifications that do not allow for non-linearity, the estimate for the raw number of messages is weakly significant. More to the point, the positive estimate for the CRT data strongly indicates that there is not a pure congestion effect.

  18. This measure approximates the non-linear effect of message quality on performance with a linear relationship based on the estimated marginal effects. We have constructed similar measures that exclude the number of messages per minute. This has little effect on our qualitative conclusions.

  19. Note that this is not the same as the truth-wins benchmark.

  20. The results are weaker for the CRT questions because the relationship between Stage 1 and Stage 2 performance is weaker, making it less valuable to pull the best performer from Stage 1.

References

  • Arechar, A., Kraft-Todd, G., & Rand, D. (2017). Turking overtime: How participant characteristics and behavior vary over time and day on Amazon Mechanical Turk. Journal of the Economic Science Association, 3(1), 1–11.

  • Baron, J., Scott, S., Fincher, K., & Metz, S. (2015). Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 4, 265–284.

  • Ben-Ner, A., & Putterman, L. (2009). Trust, communication, and contracts: An experiment. Journal of Economic Behavior & Organization, 70(1–2), 106–121.

  • Ben-Ner, A., Putterman, L., & Ren, T. (2011). Lavish returns on cheap talk: Non-binding communication in a trust experiment. Journal of Socio-Economics, 40, 1–13.

  • Blume, A., Kriss, P., & Weber, R. (2016a). Pre-play communication with forgone costly messages: Experimental evidence on forward induction. CESIFO Working Paper No. 5958.

  • Blume, A., Kriss, P., & Weber, R. (2016b). Coordination with decentralized costly communication. Journal of Economic Behavior & Organization.

  • Brandts, J., Charness, G., & Ellman, M. (2016). Let’s talk: How communication affects contract design. Journal of the European Economic Association, 14(4), 943–974.

  • Casari, M., Zhang, J., & Jackson, C. (2016). Same process, different outcomes: Group performance in an acquiring a company experiment. Experimental Economics, 19(4), 764–791.

  • Cason, T. N., & Mui, V. L. (2014). Coordinating resistance through communication and repeated interaction. The Economic Journal, 124(574), F226–F256.

  • Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74, 1579–1601.

  • Charness, G., & Dufwenberg, M. (2011). Participation. American Economic Review, 101, 1211–1237.

  • Charness, G., Karni, E., & Levin, D. (2007). Individual and group decision making under risk: An experimental study of Bayesian updating and violations of first-order stochastic dominance. Journal of Risk and Uncertainty, 35, 129–148.

  • Charness, G., Karni, E., & Levin, D. (2010). On the conjunction fallacy in probability judgment: New experimental evidence regarding Linda. Games and Economic Behavior, 68, 551–556.

  • Charness, G., & Sutter, M. (2012). Groups make better self-interested decisions. Journal of Economic Perspectives, 26, 157–176.

  • Cooper, D., & Kagel, J. (2005). Are two heads better than one? Team versus individual play in signaling games. American Economic Review, 95, 477–509.

  • Cooper, D., & Kagel, J. (2009). The role of context and team play in cross-game learning. Journal of the European Economic Association, 7(5), 1101–1139.

  • Cooper, D., & Kagel, J. (2016). A failure to communicate: An experimental investigation of the effects of advice on strategic play. European Economic Review, 82, 24–45.

  • Cooper, D., & Kagel, J. (2018). Learning and contagion in teams. Working paper. Department of Economics, Florida State University.

  • Cooper, D., & Kühn, K.-U. (2014). Communication, renegotiation, and the scope for collusion. American Economic Journal: Microeconomics, 6(2), 247–278.

  • Cooper, D., & Sutter, M. (2018). Endogenous role assignment and team performance. International Economic Review, 59(3), 1547–1569.

  • Cooper, D., & Weber, R. (2019). Recent advances in experimental coordination games. In M. Capra, R. Croson, M. Rigdon, & T. Rosenblatt (Eds.), The handbook of experimental economic game theory. Cheltenham: Edward Elgar Publishing.

  • Davis, J. H. (1992). Some compelling intuitions about group consensus decisions, theoretical and empirical research, and interpersonal aggregation phenomena: Selected examples, 1950–1990. Organizational Behavior and Human Decision Processes, 52, 3–38.

  • Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10, 171–178.

  • Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.

  • Greiner, B. (2015). Subject pool recruitment procedures: Organizing experiments with ORSEE. Journal of the Economic Science Association, 1(1), 114–125.

  • Isaac, R. M., & Walker, J. M. (1988). Communication and free-riding behavior: The voluntary contribution mechanism. Economic Inquiry, 26(4), 585–608.

  • Isopi, A., Nosenzo, D., & Starmer, C. (2011). Does consultation improve decision-making? Theory and Decision, 77(3), 377–388.

  • Kremer, M. (1993). The O-ring theory of economic development. Quarterly Journal of Economics, 108(3), 551–575.

  • Lorge, I., & Solomon, H. (1955). Two models of group behavior in the solution of eureka-type problems. Psychometrika, 20, 139–148.

  • Oldrati, V., Patricelli, J., Colombo, B., & Antonietti, A. (2016). The role of dorsolateral prefrontal cortex in inhibition mechanism: A study on cognitive reflection test and similar tasks through neuromodulation. Neuropsychologia, 91, 499–508.

  • Primi, C., Morsanyi, K., Chiesi, F., Donati, M., & Hamilton, J. (2016). The development and testing of a new version of the cognitive reflection test applying Item Response Theory (IRT). Journal of Behavioral Decision Making, 29(5), 453–469.

  • Shaw, M. (1932). A comparison of individuals and small groups in the rational solution of complex problems. American Journal of Psychology, 44, 491–504.

  • Thomson, K., & Oppenheimer, D. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113.

  • Toplak, M., West, R., & Stanovich, K. (2014). Assessing miserly information processing: An expansion of the Cognitive Reflection Test. Thinking & Reasoning, 20, 147–168.

  • Wilson, A. (2014). Costly communication in groups: Theory and an experiment. Mimeo.

Acknowledgements

We would like to thank the NSF (SES-0924772 and SES-1227298) for financial support. We thank Faisal Ajam, Francisco Avalos, Isaac Bjork, Alex Borkin, Daniëlle Bovenberg, Phil Brookins, Erich Cromwell, Anthony Duong, Madeline Kardos, Alex Lenk, Ellis Magee, Karly Osborne, Joe Stinn, Zeke Wald, Jack Waldsmith, Joelle Williams, and Xinyi Zhang for their work as research assistants on this project. We thank Krista Jabs and John Kagel for helpful discussion of this paper, as well as seminar participants at the Antigua Experimental Economics Conference, the ESA, EWEBE, SITE, and SEA meetings, the University of Amsterdam, Macquarie School of Business, and Queen Mary University. Any errors are solely our own.

Author information

Corresponding author

Correspondence to David J. Cooper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 52 kb)

About this article

Cite this article

Charness, G., Cooper, D.J. & Grossman, Z. Silence is golden: team problem solving and communication costs. Exp Econ 23, 668–693 (2020). https://doi.org/10.1007/s10683-019-09627-w
