Abstract
We investigate how the selection process of a leader affects team performance with respect to social learning. We use a laboratory experiment in which an incentivized guessing task is repeated in a star network with the leader at the center. Leader selection is either based on competence, on self-confidence, or made at random. In our setting, teams with random leaders do not underperform. They even outperform teams with leaders selected on self-confidence. Hence, self-confidence can be a dangerous proxy for competence of a leader. We show that it is the declaration of the selection procedure which makes non-random leaders overly influential. To investigate the opinion dynamics, we set up a horse race between several rational and naïve models of social learning. The prevalent conservatism in updating, together with the strong influence of the team leader, imply an information loss since the other team members’ knowledge is not sufficiently integrated.
Notes
Indeed, disastrous decisions can often be traced back to management teams whose members are in disagreement, or, what is arguably even worse, who unintentionally agree on a distorted view of reality.
Think about the canonical framework with a binary state space and equally precise, conditionally independent signals about the true state. If this is made common knowledge, it is clear how well informed each agent is, and there is no need to communicate confidence. Our technology to provide a confidence level for each estimate is somewhat similar to the literature that considers “tagging” pieces of information with their source (Acemoglu et al. 2014; Mobius et al. 2015).
Experiments on belief updating frequently find that real people are more conservative updaters than the theoretical model would predict (Möbius et al. 2011; Mannes and Moore 2013; Ambuehl and Li 2018), a pattern that was already summarized in a classic survey (Peterson and Beach 1967): “when statistical man and subjects start with the same prior probabilities for two population proportions, subjects revise their probabilities in the same direction but not as much as statistical man does[.]” In this paper, we cannot study the sources of conservative updating, but we can study its consequences.
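The pattern described in this note can be illustrated with a minimal sketch (not the paper's model): a Bayesian posterior for a binary state given a signal of precision q, next to a conservative updater who moves only a fraction lam of the way toward that posterior. The parameter name lam and both function names are our own illustrative choices.

```python
# Illustrative sketch, not the paper's model: Bayesian vs. conservative
# updating of the belief that a binary state is 1, given a signal of
# precision q. The conservatism parameter lam is hypothetical; lam = 1
# recovers the Bayesian update.

def bayes_posterior(prior: float, signal: int, q: float) -> float:
    """Posterior probability that the state is 1 after a signal in {0, 1}."""
    like1 = q if signal == 1 else 1 - q          # P(signal | state = 1)
    like0 = 1 - q if signal == 1 else q          # P(signal | state = 0)
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def conservative_update(prior: float, signal: int, q: float, lam: float) -> float:
    """Revise only a fraction lam of the way from prior to Bayesian posterior."""
    return prior + lam * (bayes_posterior(prior, signal, q) - prior)

print(bayes_posterior(0.5, 1, 0.7))           # 0.7
print(conservative_update(0.5, 1, 0.7, 0.5))  # 0.6
```

With lam = 0.5, the conservative updater revises in the same direction as the Bayesian (from 0.5 toward 0.7) but not as far, matching the survey quote above.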
One exception is the study by Haslam et al. (1998), which shows experimentally that randomly selected leaders can enhance team performance in a task of deciding upon priorities in a hypothetical survival situation (e.g., after a plane crash). The mechanism behind the effect, however, remains largely unclear. Interestingly, they also observe that randomly selected leaders are, despite their superior performance, often perceived by their team members as less effective than formally selected leaders.
Participants were mostly undergraduate students from various disciplines; there was no restriction on the pool of participants.
The chosen payoff function has a convex shape. This provides incentives to report the guess that is most likely to be the correct answer. Theoretically, an agent’s belief is a distribution on an interval, and the payoff function is designed to elicit the mode of this distribution, as we explain in Sect. B.4.1 of Online Appendix B.
The full list of questions can be found as Table C.1 in Online Appendix C.
The full schedule of which group played which question in which treatment is given by Table C.3 in Online Appendix C.
As we discuss in the next section, among rational agents there are indeed incentives in our setting to truthfully communicate the level of confidence in order to foster optimal learning in the group. However, our experimental results will not rely on the assumption that the confidence statements are truthful.
A more detailed description of the experimental procedures can be found in Online Appendix C.
In the experiment, the correct answer is rounded and belongs to the finite set \(\Theta =\{0,0.01,0.02,\ldots ,0.99,1\}\), which we can also model as the interval \(\Theta =[0,1]\).
For easier readability, we use the female form for the center and the male form for the pendants.
A formal statement of this result can be found in Online Appendix B. There we introduce the general framework (B.1), prove the proposition (B.2), and provide two specific examples of how such a rational model unfolds in our setting (B.4.1).
Since efficiency here means that not only the sum but also each individual’s expected payoff is maximal, there are no incentives to deviate, e.g., by misrepresenting one’s own opinion or confidence level.
We study such models in Sect. 5. They are formally introduced in Online Appendix B.4.
In the experiment, we did not induce a common prior because we used questions about real topics. Nevertheless, we argue that models which assume a common prior and signals can contribute to our understanding of social learning in real settings.
A formal statement of this result can be found in Online Appendix B. There we introduce a probabilistic framework, prove the proposition (B.3), and provide two specific examples (B.5.1).
Thus, the crowd error measures whether the correct answer lies within the interval that is spanned by the four answers, and if so, whether it also lies within the interval that is spanned by the two answers which are contained in the interval of the two other answers. “Bracketing” is important when the decision maker assumes that the truth lies in the interval spanned by the answers.
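The two-level bracketing check described in this note can be sketched as follows; the function name and return coding (0, 1, 2) are our own illustrative choices, not the paper's notation.

```python
# Sketch of the bracketing idea behind the crowd error (our coding, not
# the paper's): level 2 if the truth lies in the inner interval spanned by
# the two middle answers, level 1 if it lies only in the outer interval
# spanned by the extreme answers, level 0 if it lies outside both.

def bracketing_level(answers, truth):
    s = sorted(answers)
    if not (s[0] <= truth <= s[-1]):
        return 0          # truth outside the interval of all four answers
    if s[1] <= truth <= s[-2]:
        return 2          # truth also inside the interval of the middle two
    return 1              # truth bracketed by the extremes only

print(bracketing_level([0.2, 0.4, 0.6, 0.9], 0.5))  # 2
print(bracketing_level([0.2, 0.4, 0.6, 0.9], 0.3))  # 1
print(bracketing_level([0.2, 0.4, 0.6, 0.9], 0.1))  # 0
```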
Recall that we derived the predictions from the Bayesian approach using the assumption that guess and confidence taken together are a sufficient statistic for someone’s belief. If this assumption fails, higher order beliefs matter and more rounds of learning are expected.
The apparent between-treatment differences in the collective error in the first round are not significant, nor, as can be shown, do they drive the subsequent results.
Learning cannot stem from simply having more time to think about a question, since subjects who were not confronted with any information about the guesses and confidence of others did not improve over time at all. We tested this possibility with subjects who were randomly drawn from all potential participants in sessions whose number of potential participants was not divisible by four, the size of our groups.
In the regression tables we report t-statistics, which can be transformed into p-values. All tests are two-sided.
For easier readability, we often only write the most confident or the most accurate center without explicitly repeating that this refers to confidence and accuracy in the corresponding question of phase I.
We will return to this observation when extending the social learning models in Sect. 5.
The models are formally defined and characterized in Sects. B.4 and B.5 of Online Appendix B.
In the rational learning models, we derive conservative behavior from the assumption of overprecision (cf. Sect. B.4.2 of Online Appendix B). In the naïve learning models, we base conservative behavior on a framework from Friedkin and Johnsen (1990) (cf. Sect. B.5.2).
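A minimal sketch of the Friedkin–Johnsen dynamics referenced here, in which each agent mixes the network-weighted average of current opinions with her own initial opinion: the weight matrix, susceptibility lam, and initial guesses below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Hedged sketch of Friedkin-Johnsen (1990) opinion dynamics on a star
# network (our illustrative numbers, not the paper's calibration):
#   x(t+1) = lam * W @ x(t) + (1 - lam) * x(0)
# where 1 - lam is the degree of conservatism (anchoring on one's own
# initial opinion).

W = np.array([            # row i = weights agent i puts on observed guesses
    [0.4, 0.2, 0.2, 0.2],  # center observes all three pendants
    [0.5, 0.5, 0.0, 0.0],  # each pendant observes only the center
    [0.5, 0.0, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
])
lam = 0.8                              # susceptibility; 1 - lam = conservatism
x0 = np.array([0.30, 0.50, 0.60, 0.80])  # initial guesses

x = x0.copy()
for _ in range(20):
    x = lam * (W @ x) + (1 - lam) * x0

print(np.round(x, 3))  # opinions pull together but stay anchored to x0
```

With lam < 1, the process converges to a fixed point between full consensus and the initial guesses, which is the sense in which conservatism slows the convergence of opinions.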
The exception is that the Corazzini et al. Model predicts the center’s guesses better than the Corazzini et al. Plus Model. Recall that the center already puts a high weight on herself in the Corazzini et al. Model.
Not all of these comparisons are reported in the paper. The exceptions are the centers in the rational models (Standard Model and Sophisticated Model), who are better off when no group member is conservative.
This is highly plausible because conservatism leads to less convergence of opinions and can thereby help “bracket” the truth. Hence, conservatism harms individual guesses, but works against the negative effect of social influence that was uncovered in Lorenz et al. (2011).
The substantial amount of conservatism that we find in this paper may be partly due to the more realistic setup, in which there is no common knowledge about the others’ signal precisions.
References
Acemoglu, D., Bimpikis, K., & Ozdaglar, A. (2014). Dynamics of information exchange in endogenous social networks. Theoretical Economics, 9, 41–97.
Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis)information in social networks. Games and Economic Behavior, 70, 194–227.
Ambuehl, S., & Li, S. (2018). Belief updating and the demand for information. Games and Economic Behavior, 109, 21–39.
Aumann, R. J. (1976). Agreeing to disagree. The Annals of Statistics, 4(6), 1236–1239.
Battiston, P., & Stanca, L. (2015). Boundedly rational opinion dynamics in social networks: Does indegree matter? Journal of Economic Behavior & Organization, 119, 400–421.
Bolton, P., Brunnermeier, M. K., & Veldkamp, L. (2013). Leadership, coordination, and corporate culture. The Review of Economic Studies, 80, 512–537.
Brandts, J., Giritligil, A. E., & Weber, R. A. (2015). An experimental study of persuasion bias and social influence in networks. European Economic Review, 80, 214–229.
Çelen, B., Kariv, S., & Schotter, A. (2010). An experimental test of advice and social learning. Management Science, 56, 1687–1701.
Chandrasekhar, A. G., Larreguy, H., & Xandri, J. P. (2015). Testing models of social learning on networks: Evidence from a lab experiment in the field. Technical report, National Bureau of Economic Research.
Choi, S., Gale, D., & Kariv, S. (2005). Behavioral aspects of learning in social networks: An experimental study. Advances in Applied Microeconomics, 13, 25–61.
Corazzini, L., Pavesi, F., Petrovich, B., & Stanca, L. (2012). Influential listeners: An experiment on persuasion bias in social networks. European Economic Review, 56, 1276–1288.
DeGroot, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69, 118–121.
DeMarzo, P. M., Vayanos, D., & Zwiebel, J. (2003). Persuasion bias, social influence, and unidimensional opinions. The Quarterly Journal of Economics, 118, 909–968.
Frey, B. S., & Osterloh, M. (2016). Aleatoric democracy. Technical report, CESifo Group Munich.
Friedkin, N. E. (1991). Theoretical foundations for centrality measures. The American Journal of Sociology, 96, 1478–1504.
Friedkin, N. E., & Johnsen, E. C. (1990). Social influence and opinions. Journal of Mathematical Sociology, 15, 193–206.
Gale, D., & Kariv, S. (2003). Bayesian learning in social networks. Games and Economic Behavior, 45, 329–346.
Gervais, S., & Goldstein, I. (2007). The positive effects of biased self-perceptions in firms. Review of Finance, 11, 453–496.
Golub, B., & Jackson, M. O. (2010). Naïve learning in social networks and the wisdom of crowds. American Economic Journal: Microeconomics, 2, 112–149.
Grimm, V., & Mengel, F. (2018). An experiment on belief formation in networks. Journal of the European Economic Association. https://doi.org/10.1093/jeea/jvy038.
Haslam, S. A., McGarty, C., Brown, P. M., Eggins, R. A., Morrison, B. E., & Reynolds, K. J. (1998). Inspecting the emperor’s clothes: Evidence that random selection of leaders can enhance group performance. Group Dynamics: Theory, Research, and Practice, 2, 168–184.
Herz, H., Schunk, D., & Zehnder, C. (2014). How do judgmental overconfidence and overoptimism shape innovative activity? Games and Economic Behavior, 83, 1–23.
Keuschnigg, M., & Ganser, C. (2017). Crowd wisdom relies on agents’ ability in small groups with a voting aggregation rule. Management Science, 63, 818–828.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108, 9020–9025.
Mannes, A. E. (2009). Are we wise about the wisdom of crowds? The use of group judgments in belief revision. Management Science, 55, 1267–1279.
Mannes, A. E., & Moore, D. A. (2013). A behavioral demonstration of overconfidence in judgment. Psychological Science, 24, 1190–1197.
Mobius, M., Phan, T., & Szeidl, A. (2015). Treasure hunt: Social learning in the field (No. w21014). National Bureau of Economic Research.
Möbius, M. M., Niederle, M., Niehaus, P., & Rosenblat, T. S. (2011). Managing self-confidence: Theory and experimental evidence. Technical report, National Bureau of Economic Research.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115, 502–517.
Moussaïd, M., Kämmer, J. E., Analytis, P. P., & Neth, H. (2013). Social influence and the collective dynamics of opinion formation. PloS One, 8, 1–8.
Mueller-Frank, M. (2013). A general framework for rational learning in social networks. Theoretical Economics, 8, 1–40.
Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive statistician. Psychological Bulletin, 68, 29.
Rauhut, H., & Lorenz, J. (2011). The wisdom of crowds in one mind: How individuals can simulate the knowledge of diverse societies to reach better decisions. Journal of Mathematical Psychology, 55, 191–197.
Rosenberg, D., Solan, E., & Vieille, N. (2009). Informational externalities and emergence of consensus. Games and Economic Behavior, 66, 979–994.
Soll, J. B., & Klayman, J. (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 299.
Surowiecki, J. (2004). The wisdom of crowds. New York: Random House.
Zeitoun, H., Osterloh, M., & Frey, B. S. (2014). Learning from ancient Athens: Demarchy and corporate governance. The Academy of Management Perspectives, 28, 1–14.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We thank Arun Advani, Sandro Ambuehl, Vincent Buskens, Arun Chandrasekhar, Syngjoo Choi, P.J. Healy, Holger Herz, Matt Jackson, Bernhard Kittel, Michael Kosfeld, Jan Lorenz, Friederike Mengel, Claudia Neri, Muriel Niederle, and Tanya Rosenblat for helpful comments. Berno Buechel gratefully acknowledges the hospitality of the Economics Department of Stanford University and the financial support by the Fritz Thyssen Foundation. Heiko Rauhut acknowledges support by the SNSF Starting Grant BSSGI0_155981.
Cite this article
Buechel, B., Klößner, S., Lochmüller, M. et al. The strength of weak leaders: an experiment on social influence and social learning in teams. Exp Econ 23, 259–293 (2020). https://doi.org/10.1007/s10683-019-09614-1