Fair Statistical Communication in HCI

Chapter in Modern Statistical Methods for HCI

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Statistics are tools to help end users accomplish their tasks. In research, to qualify as usable, statistical tools should help researchers advance scientific knowledge by supporting and promoting the effective communication of research findings. Yet areas such as human-computer interaction (HCI) have adopted tools — i.e., p-values and dichotomous testing procedures — that have proven to be poor at supporting these tasks. The misuse of these procedures has been severely criticized in a range of disciplines for several decades, suggesting that the tools, not their end users, are to blame. This chapter explains in a non-technical manner why it would be beneficial for HCI to switch to an estimation approach, i.e., reporting informative charts with effect sizes and interval estimates, and offering nuanced interpretations of our results. Advice is offered on how to communicate our empirical results in a clear, accurate, and transparent way without using any tests or p-values.
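
As a concrete illustration of the estimation approach advocated here, the following sketch reports an effect size with a bootstrap interval estimate. It is a minimal example of my own, not code from the chapter: the data, conditions, and choice of a percentile bootstrap are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical completion times (seconds) under two interface conditions.
a = np.array([12.1, 9.8, 11.5, 10.2, 13.0, 9.5, 11.1, 10.8])
b = np.array([9.0, 8.2, 10.1, 8.8, 9.6, 8.5, 9.9, 9.2])

# Effect size: the difference between mean completion times.
diff = a.mean() - b.mean()

# Interval estimate: a 95% percentile bootstrap CI for that difference.
boot = [rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
        for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean difference = {diff:.2f} s, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting such point and interval estimates (ideally as a chart) conveys both the magnitude and the uncertainty of an effect, with no test or p-value involved.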


Notes

  1. The width of confidence intervals generally increases with the variability of observations and decreases (somewhat slowly) with sample size (Cumming 2012). So either pill 1 has a much more consistent effect or the number of subjects was remarkably larger. Which of the two is the case is not important here (a minimal sketch of this dependence appears after these notes).

  2. The term effect size is often used in a narrower sense to refer to standardized effect sizes (Coe 2002; see also Chap. 5). Although sometimes useful, reporting standardized effect sizes is not always necessary, nor is it always recommended (Baguley 2009; Wilkinson 1999, p. 599).

  3. Briefly, statistical power is the probability of correctly detecting an effect whose magnitude has been postulated in advance. The more participants, the larger the effect size, and the lower the variability, the higher the statistical power (see also Chap. 5, and the simulation sketch after these notes).

  4. Strictly speaking, Neyman–Pearson's procedure involved choosing between the null hypothesis and an alternative hypothesis, which generally states that the effect exists and takes some precise value. Accepting the null when the alternative hypothesis is true is a Type II error. Its frequentist probability is denoted \(\beta \), and power is defined as \(1-\beta \). These notions are not important to the present discussion.

  5. The sharp distinction between pills 2 and 3 is not a caricature. Due to Neyman–Pearson's heritage, even pointing out that a non-significant p-value is close to .05 is often considered a serious fault.

  6. Since computing \(\beta \) (i.e., the probability of a Type II error) requires assigning a precise value to the population mean, \(\beta \) is also very unlikely to correspond to an actual probability or error rate.

  7. For elements of discussion concerning this particular dichotomy, see Stewart-Oaten (1995), Norman (2010), Velleman and Wilkinson (1993), Wierdsma (2013), Abelson (1995, Chap. 1), and Gigerenzer (2004, pp. 587–588).

  8. The meaning of robust here differs from its use in robust statistics, where it refers to robustness to outliers and to departures from statistical assumptions.

  9. There is considerable debate on how best to collect and analyze questionnaire data, and I have not gone through enough of the literature to provide definitive recommendations. Likert scales are easy to analyze if they are constructed adequately, i.e., by averaging responses from multiple question items (see Carifio and Perla 2007; a minimal sketch of item averaging appears after these notes). If responses to individual items are of interest, it can be sufficient to report all responses visually (see Tip 22). Visual analogue scales seem to be a promising option to consider if inferences need to be made on individual items (Reips and Funke 2008). However, analyzing many items individually is not recommended (see Tips 1, 5 and 30).

  10. Both types of inferences can be combined using hierarchical or multi-level models, and tools exist for computing hierarchical confidence intervals (see Chap. 11).

  11. For more on the important concepts of sampling distribution and the central limit theorem, see, e.g., Cumming (2013, Chap. 3) and the applet at http://tinyurl.com/sdsim (a small simulation illustrating both concepts appears after these notes).

  12. Visual robustness is related to the concept of visual-data correspondence recently introduced in infovis (Kindlmann and Scheidegger 2014). The counterpart of robustness (i.e., a visualization’s ability to reveal differences in data) has been variously termed distinctness (Rensink 2014), power (Hofmann et al. 2012), and unambiguity (Kindlmann and Scheidegger 2014).

  13. See, e.g., http://centerforopenscience.org/ and https://osf.io/.
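
The dependence described in note 1 can be made concrete with a minimal Python sketch (my own illustration, not from the chapter; the numbers are arbitrary). The half-width of a t-based confidence interval for a mean is t × s/√n, so it grows linearly with the variability s but shrinks only with the square root of the sample size n:

```python
import numpy as np
from scipy import stats

def ci_halfwidth(sd, n, confidence=0.95):
    """Half-width of a t-based CI for a mean: t * sd / sqrt(n)."""
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return t_crit * sd / np.sqrt(n)

# Doubling the variability doubles the width...
print(ci_halfwidth(sd=1.0, n=20), ci_halfwidth(sd=2.0, n=20))
# ...but quadrupling the sample size only halves it.
print(ci_halfwidth(sd=1.0, n=20), ci_halfwidth(sd=1.0, n=80))
```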
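Notes 3 and 4 can likewise be illustrated by simulation. The sketch below (again my own, assuming normally distributed data and a two-sample t-test) estimates power as the proportion of simulated experiments that detect a postulated effect; \(\beta \) is simply one minus that proportion:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(effect, n, alpha=0.05, reps=2000):
    """Monte Carlo estimate of two-sample t-test power for a postulated effect."""
    hits = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect, 1.0, n)  # true effect, postulated in advance
        if stats.ttest_ind(control, treatment).pvalue < alpha:
            hits += 1
    return hits / reps

power = simulated_power(effect=0.5, n=30)
print(f"power ~ {power:.2f}, beta ~ {1 - power:.2f}")
# More participants, a larger effect, or less variability all increase power.
```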
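For note 9, here is a minimal sketch of the item-averaging approach to Likert scales (the respondents, items, and values are made up for illustration):

```python
import numpy as np

# Hypothetical questionnaire: 4 respondents x 5 Likert items on a 1-5 scale.
responses = np.array([
    [4, 5, 3, 4, 4],
    [2, 1, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
])

# Averaging across items yields one scale score per respondent, which can
# then be summarized like any numerical measure (e.g., mean and interval).
scale_scores = responses.mean(axis=1)
print(scale_scores)  # [4.0, 2.0, 4.8, 2.8]
```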
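Finally, for note 11, a small simulation in the spirit of the linked applet (though not taken from it) shows the sampling distribution of the mean and the central limit theorem at work:

```python
import numpy as np

rng = np.random.default_rng(0)

# A clearly non-normal (skewed) population:
population = rng.exponential(scale=1.0, size=100_000)

# Draw many samples of size n and record each sample mean.
n = 30
means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])

# The sample means cluster around the population mean, their spread is about
# sd / sqrt(n), and their histogram is approximately normal (the CLT).
print(means.mean(), population.mean())
print(means.std(), population.std() / np.sqrt(n))
```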

References

  • Abelson R (1995) Statistics as principled argument. Lawrence Erlbaum Associates
  • Abelson RP (1997) A retrospective on the significance test ban of 1999. In: What if there were no significance tests? pp 117–141
  • Anderson G (2012) No result is worthless: the value of negative results in science. http://tinyurl.com/anderson-negative
  • APA (2010) Publication manual of the American Psychological Association, 6th edn. Washington, DC
  • Bååth R (2015) The non-parametric bootstrap as a Bayesian model. http://tinyurl.com/bayes-bootstrap
  • Baguley T (2009) Standardized or simple effect size: what should be reported? Br J Psychol 100(3):603–617
  • Baguley T (2012) Calculating and graphing within-subject confidence intervals for ANOVA. Behav Res Methods 44(1):158–175
  • Bayarri MJ, Berger JO (2004) The interplay of Bayesian and frequentist analysis. Stat Sci 58–80
  • Beaudouin-Lafon M (2008) Interaction is the future of computing. In: McDonald DW, Erickson T (eds) HCI remixed: reflections on works that have influenced the HCI community. The MIT Press, pp 263–266
  • Bender R, Lange S (2001) Adjusting for multiple testing: when and how? J Clin Epidemiol 54(4):343–349
  • Beyth-Marom R, Fidler F, Cumming G (2008) Statistical cognition: towards evidence-based practice in statistics and statistics education. Stat Educ Res J 7(2):20–39
  • Brewer MB (2000) Research design and issues of validity. In: Handbook of research methods in social and personality psychology, pp 3–16
  • Brodeur A, Lé M, Sangnier M, Zylberberg Y (2012) Star wars: the empirics strike back. Paris School of Economics working paper (2012–29)
  • Carifio J, Perla RJ (2007) Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. J Soc Sci 3(3):106
  • Chevalier F, Dragicevic P, Franconeri S (2014) The not-so-staggering effect of staggered animated transitions on visual tracking. IEEE Trans Visual Comput Graphics 20(12):2241–2250
  • Coe R (2002) It’s the effect size, stupid. Paper presented at the British Educational Research Association annual conference, vol 12, p 14
  • Cohen J (1990) Things I have learned (so far). Am Psychol 45(12):1304
  • Cohen J (1994) The Earth is round (p < .05). Am Psychol 49(12):997
  • Colquhoun D (2014) An investigation of the false discovery rate and the misinterpretation of p-values. R Soc Open Sci 1(3):140216
  • Correll M, Gleicher M (2014) Error bars considered harmful: exploring alternate encodings for mean and error. IEEE Trans Visual Comput Graphics 20(12):2142–2151
  • Cumming G (2008) Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspect Psychol Sci 3(4):286–300
  • Cumming G (2009a) Dance of the p values [video]. http://tinyurl.com/danceptrial2
  • Cumming G (2009b) Inference by eye: reading the overlap of independent confidence intervals. Stat Med 28(2):205–220
  • Cumming G (2012) Understanding the new statistics: effect sizes, confidence intervals, and meta-analysis. Multivariate applications series. Routledge Academic, London
  • Cumming G (2013) The new statistics: why and how. Psychol Sci
  • Cumming G, Finch S (2005) Inference by eye: confidence intervals and how to read pictures of data. Am Psychol 60(2):170
  • Cumming G, Williams R (2011) Significant does not equal important: why we need the new statistics. Podcast. http://tinyurl.com/geoffstalk
  • Cumming G, Fidler F, Vaux DL (2007) Error bars in experimental biology. J Cell Biol 177(1):7–11
  • Dawkins R (2011) The tyranny of the discontinuous mind. New Statesman 19:54–57
  • Dienes Z (2014) Using Bayes to get the most out of non-significant results. Front Psychol 5
  • Dragicevic P (2012) My technique is 20% faster: problems with reports of speed improvements in HCI. Research report
  • Dragicevic P (2015) The dance of plots. http://www.aviz.fr/danceplots
  • Dragicevic P, Chevalier F, Huot S (2014) Running an HCI experiment in multiple parallel universes. In: CHI extended abstracts. ACM, New York
  • Drummond GB, Vowler SL (2011) Show the data, don’t conceal them. Adv Physiol Educ 35(2):130–132
  • Duckworth WM, Stephenson WR (2003) Resampling methods: not just for statisticians anymore. In: 2003 Joint Statistical Meetings
  • Ecklund A (2012) Beeswarm: the bee swarm plot, an alternative to stripchart. R package version 0.1
  • Eich E (2014) Business not as usual (editorial). Psychol Sci 25(1):3–6. http://tinyurl.com/psedito
  • Fekete JD, Van Wijk JJ, Stasko JT, North C (2008) The value of information visualization. In: Information visualization. Springer, pp 1–18
  • Fidler F (2010) The American Psychological Association publication manual, 6th edn: implications for statistics education. In: Data and context in statistics education: towards an evidence-based society
  • Fidler F, Cumming G (2005) Teaching confidence intervals: problems and potential solutions. In: Proceedings of the 55th International Statistics Institute session
  • Fidler F, Loftus GR (2009) Why figures with error bars should replace p values. Zeitschrift für Psychologie/J Psychol 217(1):27–37
  • Fisher R (1955) Statistical methods and scientific induction. J Roy Stat Soc Ser B (Methodol) 69–78
  • Forum C (2015) Is there a minimum sample size required for the t-test to be valid? http://tinyurl.com/minsample
  • Franz VH, Loftus GR (2012) Standard errors and confidence intervals in within-subjects designs: generalizing Loftus and Masson (1994) and avoiding the biases of alternative accounts. Psychon Bull Rev 19(3):395–404
  • Frick RW (1998) Interpreting statistical testing: process and propensity, not population and random sampling. Behav Res Meth Instrum Comput 30(3):527–535
  • Gardner MJ, Altman DG (1986) Confidence intervals rather than p values: estimation rather than hypothesis testing. BMJ 292(6522):746–750
  • Gelman A (2004) Type 1, type 2, type S, and type M errors. http://tinyurl.com/typesm
  • Gelman A (2013a) Commentary: p-values and statistical practice. Epidemiology 24(1):69–72
  • Gelman A (2013b) Interrogating p-values. J Math Psychol 57(5):188–189
  • Gelman A, Loken E (2013) The garden of forking paths. Online article
  • Gelman A, Stern H (2006) The difference between significant and not significant is not itself statistically significant. Am Stat 60(4):328–331
  • Gigerenzer G (2004) Mindless statistics. J Socio Econ 33(5):587–606
  • Gigerenzer G, Kruger L, Beatty J, Porter T, Daston L, Swijtink Z (1990) The empire of chance: how probability changed science and everyday life, vol 12. Cambridge University Press
  • Giner-Sorolla R (2012) Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspect Psychol Sci 7(6):562–571
  • Gliner JA, Leech NL, Morgan GA (2002) Problems with null hypothesis significance testing (NHST): what do the textbooks say? J Exp Educ 71(1):83–92
  • Goldacre B (2012) What doctors don’t know about the drugs they prescribe [TED talk]. http://tinyurl.com/goldacre-ted
  • Goodman SN (1999) Toward evidence-based medical statistics. 1: the p value fallacy. Ann Intern Med 130(12):995–1004
  • Greenland S, Poole C (2013) Living with p values: resurrecting a Bayesian perspective on frequentist statistics. Epidemiology 24(1):62–68
  • Hager W (2002) The examination of psychological hypotheses by planned contrasts referring to two-factor interactions in fixed-effects ANOVA. Methods Psychol Res Online 7:49–77
  • Haller H, Krauss S (2002) Misinterpretations of significance: a problem students share with their teachers. Methods Psychol Res 7(1):1–20
  • Hoekstra R, Finch S, Kiers HA, Johnson A (2006) Probability as certainty: dichotomous thinking and the misuse of p values. Psychon Bull Rev 13(6):1033–1037
  • Hofmann H, Follett L, Majumder M, Cook D (2012) Graphical tests for power comparison of competing designs. IEEE Trans Visual Comput Graphics 18(12):2441–2448
  • Hornbæk K, Sander SS, Bargas-Avila JA, Grue Simonsen J (2014) Is once enough? On the extent and content of replications in human-computer interaction. In: Proceedings of the ACM conference on human factors in computing systems, pp 3523–3532
  • Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2(8):e124
  • Jansen Y (2014) Physical and tangible information visualization. PhD thesis, Université Paris Sud-Paris XI
  • Kaptein M, Robertson J (2012) Rethinking statistical analysis methods for CHI. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, pp 1105–1114
  • Keene ON (1995) The log transformation is special. Stat Med 14(8):811–819
  • Kerr NL (1998) HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev 2(3):196–217
  • Kindlmann G, Scheidegger C (2014) An algebraic process for visualization design. IEEE Trans Visual Comput Graphics 20(12):2181–2190
  • Kirby KN, Gerlanc D (2013) BootES: an R package for bootstrap confidence intervals on effect sizes. Behav Res Methods 45(4):905–927
  • Kirk RE (2001) Promoting good statistical practices: some suggestions. Educ Psychol Meas 61(2):213–218
  • Kline RB (2004) What’s wrong with statistical tests–and where we go from here. American Psychological Association
  • Lakens D, Pigliucci M, Galef J (2014) Daniel Lakens on p-hacking and other problems in psychology research. Podcast. http://tinyurl.com/lakens-podcast
  • Lambdin C (2012) Significance tests as sorcery: science is empirical, significance tests are not. Theory Psychol 22(1):67–90
  • Lazic SE (2010) The problem of pseudoreplication in neuroscientific studies: is it affecting your analysis? BMC Neurosci 11(1):5
  • Levine TR, Weber R, Hullett C, Park HS, Lindsey LLM (2008a) A critical assessment of null hypothesis significance testing in quantitative communication research. Hum Commun Res 34(2):171–187
  • Levine TR, Weber R, Park HS, Hullett CR (2008b) A communication researchers’ guide to null hypothesis significance testing and alternatives. Hum Commun Res 34(2):188–209
  • Loftus GR (1993) A picture is worth a thousand p values: on the irrelevance of hypothesis testing in the microcomputer age. Behav Res Meth Instrum Comput 25(2):250–256
  • MacCallum RC, Zhang S, Preacher KJ, Rucker DD (2002) On the practice of dichotomization of quantitative variables. Psychol Methods 7(1):19
  • Mazar N, Amir O, Ariely D (2008) The dishonesty of honest people: a theory of self-concept maintenance. J Mark Res 45(6):633–644
  • Meehl PE (1967) Theory-testing in psychology and physics: a methodological paradox. Philos Sci 103–115
  • Miller J (1991) Short report: reaction time analysis with outlier exclusion: bias varies with sample size. Q J Exp Psychol 43(4):907–912
  • Morey RD, Hoekstra R, Rouder JN, Lee MD, Wagenmakers EJ (2015) The fallacy of placing confidence in confidence intervals (version 2). http://tinyurl.com/cifallacy
  • Nelson MJ (2011) You might want a tolerance interval. http://tinyurl.com/tol-interval
  • Newcombe RG (1998a) Interval estimation for the difference between independent proportions: comparison of eleven methods. Stat Med 17(8):873–890
  • Newcombe RG (1998b) Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med 17(8):857–872
  • Newman GE, Scholl BJ (2012) Bar graphs depicting averages are perceptually misinterpreted: the within-the-bar bias. Psychon Bull Rev 19(4):601–607
  • Norman DA (2002) The design of everyday things. Basic Books, New York
  • Norman G (2010) Likert scales, levels of measurement and the laws of statistics. Adv Health Sci Educ 15(5):625–632
  • Nuzzo R (2014) Scientific method: statistical errors. Nature 506(7487):150–152
  • Open Science Collaboration (2015) Estimating the reproducibility of psychological science. Science 349(6251):aac4716
  • Osborne JW, Overbay A (2004) The power of outliers (and why researchers should always check for them). Pract Assess Res Eval 9(6):1–12
  • Perin C, Dragicevic P, Fekete JD (2014) Revisiting Bertin matrices: new interactions for crafting tabular visualizations. IEEE Trans Visual Comput Graphics 20(12):2082–2091
  • Pollard P, Richardson J (1987) On the probability of making Type I errors. Psychol Bull 102(1):159
  • Rawls RL (1998) Breaking up is hard to do. Chem Eng News 76(25):29–34
  • Reips UD, Funke F (2008) Interval-level measurement with visual analogue scales in internet-based research: VAS generator. Behav Res Methods 40(3):699–704
  • Rensink RA (2014) On the prospects for a science of visualization. In: Handbook of human centric visualization. Springer, pp 147–175
  • Ricketts C, Berry J (1994) Teaching statistics through resampling. Teach Stat 16(2):41–44
  • Rosenthal R (2009) Artifacts in behavioral research: Robert Rosenthal and Ralph L. Rosnow’s classic books. Oxford University Press, Oxford
  • Rosenthal R, Fode KL (1963) The effect of experimenter bias on the performance of the albino rat. Behav Sci 8(3):183–189
  • Rosnow RL, Rosenthal R (1989) Statistical procedures and the justification of knowledge in psychological science. Am Psychol 44(10):1276
  • Rossi JS (1990) Statistical power of psychological research: what have we gained in 20 years? J Consult Clin Psychol 58(5):646
  • Sauro J, Lewis JR (2010) Average task times in usability tests: what to report? In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, pp 2347–2350
  • Schmidt FL, Hunter J (1997) Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In: What if there were no significance tests? pp 37–64
  • Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci 22(11):1359–1366
  • Smith RA, Levine TR, Lachlan KA, Fediuk TA (2002) The high cost of complexity in experimental design and data analysis: type I and type II error rates in multiway ANOVA. Hum Commun Res 28(4):515–530
  • Stewart-Oaten A (1995) Rules and judgments in statistics: three examples. Ecology 2001–2009
  • The Economist (2013) Unreliable research: trouble at the lab. http://tinyurl.com/trouble-lab
  • Thompson B (1998) Statistical significance and effect size reporting: portrait of a possible future. Res Sch 5(2):33–38
  • Thompson B (1999) Statistical significance tests, effect size reporting and the vain pursuit of pseudo-objectivity. Theory Psychol 9(2):191–196
  • Trafimow D, Marks M (2015) Editorial. Basic Appl Soc Psychol 37(1):1–2. http://tinyurl.com/trafimow
  • Tryon WW (2001) Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: an integrated alternative method of conducting null hypothesis statistical tests. Psychol Methods 6(4):371
  • Tukey JW (1980) We need both exploratory and confirmatory. Am Stat 34(1):23–25
  • Ulrich R, Miller J (1994) Effects of truncation on reaction time analysis. J Exp Psychol Gen 123(1):34
  • van Deemter K (2010) Not exactly: in praise of vagueness. Oxford University Press, Oxford
  • Velleman PF, Wilkinson L (1993) Nominal, ordinal, interval, and ratio typologies are misleading. Am Stat 47(1):65–72
  • Vicente KJ, Torenvliet GL (2000) The Earth is spherical (p < 0.05): alternative methods of statistical inference. Theor Issues Ergon Sci 1(3):248–271
  • Victor B (2011) Explorable explanations. http://worrydream.com/ExplorableExplanations/
  • Wainer H (1984) How to display data badly. Am Stat 38(2):137–147
  • Wickham H, Stryjewski L (2011) 40 years of boxplots. Am Stat
  • Wierdsma A (2013) What is wrong with tests of normality? http://tinyurl.com/normality-wrong
  • Wilcox RR (1998) How many discoveries have been lost by ignoring modern statistical methods? Am Psychol 53(3):300
  • Wilkinson L (1999) Statistical methods in psychology journals: guidelines and explanations. Am Psychol 54(8):594
  • Willett W, Jenny B, Isenberg T, Dragicevic P (2015) Lightweight relief shearing for enhanced terrain perception on interactive maps. In: Proceedings of the ACM conference on human factors in computing systems (CHI ’15). ACM, New York, pp 3563–3572
  • Wilson W (1962) A note on the inconsistency inherent in the necessity to perform multiple comparisons. Psychol Bull 59(4):296
  • Wood M (2004) Statistical inference using bootstrap confidence intervals. Significance 1(4):180–182
  • Wood M (2005) Bootstrapped confidence intervals as an approach to statistical inference. Organ Res Methods 8(4):454–470
  • Zacks J, Tversky B (1999) Bars and lines: a study of graphic communication. Mem Cogn 27(6):1073–1079
  • Ziliak ST, McCloskey DN (2008) The cult of statistical significance. University of Michigan Press, Ann Arbor

Acknowledgments

Many thanks to Elie Cattan, Fanny Chevalier, Geoff Cumming, Steven Franconeri, Steve Haroz, Petra Isenberg, Yvonne Jansen, Maurits Kaptein, Heidi Lam, Judy Robertson, Michael Sedlmair, Dan Simons, Chat Wacharamanotham and Wesley Willett for their helpful feedback and comments.

Author information

Correspondence to Pierre Dragicevic.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Dragicevic, P. (2016). Fair Statistical Communication in HCI. In: Robertson, J., Kaptein, M. (eds) Modern Statistical Methods for HCI. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-26633-6_13

  • DOI: https://doi.org/10.1007/978-3-319-26633-6_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26631-2

  • Online ISBN: 978-3-319-26633-6
