
Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform

  • Original Article
  • Published in AI & SOCIETY

Abstract

A number of artificial intelligence (AI) systems have been proposed to assist users in identifying issues of algorithmic fairness and transparency. These systems employ diverse bias detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study informs the design of such AI systems by probing how users make sense of fairness and transparency, concepts that are hypothetical in nature and lack established means of evaluation. Focusing on individual perceptions of fairness and transparency, the study examines the roles of these normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-method design incorporating both qualitative and quantitative approaches was used to discover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from user sensemaking processes, along with its formative roles in shaping perceived quality and credibility. From a sensemaking perspective, this study discusses the implications of transparent fairness in algorithmic media platforms by clarifying what should be done, and how, to make algorithmic media more trustworthy and reliable. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.

Funding

This project was funded by the Office of Research and the Institute for Social and Economic Research at Zayed University (Policy Research Incentive Program 2022). It also received support from the Provost's Research Fellowship Award of Zayed University (R21050/2022).

Author information

Corresponding author

Correspondence to Donghee Shin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Shin, D., Lim, J.S., Ahmad, N. et al. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. AI & Soc 39, 477–490 (2024). https://doi.org/10.1007/s00146-022-01525-9
