
Effect of Computation and Cognitive Bias in Healthcare Intelligence and Pharmacogenomics

Chapter in Next Generation Healthcare Informatics

Part of the book series: Studies in Computational Intelligence (SCI, volume 1039)


Abstract

Healthcare intelligence is derived from human-centric solutions (predictive and analytical) that support diagnosis and treatment based on the patient's information. In an early attempt to embed computational accuracy, MYCIN, a rule-based system, was developed in the 1970s to diagnose blood-borne bacterial infections. Pharmacogenomics, the study of individualized medicine and lifesaving treatments, aims to identify how genes affect a patient's response to drugs and treatments. It is an emerging field that combines pharmacology (the science of drugs), genomics (the study of genes), and machine intelligence (AI technologies). Machine learning and natural language processing are used by IBM Watson to advance precision medicine, particularly the diagnosis and treatment of cancer. Although these systems were not adopted into clinical practice, they demonstrated promise for accurate diagnosis and treatment. In the past, healthcare decisions were made almost entirely by people; integrating intelligent devices and models into the process raises questions about accountability, transparency, consent, and privacy. Healthcare decision-making is shifting from exclusively human-centric processes to semi- or fully autonomous intelligent machines, and this shift entails bias and ethical concerns. Such computational and cognitive bias manifests as explicit preconceived ideas (consciously), assumptions or stereotypes (unconsciously), and skewed data insights for particular segments of the population (inadvertently). The objective of the remaining discussion is an analytical investigation of these effects for future healthcare informatics.
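The "skewed data insights" the abstract refers to can be made concrete with a small illustration. The sketch below is hypothetical and not drawn from the chapter: it uses synthetic data, Python with NumPy and scikit-learn, and invented names (the simulate helper, the subgroup offsets) purely for illustration. It shows how a classifier trained on data dominated by one patient subgroup can report good accuracy for that subgroup while systematically underperforming on an under-represented subgroup whose biomarker-outcome relationship differs.

```python
# Hypothetical sketch (not from the chapter): a model trained on data skewed
# toward one patient subgroup can look accurate overall yet underperform on the
# under-represented subgroup whose biomarker-outcome relationship differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def simulate(n, offset):
    """Toy 'patients': a single biomarker whose diagnostic cut-off differs by subgroup."""
    x = rng.normal(loc=offset, scale=1.0, size=(n, 1))
    y = (x[:, 0] > offset).astype(int)  # each subgroup's true cut-off sits at its own offset
    return x, y

# Skewed training set: 950 patients from subgroup A, only 50 from subgroup B.
x_a, y_a = simulate(950, offset=0.0)
x_b, y_b = simulate(50, offset=1.5)
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# Balanced, per-subgroup evaluation exposes the disparity that pooled accuracy hides.
for name, offset in [("A (well represented)", 0.0), ("B (under-represented)", 1.5)]:
    x_test, y_test = simulate(2000, offset)
    print(f"Subgroup {name}: accuracy = {accuracy_score(y_test, model.predict(x_test)):.2f}")
```

Running this toy typically shows a large accuracy gap between the two subgroups, the kind of disparity that a single pooled accuracy figure conceals; this is the inadvertent form of bias the chapter goes on to examine.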



Author information

Correspondence to G. K. Panda.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Panda, G.K., Sahu, I.K., Sahu, D. (2022). Effect of Computation and Cognitive Bias in Healthcare Intelligence and Pharmacogenomics. In: Tripathy, B.K., Lingras, P., Kar, A.K., Chowdhary, C.L. (eds) Next Generation Healthcare Informatics. Studies in Computational Intelligence, vol 1039. Springer, Singapore. https://doi.org/10.1007/978-981-19-2416-3_4
