Abstract
Healthcare intelligence is derived from human-centric solutions, predictive and analytical, that support diagnosis and treatment based on a patient's information. In an early attempt to embed computational accuracy, MYCIN, a rule-based expert system, was developed in the 1970s to diagnose blood-borne bacterial infections. Pharmacogenomics, the study of individualized medicine and lifesaving treatments, aims to identify how genes affect a patient's response to drugs; it is an emerging field that combines pharmacology (the science of drugs), genomics (the study of genes), and machine intelligence (AI technologies). IBM Watson applies machine learning and natural language processing to advance precision medicine, particularly the diagnosis and treatment of cancer. Although these systems demonstrated promise for accurate diagnosis and treatment, they were not adopted in routine clinical practice. In the past, healthcare decisions were made almost entirely by people; integrating intelligent devices and models into the process raises questions of accountability, transparency, consent, and privacy. As decision-making shifts from exclusively human-centric to semi- or fully autonomous intelligent machines, bias and ethical concerns arise. Such bias manifests in explicit preconceived ideas (consciously), assumptions or stereotypes (unconsciously), and skewed data insights for particular population segments (inadvertently). The objective of this chapter is to investigate these computational and cognitive bias effects analytically for future healthcare informatics.
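The third, inadvertent form of bias noted above, skewed data insights for a particular population segment, can be illustrated with a minimal sketch. The example below is not from the chapter; the groups, proportions, and the deliberately naive majority-label "model" are all hypothetical, chosen only to show how a model dominated by an over-represented group can perform well on that group and poorly on an under-represented one.

```python
import random

random.seed(0)

# Hypothetical cohort: 90% of training records come from group "A".
# Illustrative labels: the condition is rare in group A (label 0 usual)
# but common in group B (label 1 usual).
def make_records(n, group, p_positive):
    return [(group, 1 if random.random() < p_positive else 0) for _ in range(n)]

train = make_records(900, "A", 0.1) + make_records(100, "B", 0.9)

# A deliberately naive "model": always predict the overall majority label,
# ignoring group membership -- a stand-in for any classifier whose fit is
# dominated by the over-represented group.
labels = [y for _, y in train]
majority = max(set(labels), key=labels.count)

def accuracy(records):
    """Fraction of records whose true label matches the majority prediction."""
    return sum(1 for _, y in records if y == majority) / len(records)

test_a = make_records(500, "A", 0.1)
test_b = make_records(500, "B", 0.9)
print(f"majority prediction: {majority}")
print(f"accuracy on group A: {accuracy(test_a):.2f}")
print(f"accuracy on group B: {accuracy(test_b):.2f}")
```

Aggregate accuracy here looks acceptable, yet performance collapses for group B; this is the sense in which a skewed dataset produces inadvertent bias even without any consciously or unconsciously held stereotype.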
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Panda, G.K., Sahu, I.K., Sahu, D. (2022). Effect of Computation and Cognitive Bias in Healthcare Intelligence and Pharmacogenomics. In: Tripathy, B.K., Lingras, P., Kar, A.K., Chowdhary, C.L. (eds) Next Generation Healthcare Informatics. Studies in Computational Intelligence, vol 1039. Springer, Singapore. https://doi.org/10.1007/978-981-19-2416-3_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-2415-6
Online ISBN: 978-981-19-2416-3
eBook Packages: Intelligent Technologies and Robotics