
Acquiring Weak Annotations for Tumor Localization in Temporal and Volumetric Data

  • Research Article
  • Published in Machine Intelligence Research

Abstract

Creating large-scale, well-annotated datasets to train AI algorithms is crucial for automated tumor detection and localization. However, with limited resources, it is challenging to determine the best type of annotation when labeling massive amounts of unlabeled data. To address this issue, we focus on polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans; both applications require significant effort and time for pixel-wise annotation due to the high-dimensional nature of the data, which involves either temporal or spatial dimensions. In this paper, we develop a new annotation strategy, termed Drag&Drop, which reduces the annotation process to a drag-and-drop gesture. This annotation strategy is more efficient, particularly for temporal and volumetric imaging, than other types of weak annotation, such as per-pixel masks, bounding boxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop annotations, we develop a novel weakly supervised learning method based on the watershed algorithm. Experimental results show that our method achieves better detection and localization performance than alternative weak annotations and, more importantly, performs similarly to models trained on detailed per-pixel annotations. Interestingly, we find that, with limited resources, allocating weak annotations across a diverse patient population yields models more robust to unseen images than allocating per-pixel annotations to a small set of images. In summary, this research proposes an efficient annotation strategy for tumor detection and localization that is less accurate than per-pixel annotation but useful for creating large-scale datasets for screening tumors in various medical modalities. Project Page: https://github.com/johnson111788/Drag-Drop
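
For readers who want a concrete picture of how a Drag&Drop-style annotation could be converted into training supervision, the sketch below is a minimal illustration, not the authors' released pipeline (see the project page above for that). It assumes the annotation can be reduced to a drag start point lying inside the tumor and a drop point giving a rough extent, and it uses the classical watershed transform from scikit-image to grow a pseudo-mask from those two points; the function name dragdrop_to_pseudomask, the 2x-radius background band, and the toy blob in the usage example are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' released code): expand a
# Drag&Drop-style annotation into a binary pseudo-mask with the watershed
# algorithm. Requires numpy and scikit-image.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed


def dragdrop_to_pseudomask(image, start_yx, drop_yx):
    """Grow a pseudo-mask from a drag start point and a drop point.

    image    : 2-D grayscale frame or CT slice as a float array.
    start_yx : (row, col) where the drag started, assumed inside the tumor.
    drop_yx  : (row, col) where the drop ended, giving a rough radius.
    """
    # Gradient magnitude serves as the topographic surface for watershed.
    elevation = sobel(image)

    # Marker image: label 1 seeds the tumor, label 2 seeds the background
    # well outside the radius implied by the drag-and-drop gesture.
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[start_yx] = 1

    radius = float(np.linalg.norm(np.subtract(drop_yx, start_yx)))
    rows, cols = np.indices(image.shape)
    dist = np.hypot(rows - start_yx[0], cols - start_yx[1])
    markers[dist > 2.0 * radius] = 2  # confidently background (heuristic)

    # Flood the elevation map from both marker sets; label 1 is the mask.
    labels = watershed(elevation, markers)
    return labels == 1


if __name__ == "__main__":
    # Toy usage: a bright Gaussian blob stands in for a tumor.
    frame = np.zeros((128, 128))
    rr, cc = np.indices(frame.shape)
    frame += np.exp(-((rr - 64) ** 2 + (cc - 64) ** 2) / (2 * 12 ** 2))
    mask = dragdrop_to_pseudomask(frame, start_yx=(64, 64), drop_yx=(64, 80))
    print("pseudo-mask pixels:", int(mask.sum()))
```

In a weakly supervised setting, pseudo-masks of this kind (or the watershed regions themselves) could then supervise a standard segmentation network in place of per-pixel labels; the paper's actual formulation may differ in how the watershed output is weighted or refined.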



Acknowledgements

This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the Patrick J. McGovern Foundation Award.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Zongwei Zhou.

Ethics declarations

Deng-Ping Fan is an Associate Editor for Machine Intelligence Research and was not involved in the editorial review or the decision to publish this article. All authors declare that there are no other competing interests.

Additional information

Colored figures are available in the online version at https://link.springer.com/journal/11633

Yu-Cheng Chou received the B. Sc. degree in software engineering at the School of Computer Science, Wuhan University, China in 2022. He is currently a Ph.D. degree candidate at Johns Hopkins University, supervised by Zongwei Zhou and Prof. Alan Yuille.

His research interests include medical imaging, causality, and computer vision. His research focuses on developing novel methodologies to accurately detect lesions and exploring explainability through causality for computer-aided diagnosis and surgery.

Bowen Li received the B. Sc. degree in automation at Tsinghua University, China in 2013, and the M. Sc. degree in computer science at Johns Hopkins University, USA in 2019. He is currently a Ph.D. degree candidate at Johns Hopkins University, supervised by Zongwei Zhou and Prof. Alan Yuille.

His research interest is computer-aided diagnosis with ultrasound and CT.

Deng-Ping Fan received the Ph.D. degree from Nankai University, China in 2019. He joined the Inception Institute of Artificial Intelligence (IIAI), UAE in 2019. He has published about 50 papers in top journals and conferences such as TPAMI, IJCV, TIP, TNNLS, TMI, CVPR, ICCV, ECCV, and IJCAI. He won the Best Paper Finalist Award at IEEE CVPR 2019 and the Best Paper Award Nominee at IEEE CVPR 2020. He was recognized as a CVPR 2019 outstanding reviewer with a special mention award, a CVPR 2020 outstanding reviewer, an ECCV 2020 high-quality reviewer, and a CVPR 2021 outstanding reviewer. He served as a program committee board (PCB) member of IJCAI 2022–2024, a senior program committee (SPC) member of IJCAI 2021, a program committee (PC) member of CAD&CG 2021, a committee member of the China Society of Image and Graphics (CSIG), an area chair of the NeurIPS 2021 Datasets and Benchmarks Track, and an area chair of a MICCAI 2020 Workshop.

His research interests include computer vision, deep learning, and saliency detection.

Alan Yuille received the B. Sc. degree in mathematics from the University of Cambridge, UK in 1976, and the Ph.D. degree in theoretical physics, supervised by Prof. S. W. Hawking, in 1981. He was a research scientist in the Artificial Intelligence Laboratory at MIT and the Division of Applied Sciences at Harvard University, USA from 1982 to 1988, and served as an assistant and associate professor at Harvard until 1996. He was a senior research scientist at the Smith-Kettlewell Eye Research Institute, USA from 1996 to 2002. He was then a full professor of statistics at the University of California, Los Angeles, USA, with joint appointments in computer science, psychiatry, and psychology. He moved to Johns Hopkins University, USA in January 2016.

His research interests include computational models of vision, mathematical models of cognition, medical image analysis, artificial intelligence, and neural networks.

Zongwei Zhou received the Ph.D. degree in biomedical informatics from Arizona State University, USA in 2021. He is currently a postdoctoral researcher at Johns Hopkins University, USA. He received the AMIA Doctoral Dissertation Award in 2022, the Elsevier-MedIA Best Paper Award in 2020, and the MICCAI Young Scientist Award in 2019. In addition to five U.S. patents, he has published over 30 peer-reviewed journal and conference articles, two of which have been ranked among the most popular articles in IEEE TMI and as the highest-cited article in EJNMMI Research. He was named among the top 2% of scientists in the list released by Stanford University in 2022 and 2023.

His research interest is developing novel methods to reduce the annotation efforts for computer-aided detection and diagnosis.


About this article


Cite this article

Chou, YC., Li, B., Fan, DP. et al. Acquiring Weak Annotations for Tumor Localization in Temporal and Volumetric Data. Mach. Intell. Res. 21, 318–330 (2024). https://doi.org/10.1007/s11633-023-1380-5

