Automatic Coronary Anatomy Segmentation and Stenosis Detection from X-ray Coronary Angiography

Papers:

  • YOLO-Angio: An Algorithm for Coronary Anatomy Segmentation

https://arxiv.org/abs/2310.15898

  • StenUNet: Automatic Stenosis Detection from X-ray Coronary Angiography

https://arxiv.org/abs/2310.14961

GitHub:

https://github.com/HuiLin0220/StenUNet

Awards: Two third-place finishes in the Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs (ARCADE) challenge

Team: Hui Lin, Tom Liu, Adrienne Kline, Aggelos K. Katsaggelos

Description: Coronary angiography remains the primary method for diagnosing coronary artery disease (CAD), the leading cause of mortality worldwide. CAD severity is quantified by the location of the lesions, the degree of narrowing (stenosis), and the number of arteries involved. In current practice, this quantification is performed manually by visual inspection and therefore suffers from poor inter- and intra-rater reliability. The MICCAI grand challenge Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs (ARCADE) curated a dataset with stenosis annotations to support the development of automated coronary anatomy segmentation and stenosis detection algorithms. Combining machine learning with other computer vision techniques, we propose two algorithms, YOLO-Angio and StenUNet. Our submissions to the ARCADE challenge placed 3rd among all teams in both tasks.

Figures

Figure 1. Overview of YOLO-Angio. Feature selection is performed to enhance vessel contrast, followed by YOLO-based segmentation using an ensemble model and a logic-based approach to construct the final coronary artery tree.
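To give a concrete sense of the vessel-contrast enhancement step mentioned in Figure 1, the sketch below shows one common way to emphasize thin, dark vessels in an X-ray frame (CLAHE followed by black-hat filtering). It is illustrative only, not the exact YOLO-Angio feature selection described in the paper; the function name and parameter values are assumptions.

    # Illustrative sketch of vessel-contrast enhancement for X-ray angiograms.
    # This is NOT the exact YOLO-Angio feature-selection pipeline; see the paper.
    import cv2
    import numpy as np

    def enhance_vessel_contrast(gray: np.ndarray) -> np.ndarray:
        """Boost visibility of thin, dark vessels in a grayscale angiogram."""
        # Local histogram equalization to compensate for uneven exposure.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        equalized = clahe.apply(gray)

        # Black-hat filtering highlights dark tubular structures (vessels)
        # against the brighter background tissue.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        vessels = cv2.morphologyEx(equalized, cv2.MORPH_BLACKHAT, kernel)

        # Rescale to the full 8-bit range for the downstream detector.
        enhanced = cv2.normalize(vessels, None, 0, 255, cv2.NORM_MINMAX)
        return enhanced.astype(np.uint8)

    # Example usage on a single frame:
    # frame = cv2.imread("angiogram.png", cv2.IMREAD_GRAYSCALE)
    # enhanced = enhance_vessel_contrast(frame)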

Figure 2. The proposed StenUNet for stenosis detection.
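As a rough illustration of the detection stage in Figure 2, the sketch below converts a predicted stenosis probability map into box-level detections via thresholding and connected-component analysis. The actual StenUNet pre- and post-processing are defined in the paper and the GitHub repository; the function and parameter names here are hypothetical.

    # Minimal sketch: turn a stenosis probability map into box detections.
    # The real StenUNet post-processing is described in the paper and repo.
    import numpy as np
    from scipy import ndimage

    def stenosis_detections(prob_map: np.ndarray,
                            threshold: float = 0.5,
                            min_area: int = 20):
        """Return bounding boxes (x_min, y_min, x_max, y_max) of stenosis regions."""
        mask = prob_map >= threshold               # binarize the network output
        labeled, _ = ndimage.label(mask)           # connected components
        boxes = []
        for sl in ndimage.find_objects(labeled):
            ys, xs = sl
            area = (ys.stop - ys.start) * (xs.stop - xs.start)
            if area < min_area:                    # drop tiny spurious blobs
                continue
            boxes.append((xs.start, ys.start, xs.stop - 1, ys.stop - 1))
        return boxes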

Figure 3. 2D visualization of stenosis detection results by StenUNet-pre+post. The orange number in the bottom-right corner of each picture is that image's F1 score. The leftmost two columns show the raw X-ray coronary angiography (XCA) images and their corresponding ground-truth annotations.
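For readers unfamiliar with the per-image F1 scores shown in Figure 3, the sketch below computes an image-level F1 by greedily matching predicted and ground-truth boxes at a fixed IoU threshold. The exact matching rule used in the ARCADE evaluation may differ; all names and the threshold value here are illustrative.

    # Generic illustration of a per-image F1 from matched detections;
    # the official ARCADE matching criterion may differ.
    def iou(a, b):
        """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
        area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
        area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
        return inter / float(area_a + area_b - inter)

    def image_f1(pred_boxes, gt_boxes, iou_thresh=0.5):
        """F1 for one image: greedy one-to-one matching at a fixed IoU threshold."""
        matched_gt, tp = set(), 0
        for p in pred_boxes:
            best, best_iou = None, iou_thresh
            for j, g in enumerate(gt_boxes):
                if j in matched_gt:
                    continue
                score = iou(p, g)
                if score >= best_iou:
                    best, best_iou = j, score
            if best is not None:
                matched_gt.add(best)
                tp += 1
        fp = len(pred_boxes) - tp            # unmatched predictions
        fn = len(gt_boxes) - tp              # missed ground-truth lesions
        if tp == 0:
            return 0.0
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)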