Browsing by Author "Fang, Yingying"
Now showing 1 - 3 of 3
Item: Explainable COVID-19 Infections Identification and Delineation Using Calibrated Pseudo Labels (2023-02-01)
Li, Ming; Fang, Yingying; Tang, Zeyu; Onuorah, Chibudom; Xia, Jun; Ser, Javier Del; Walsh, Simon; Yang, Guang

The upheaval brought by the COVID-19 pandemic has continued to bring fresh challenges over the past two years. During this pandemic, there has been a need for rapid identification of infected patients and precise delineation of infection areas in computed tomography (CT) images. Although deep supervised learning methods have been established quickly, the scarcity of both image-level and pixel-level labels as well as the lack of explainable transparency still hinder the applicability of AI. Can we identify infected patients and delineate the infections with extremely minimal supervision? Semi-supervised learning has demonstrated promising performance under limited labelled data and sufficient unlabelled data. Inspired by semi-supervised learning, we propose a model-agnostic calibrated pseudo-labelling strategy and apply it under a consistency regularization framework to generate explainable identification and delineation results. We demonstrate the effectiveness of our model with the combination of limited labelled data and sufficient unlabelled or weakly-labelled data. Extensive experiments have shown that our model can efficiently utilize limited labelled data and provide explainable classification and segmentation results for decision-making in clinical routine.

Item: Probing perfection: The relentless art of meddling for pulmonary airway segmentation from HRCT via a human-AI collaboration based active learning method (2024-08)
Wang, Shiyi; Nan, Yang; Zhang, Sheng; Felder, Federico; Xing, Xiaodan; Fang, Yingying; Del Ser, Javier; Walsh, Simon L.F.; Yang, Guang

In the realm of pulmonary tracheal segmentation, the scarcity of annotated data stands as a prevalent pain point in most medical segmentation endeavors.
Concurrently, most Deep Learning (DL) methodologies employed in this domain grapple with two further challenges: the inherent opacity of 'black box' models and the ongoing pursuit of performance enhancement. In response to these intertwined challenges, the core concept of our Human-Computer Interaction (HCI) based learning models (RS_UNet, LC_UNet, UUNet and WD_UNet) hinges on the versatile combination of diverse query strategies and an array of deep learning models. We train four HCI models on the initial training dataset and sequentially repeat the following steps 1–4: (1) Query strategy: in each iteration, our proposed HCI model selects the samples that would contribute the most additional representative information when labelled (showing the names and sequence numbers of the samples to be annotated). In this phase, the model selects the unlabelled samples with the greatest predictive disparity by computing the Wasserstein Distance, Least Confidence, Entropy Sampling, or Random Sampling. (2) Central line correction: the samples selected in the previous stage are then used for domain-expert correction of the system-generated tracheal central lines in each training round. (3) Update training dataset: domain experts involved in each epoch of the DL model's training iterations update the training dataset with greater precision after each epoch, thereby enhancing the trustworthiness of the 'black box' DL model and improving model performance. (4) Model training: the proposed HCI model is trained using the updated training dataset and an enhanced version of an existing UNet.
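The uncertainty-based query strategies named in step (1) can be sketched generically. The snippet below is an illustrative implementation of Least Confidence and Entropy Sampling over model softmax outputs, with hypothetical function names; it is not the authors' code:

```python
import numpy as np

def least_confidence(probs):
    """Least-confidence score per sample: 1 - max class probability.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    A higher score means the model is less confident, so the sample is
    more informative to send to the domain expert for annotation.
    """
    return 1.0 - probs.max(axis=1)

def entropy_score(probs, eps=1e-12):
    """Predictive entropy per sample; higher means more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_queries(probs, k, strategy="entropy"):
    """Return indices of the k most uncertain unlabelled samples."""
    scorers = {"entropy": entropy_score, "least_confidence": least_confidence}
    scores = scorers[strategy](probs)
    return np.argsort(scores)[-k:][::-1]  # top-k, most uncertain first
```

In an active learning round, `select_queries` would be called on the model's predictions for the unlabelled pool, and only the returned samples would be passed to the expert for central-line correction.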
Experimental results validate the effectiveness of these Human-Computer Interaction based approaches, demonstrating that our proposed WD-UNet, LC-UNet, UUNet and RS-UNet achieve performance comparable or even superior to state-of-the-art DL models; for example, WD-UNet does so with only 15%–35% of the training data, leading to substantial reductions (65%–85%) in physician annotation time.

Item: Swin transformer for fast MRI (2022-07-07)
Huang, Jiahao; Fang, Yingying; Wu, Yinzhe; Wu, Huanjun; Gao, Zhifan; Li, Yang; Ser, Javier Del; Xia, Jun; Yang, Guang

Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, a long scanning time is required for high-quality MR images, which leads to exhaustion and discomfort of patients and induces more artefacts from both voluntary movements and involuntary physiological movements. To accelerate the scanning process, methods combining k-space undersampling with deep learning based reconstruction have been popularised. This work introduced SwinMR, a novel Swin transformer based method for fast MRI reconstruction. The whole network consisted of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM were 2D convolutional layers, and the FEM was composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. Each RSTB consisted of a series of Swin transformer layers (STLs). Unlike the multi-head self-attention (MSA) of the original transformer, which operates over the whole image space, the (shifted) windows multi-head self-attention (W-MSA/SW-MSA) of the STL is computed within local shifted windows. A novel multi-channel loss using the sensitivity maps was proposed, which was shown to preserve more textures and details.
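The window-based attention of the STL rests on partitioning the feature map into local windows, with a cyclic shift between layers (W-MSA vs. SW-MSA). A minimal numpy sketch of that partitioning step, with hypothetical helper names and not the SwinMR code itself:

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping (win, win, C) windows.

    Self-attention is then computed independently inside each window,
    instead of over the whole H*W image space.
    """
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)

def shifted_windows(x, win):
    """Cyclically shift the map by win//2 before partitioning (SW-MSA).

    The shift lets successive layers exchange information across
    window boundaries that W-MSA alone would never connect.
    """
    shifted = np.roll(x, shift=(-(win // 2), -(win // 2)), axis=(0, 1))
    return window_partition(shifted, win)
```

Alternating plain and shifted partitions across consecutive STLs is what gives windowed attention a growing effective receptive field at much lower cost than global MSA.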
We performed a series of comparative studies and ablation studies on the Calgary-Campinas public brain MR dataset and conducted a downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results demonstrate that our SwinMR achieved high-quality reconstruction compared with other benchmark methods, and it shows strong robustness across different undersampling masks, under noise interference, and on different datasets. The code is publicly available at https://github.com/ayanglab/SwinMR.
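The acceleration problem SwinMR addresses can be simulated in a few lines: acquire only a fraction of k-space and reconstruct from the zero-filled spectrum, which is the aliased baseline a learned reconstructor improves on. A generic sketch (illustrative Cartesian column mask, not the paper's sampling scheme):

```python
import numpy as np

def undersample_kspace(image, accel=4, seed=0):
    """Simulate accelerated MRI: keep ~1/accel of the k-space columns.

    Returns the zero-filled (masked) k-space and the column mask.
    The central (DC) column is always kept, as it carries most energy.
    """
    k = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    n_cols = image.shape[-1]
    mask = np.zeros(n_cols, dtype=bool)
    mask[rng.choice(n_cols, n_cols // accel, replace=False)] = True
    mask[n_cols // 2] = True
    return k * mask[None, :], mask

def zero_filled_recon(k_under):
    """Baseline reconstruction: inverse FFT of the zero-filled k-space."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
```

A reconstruction network such as SwinMR takes `zero_filled_recon(k_under)` (or the undersampled k-space itself) as input and is trained to recover the fully-sampled image.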