Browsing by Author "Zhang, Heye"
Now showing 1 - 3 of 3
Item: Multi-level multi-type self-generated knowledge fusion for cardiac ultrasound segmentation (2023-04)
Authors: Yu, Chengjin; Li, Shuang; Ghista, Dhanjoo; Gao, Zhifan; Zhang, Heye; Del Ser, Javier; Xu, Lin
Most existing works on cardiac echocardiography segmentation require a large number of ground-truth labels to train a neural network appropriately; this, however, is time-consuming and laborious for physicians. Self-supervised learning is one potential solution to this challenge, as it deeply exploits the raw data. However, existing works mainly exploit a single type or level of pretext task. In this work, we propose the fusion of multi-level and multi-type self-generated knowledge. We obtain multi-level information on sub-anatomical structures in ultrasound images via a superpixel method. Subsequently, we fuse the various types of information generated through multiple types of pretext tasks. Finally, we transfer the learned knowledge to our downstream task. In our experimental studies, we demonstrate the effectiveness of this method on the cardiac ultrasound segmentation task. The results show that the performance of our proposed method for echocardiography segmentation matches that of fully supervised methods without requiring a large amount of labeled data.

Item: Multi-task learning with Multi-view Weighted Fusion Attention for artery-specific calcification analysis (2021-07)
Authors: Zhang, Weiwei; Yang, Guang; Zhang, Nan; Xu, Lei; Wang, Xiaoqing; Zhang, Yanping; Zhang, Heye; Del Ser, Javier; de Albuquerque, Victor Hugo C.
In general, artery-specific calcification analysis comprises simultaneous calcification segmentation and quantification tasks. It can provide a thorough assessment of the calcification of different coronary arteries, and thereby allow an efficient and rapid diagnosis of cardiovascular diseases (CVD).
However, as a high-dimensional multi-type estimation problem, artery-specific calcification analysis has not been investigated in depth, owing to the difficulty of obtaining discriminative feature representations. In this work, we propose a Multi-task learning network with Multi-view Weighted Fusion Attention (MMWFAnet) to solve this challenging problem. MMWFAnet first employs a Multi-view Weighted Fusion Attention (MWFA) module to extract discriminative feature representations by enhancing the collaboration of multiple views. Specifically, MWFA weights these views to improve multi-view learning of calcification features. Based on the fusion of these views, the proposed approach exploits multi-task learning to obtain accurate segmentation and quantification of artery-specific calcification simultaneously. We performed experimental studies on 676 non-contrast computed tomography scans, achieving state-of-the-art performance across multiple evaluation metrics. These compelling results show that the proposed MMWFAnet can improve the effectiveness and efficiency of clinical CVD diagnosis.

Item: Vessel-GAN: Angiographic reconstructions from myocardial CT perfusion with explainable generative adversarial networks (2022-05)
Authors: Wu, Chulin; Zhang, Heye; Chen, Jiaqi; Gao, Zhifan; Zhang, Pengfei; Muhammad, Khan; Del Ser, Javier
Dynamic CT angiography derived from CT perfusion data can obviate a separate coronary CT angiography, sparing the patient additional ionizing radiation and contrast agent and thereby enhancing patient safety. However, in many studies the image quality of dynamic CT angiography is inferior to that of standard CT angiography. This paper proposes an explainable generative adversarial network named vessel-GAN, which resorts to explainable, knowledge-based artificial intelligence to perform image translation with increased trustworthiness.
Specifically, we design a loss term that better learns the representations of blood vessels in CT angiography images. This expert-knowledge-based loss term guides the generator to focus its training on the important features predicted by the discriminator. Additionally, we propose a generator architecture that effectively fuses spatio-temporal representations and further enhances temporal consistency, thereby improving the quality of the generated CT angiography images. The experiments were conducted on a dataset of 232 patients with suspected coronary artery stenosis. The results show that vessel-GAN achieves a PSNR of 28.32 dB, an SSIM of 0.91, and an MAE of 47.36. To validate the effectiveness of the proposed synthesis method, we compare it with other image translation frameworks and GAN-based methods. Compared to other image translation methods, vessel-GAN generates more clearly visible blood vessels from the source perfusion images, and the CTA images it generates are closer to real CTA owing to the use of adversarial learning. Compared with other GAN-based methods, vessel-GAN produces sharper and more homogeneous outputs, including realistic vascular structures. The experiments demonstrate that the explainable generative adversarial network performs better because it offers finer control over how the model learns. Overall, the CT angiography images generated by vessel-GAN could potentially replace a separate standard CT angiography, opening the possibility of a "one-stop" cardiac examination for high-risk coronary artery disease patients who need assessment of myocardial ischemia.
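The vessel-focused loss and the reported image-quality metrics can be illustrated with a small numerical sketch. The abstract does not give the paper's exact formulation, so the weighting scheme below (a binary vessel mask scaling an extra L1 term, with a hypothetical weight `lam`) and the PSNR/MAE definitions are illustrative assumptions, not vessel-GAN's actual implementation.

```python
import numpy as np

def vessel_weighted_l1(fake, real, vessel_mask, lam=10.0):
    # Hypothetical vessel-focused loss: a plain L1 term plus an extra
    # penalty restricted to pixels inside the (assumed binary) vessel mask.
    l1 = np.abs(fake - real).mean()
    vessel_l1 = (np.abs(fake - real) * vessel_mask).sum() / max(vessel_mask.sum(), 1.0)
    return l1 + lam * vessel_l1

def psnr(fake, real, max_val=1.0):
    # Standard peak signal-to-noise ratio in dB.
    mse = np.mean((fake - real) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def mae(fake, real):
    # Mean absolute error between generated and reference images.
    return np.abs(fake - real).mean()

# Toy data standing in for a real/generated CTA pair (hypothetical).
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = real + 0.01 * rng.standard_normal((64, 64))
mask = (real > 0.8).astype(float)  # crude stand-in for a vessel mask

print(f"loss={vessel_weighted_l1(fake, real, mask):.4f}  "
      f"PSNR={psnr(fake, real):.2f} dB  MAE={mae(fake, real):.4f}")
```

Because the extra term is non-negative, the vessel-weighted loss is always at least the plain L1 loss; increasing `lam` shifts the generator's attention toward vascular regions, which is the intuition behind the paper's expert-knowledge guidance.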