Browsing by Keyword "Convolutional neural network"
Now showing 1 - 10 of 10
Item: Analysis of Few-Shot Techniques for Fungal Plant Disease Classification and Evaluation of Clustering Capabilities Over Real Datasets (2022-03-07)
Authors: Egusquiza, Itziar; Picon, Artzai; Irusta, Unai; Bereciartua-Perez, Arantza; Eggers, Till; Klukas, Christian; Aramendi, Elisabete; Navarra-Mestre, Ramon; COMPUTER_VISION

Abstract: Plant fungal diseases are one of the most important causes of crop yield losses. Plant disease identification algorithms have therefore been seen as a useful tool for detecting diseases at early stages to mitigate their effects. Although deep-learning-based algorithms can achieve high detection accuracies, they require large, manually annotated image datasets, which are not always available, especially for rare and new diseases. This study focuses on the development of a plant disease detection algorithm and strategy requiring few plant images (few-shot learning). We extend previous work by using a novel, challenging dataset containing more than 100,000 images. This dataset includes images of leaves, panicles and stems of five different crops (barley, corn, rape seed, rice and wheat) for a total of 17 different diseases, where each disease is shown at different stages. We propose a deep metric learning based method that extracts latent space representations from plant diseases with just a few images, by means of a Siamese network and a triplet loss function. This improves on previous methods, which require a support dataset containing a large number of annotated images to perform metric learning and few-shot classification. The proposed method was compared against a traditional network trained with the cross-entropy loss function, and exhaustive experiments were performed to validate and measure the benefits of metric learning techniques over classical methods. The results show that the features extracted by the metric learning approach present better discriminative and clustering properties. Davies-Bouldin index and Silhouette score values show that the triplet loss network improves the clustering properties with respect to the categorical cross-entropy loss: overall, the triplet loss approach improves the DB index by 22.7% and the Silhouette score by 166.7% compared to the categorical cross-entropy model. Moreover, the F-score obtained with the Siamese network and triplet loss outperforms classical approaches when there are few training images, with a 6% improvement in the mean F-score. Siamese networks with triplet loss improve the ability to learn different plant diseases from few images of each class; these metric learning networks improve clustering and classification results over traditional categorical cross-entropy networks for plant disease identification.
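The abstract above describes metric learning with a Siamese network and a triplet loss. As a rough illustration of that training objective, not the authors' actual implementation, a minimal PyTorch sketch might look as follows; the ResNet-18 backbone, 128-dimensional embedding and 0.2 margin are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EmbeddingNet(nn.Module):
    """Shared backbone mapping disease images into a metric space.

    The ResNet-18 backbone and 128-d embedding are illustrative
    assumptions; the abstract does not specify them.
    """
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x):
        z = self.backbone(x)
        return nn.functional.normalize(z, dim=1)  # unit-norm embeddings

# Triplet loss: pull anchor/positive (same disease) together,
# push anchor/negative (different disease) apart by a margin.
triplet_loss = nn.TripletMarginLoss(margin=0.2)

net = EmbeddingNet()
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
```

At inference time, few-shot classification would assign a query image to the disease whose few support embeddings lie closest in this learned space.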
Item: Analysis on the characterization of multiphoton microscopy images for malignant neoplastic colon lesion detection under deep learning methods (2021-01-01)
Authors: Terradillos, Elena; Saratxaga, Cristina L.; Mattana, Sara; Cicchi, Riccardo; Pavone, Francesco S.; Andraka, Nagore; Glover, Benjamin J.; Arbide, Nagore; Velasco, Jacques; Etxezarraga, Mª Carmen; Picon, Artzai; VISUAL

Abstract: Colorectal cancer has a high incidence rate worldwide, with over 1.8 million new cases and 880,792 deaths in 2018. Fortunately, early detection significantly increases the survival rate, reaching a cure rate of 90% when the disease is diagnosed at a localized stage. Colonoscopy is the gold-standard technique for the detection and removal of colorectal lesions with the potential to evolve into cancer. When polyps are found in a patient, the current procedure is their complete removal. However, in this process gastroenterologists cannot guarantee complete resection with clean margins, which can only be confirmed by histopathological analysis of the removed tissue, performed in the laboratory. Aims: In this paper, we demonstrate the capability of multiphoton microscopy (MPM) technology to provide imaging biomarkers that can be extracted by deep learning techniques to identify malignant neoplastic colon lesions and distinguish them from healthy, hyperplastic or benign neoplastic tissue, without the need for histopathological staining. Materials and Methods: To this end, we present a novel public MPM dataset containing 14,712 images obtained from 42 patients and grouped into 2 classes. A convolutional neural network is trained on this dataset, and a spatially coherent predictions scheme is applied for performance improvement. Results: We obtained a sensitivity of 0.8228 ± 0.1575 and a specificity of 0.9114 ± 0.0814 for detecting malignant neoplastic lesions. We also validated this approach for estimating the self-confidence of the network in its own predictions, obtaining a mean sensitivity of 0.8697 and a mean specificity of 0.9524 with 18.67% of the images classified as uncertain. Conclusions: This work lays the foundations for performing in vivo optical colon biopsies by combining this novel imaging technology with deep learning algorithms, hence avoiding unnecessary polyp resection and allowing in situ diagnostic assessment.
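The abstract does not detail the "spatially coherent predictions scheme". One plausible reading, offered purely as an assumption, is to smooth per-patch class probabilities over a spatial neighborhood before thresholding, so that isolated noisy predictions are damped by their neighbors; a minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_coherent_predictions(prob_map, size=3, threshold=0.5):
    """Smooth per-patch malignancy probabilities over a spatial
    neighborhood before thresholding.

    prob_map: 2-D array of CNN probabilities, one per image patch,
    arranged on the acquisition grid. The 3x3 mean filter and the
    0.5 threshold are illustrative assumptions, not the paper's scheme.
    """
    smoothed = uniform_filter(prob_map, size=size, mode="nearest")
    return smoothed >= threshold

# Example: a 4x4 grid of patch probabilities with one noisy outlier.
probs = np.full((4, 4), 0.2)
probs[1, 1] = 0.9  # isolated high score is damped by its neighbors
print(spatially_coherent_predictions(probs))
```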
Item: Autofluorescence image reconstruction and virtual staining for in-vivo optical biopsying (2021-02)
Authors: Picon, Artzai; Medela, Alfonso; Sanchez-Peralta, Luisa F.; Cicchi, Riccardo; Bilbao, Roberto; Alfieri, Domenico; Elola, Andoni; Glover, Ben; Saratxaga, Cristina L.; COMPUTER_VISION; VISUAL

Abstract: Modern photonic technologies are emerging that allow the acquisition of in-vivo endoscopic tissue imaging at a microscopic scale, with characteristics comparable to traditional histological slides and in a label-free modality. This raises the possibility of an 'optical biopsy' to aid clinical decision making. The approach faces barriers to incorporation into clinical practice, including the lack of existing images for training, the unfamiliarity of clinicians with the novel image domains, and the uncertainty of trusting 'black-box' machine-learned image analysis, where the decision making remains inscrutable. In this paper, we propose a new method to transform images from novel photonics techniques (e.g. autofluorescence microscopy) into already established domains such as Hematoxylin-Eosin (H-E) microscopy through virtual reconstruction and staining. We introduce three main innovations: 1) a transformation method based on a Siamese structure that simultaneously learns the direct and inverse transformations, ensuring domain back-transformation quality of the transformed data; 2) an embedding loss term that ensures similarity not only at the pixel level but also at the image embedding description level, which drastically reduces the perception-distortion trade-off problem present in common domain transfer based on generative adversarial networks (these virtually stained images can serve as reference standard images for comparison with the already known H-E images); and 3) an uncertainty margin concept that allows the network to measure its own confidence. We demonstrate that the reconstructed and virtually stained images can be used with previously studied classification models of H-E images that have been computationally degraded and de-stained. The three proposed methods can be seamlessly incorporated into any existing architecture. We obtained balanced accuracies of 0.95 and negative predictive values of 1.00 over the reconstructed and virtually stained image set for the detection of colorectal tumoral tissue. This is of great importance, as it reduces the need for the extensive labeled training datasets that are normally unavailable in the early studies of a new imaging technology.

Item: Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions (2019-12)
Authors: Picon, Artzai; Seitz, Maximiliam; Alvarez-Gila, Aitor; Mohnke, Patrick; Ortiz-Barredo, Amaia; Echazarra, Jone; Tecnalia Research & Innovation; COMPUTER_VISION; VISUAL

Abstract: Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for the assessment of plant visual symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models, each specific to one crop, or (2) generating a single multi-crop model, a much more complex task (especially at early disease stages) but one that benefits from the variability of the entire multi-crop image dataset to enrich image feature learning. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone in real field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rape-seed), where several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 when generating the smaller crop-specific models and BAC=0.93 when generating a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based convolutional neural network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of their misclassifications.
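The best-performing variant above injects crop metadata by concatenation at the embedding vector level. A minimal PyTorch sketch of that idea follows; the ResNet-50 backbone, layer sizes and one-hot crop encoding are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CropConditionalCNN(nn.Module):
    """Disease classifier conditioned on crop identity by concatenating
    a one-hot crop vector to the image embedding. Backbone and layer
    sizes are illustrative assumptions.
    """
    def __init__(self, n_crops=5, n_diseases=17, hidden_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()              # keep the 2048-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048 + n_crops, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_diseases),
        )

    def forward(self, image, crop_onehot):
        z = self.backbone(image)                 # image embedding
        z = torch.cat([z, crop_onehot], dim=1)   # inject crop metadata
        return self.head(z)

model = CropConditionalCNN()
images = torch.randn(4, 3, 224, 224)
crops = nn.functional.one_hot(torch.tensor([0, 2, 4, 1]), num_classes=5).float()
logits = model(images, crops)                    # (4, 17) disease scores
```

Conditioning after the convolutional backbone lets every crop share the same learned visual features while the classifier head specializes per crop.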
Item: Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation (2022-01)
Authors: Picon, Artzai; Bereciartua-Perez, Arantza; Eguskiza, Itziar; Romero-Rodriguez, Javier; Jimenez-Ruiz, Carlos Javier; Eggers, Till; Klukas, Christian; Navarra-Mestre, Ramon; COMPUTER_VISION

Abstract: Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health, weed presence and phenological state, among others. Traditionally, models based on the normalized difference vegetation index (NDVI), the near-infrared channel (NIR) or RGB have been good indicators of vegetation presence. However, these methods are not suitable for accurately segmenting vegetation showing damage, which precludes their use in downstream phenotyping algorithms. In this paper, we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation. The method consists of, first, a regression convolutional neural network that estimates a virtual NIR channel from an RGB image. Second, we compute two newly proposed vegetation indices from this estimated virtual NIR: the infrared-dark channel subtraction (IDCS) and infrared-dark channel ratio (IDCR) indices. Finally, both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model that segments vegetation regardless of damage or condition. The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage, acquired over 28 days. The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel (F1=0.94) and with the proposed IDCR and IDCS vegetation indices (F1=0.95) derived from the estimated NIR channel, while using only the image or RGB indices leads to inferior performance (RGB: F1=0.90; NIR: F1=0.82; NDVI: F1=0.89). The proposed method provides an end-to-end land cover map segmentation directly from simple RGB images and has been successfully validated in real field conditions.
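The exact IDCS and IDCR formulas are defined in the paper, not in this abstract. The sketch below shows one plausible reading, a subtraction and a normalized ratio between the estimated virtual NIR channel and the RGB dark channel (per-pixel minimum over R, G and B); treat the formulas themselves as assumptions.

```python
import numpy as np

def vegetation_indices(rgb, nir, eps=1e-6):
    """Compute illustrative infrared-dark channel indices from an RGB
    image and an estimated virtual NIR channel.

    The dark channel (per-pixel minimum over R, G, B) and the exact
    IDCS/IDCR forms below are assumptions made for illustration; the
    paper defines the actual indices.
    """
    dark = rgb.min(axis=-1)                     # dark channel of the RGB image
    idcs = nir - dark                           # infrared-dark channel subtraction
    idcr = (nir - dark) / (nir + dark + eps)    # infrared-dark channel ratio
    return idcs, idcr

rgb = np.random.rand(256, 256, 3).astype(np.float32)
nir = np.random.rand(256, 256).astype(np.float32)   # virtual NIR from the regression CNN
idcs, idcr = vegetation_indices(rgb, nir)
# Stack RGB + virtual NIR + indices as a 6-channel segmentation input.
segmentation_input = np.dstack([rgb, nir, idcs, idcr])
```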
Item: Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild (2019-06)
Authors: Picon, Artzai; Alvarez-Gila, Aitor; Seitz, Maximiliam; Ortiz-Barredo, Amaia; Echazarra, Jone; Johannes, Alexander; Tecnalia Research & Innovation; COMPUTER_VISION; VISUAL

Abstract: Fungal infection represents up to 50% of yield losses, making it necessary to apply effective and cost-efficient fungicide treatments, whose efficacy depends on infestation type, situation and time. In these cases, correct and early identification of the specific infection is mandatory to minimize yield losses and increase the efficacy and efficiency of the treatments. Over recent years, a number of image-analysis-based methodologies have been proposed for automatic disease identification. Among these methods, the use of Deep Convolutional Neural Networks (CNNs) has proven tremendously successful for different visual classification tasks. In this work we extend previous work by Johannes et al. (2017) with an adapted Deep Residual Neural Network-based algorithm to deal with the detection of multiple plant diseases in real acquisition conditions, proposing different adaptations for early disease detection. The work analyses the performance of early identification of three relevant European endemic wheat diseases: Septoria (Septoria tritici), Tan Spot (Drechslera tritici-repentis) and Rust (Puccinia striiformis & Puccinia recondita).

Item: Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets (2022-03)
Authors: Picon, Artzai; San-Emeterio, Miguel G.; Bereciartua-Perez, Arantza; Klukas, Christian; Eggers, Till; Navarra-Mestre, Ramon; COMPUTER_VISION; Tecnalia Research & Innovation

Abstract: Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools that automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of manually annotated real field images; the second is composed of images of plots where only one species is present at a time; and the third was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we propose a semantic segmentation architecture that extends a PSPNet architecture with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32; this performance increases when it is combined with the single-species dataset (DSC=47.97) or the synthetic dataset (DSC=45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC=45.96) compared to the full architecture (DSC=47.97). The proposed method shows better performance than the current state of the art. In addition, the proposed single-species and synthetic datasets can nearly double the performance of the algorithm compared with using real datasets alone, without additional manual annotation effort.
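The auxiliary classification loss described above can be illustrated as a simple combined objective: a per-pixel segmentation loss plus a weighted image-level classification loss. The sketch below assumes a 0.4 weight, a multi-label BCE formulation and the tensor shapes shown; none of these come from the abstract.

```python
import torch
import torch.nn as nn

# Combined objective: pixel-level segmentation loss plus an auxiliary
# image-level classification loss to aid convergence. The weighting
# factor and BCE formulation are illustrative assumptions.
seg_criterion = nn.CrossEntropyLoss()
cls_criterion = nn.BCEWithLogitsLoss()   # multi-label: which species appear
aux_weight = 0.4

def total_loss(seg_logits, seg_target, cls_logits, cls_target):
    """seg_logits: (B, C, H, W) per-pixel scores from the PSPNet head.
    cls_logits: (B, C) image-level species-presence scores from the
    auxiliary classification branch.
    """
    return (seg_criterion(seg_logits, seg_target)
            + aux_weight * cls_criterion(cls_logits, cls_target))

B, C, H, W = 2, 8, 64, 64            # 7 species + background (assumed)
seg_logits = torch.randn(B, C, H, W, requires_grad=True)
seg_target = torch.randint(0, C, (B, H, W))
cls_logits = torch.randn(B, C, requires_grad=True)
cls_target = torch.randint(0, 2, (B, C)).float()
total_loss(seg_logits, seg_target, cls_logits, cls_target).backward()
```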
Item: Deep Neural Networks for ECG-Based Pulse Detection during Out-of-Hospital Cardiac Arrest (2019-03-01)
Authors: Elola, Andoni; Aramendi, Elisabete; Irusta, Unai; Picón, Artzai; Alonso, Erik; Owens, Pamela; Idris, Ahamed; COMPUTER_VISION

Abstract: The automatic detection of pulse during out-of-hospital cardiac arrest (OHCA) is necessary for the early recognition of the arrest and for the detection of return of spontaneous circulation (the end of the arrest). The only signal available in every defibrillator that is valid for pulse detection is the electrocardiogram (ECG). In this study we propose two deep neural network (DNN) architectures to detect pulse using short (5 s) ECG segments, i.e., to classify the rhythm as pulseless electrical activity (PEA) or pulse-generating rhythm (PR). A total of 3914 5-s ECG segments, 2372 PR and 1542 PEA, were extracted from 279 OHCA episodes. Data were partitioned patient-wise into training (80%) and test (20%) sets. The first DNN architecture was a fully convolutional neural network; the second added a recurrent layer to learn temporal dependencies. Both architectures were tuned using Bayesian optimization, and the results on the test set were compared to state-of-the-art PR/PEA discrimination algorithms based on machine learning and hand-crafted features. The PR/PEA classifiers were evaluated in terms of sensitivity (Se) for PR, specificity (Sp) for PEA, and balanced accuracy (BAC), the average of Se and Sp. The Se/Sp/BAC figures were 94.1%/92.9%/93.5% for the first architecture and 95.5%/91.6%/93.5% for the second. Both architectures improved on the performance of state-of-the-art methods by more than 1.5 BAC points.
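As a rough sketch of the second architecture described above, convolutional feature extraction over a 5-s ECG segment followed by a recurrent layer, consider the following PyTorch module. All layer sizes and the assumed 250 Hz sampling rate (1250 samples) are illustrative, since the paper tuned its hyperparameters by Bayesian optimization.

```python
import torch
import torch.nn as nn

class ConvRecurrentPulseDetector(nn.Module):
    """Convolutional features over a 5-s ECG segment, then a GRU for
    temporal dependencies, ending in a single PR-vs-PEA logit.

    Filter counts, kernel sizes, GRU width and the 250 Hz sampling
    rate are assumptions, not the paper's tuned configuration.
    """
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 1)   # logit: pulse-generating vs PEA

    def forward(self, ecg):          # ecg: (batch, 1, 1250)
        h = self.conv(ecg)           # (batch, 32, 78)
        h = h.transpose(1, 2)        # (batch, time, features) for the GRU
        _, h_n = self.gru(h)         # final hidden state summarizes the segment
        return self.fc(h_n[-1])      # (batch, 1)

model = ConvRecurrentPulseDetector()
logit = model(torch.randn(4, 1, 1250))
```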
Item: Few Shot Learning in Histopathological Images: Reducing the Need of Labeled Data on Biological Datasets (IEEE, 2019-07-11)
Authors: Medela, Alfonso; Picon, Artzai; Saratxaga, Cristina L.; Belar, Oihana; Cabezon, Virginia; Cicchi, Riccardo; Bilbao, Roberto; Glover, Ben; COMPUTER_VISION; VISUAL

Abstract: Although deep learning pathology diagnostic algorithms are achieving results comparable to human experts in a wide variety of tasks, they still require a huge amount of well-annotated data for training. Generating such extensive, well-labelled datasets is time consuming and not feasible for certain tasks, so most available medical datasets are scarce in images and therefore insufficient for training. In this work we validate that few-shot learning techniques can transfer knowledge from a well-defined source domain of colon tissue into a more generic domain composed of colon, lung and breast tissue using very few training images. Our results show that our few-shot approach obtains a balanced accuracy (BAC) of 90% with just 60 training images, even for the lung and breast tissues that were not present in the training set. This outperforms the fine-tuning transfer learning approach, which obtains 73% BAC with 60 images and requires 600 images to reach 81% BAC.

Item: Insect counting through deep learning-based density maps estimation (2022-06)
Authors: Bereciartua-Pérez, Arantza; Gómez, Laura; Picón, Artzai; Navarra-Mestre, Ramón; Klukas, Christian; Eggers, Till; COMPUTER_VISION

Abstract: Digitalization and automation of assessments in field trials are established practice in farming product development, and image-based methods have provided good results in different applications. Although these models can alleviate some problems, they still perform poorly under real field conditions when used on mobile devices in complex applications. Among these applications, insect counting and detection are necessary for integrated pest management strategies, in order to apply specific treatments at early infection stages, reduce economic losses and minimize chemical usage. Currently, the counting task for assessing the degree of infestation is done manually by the farmer. Current state-of-the-art object counting methods do not provide accurate counts in crowded images with overlapping or touching objects, which is the case for insect counting images, making it necessary to define novel approaches. In this work, we propose a novel solution based on deep learning density map estimation to tackle insect counting in wild conditions. To this end, a fully convolutional regression network was designed to accurately estimate a probabilistic density map for the counting regression problem. The estimated density map is then used for counting whiteflies on eggplant leaves. The proposed method was compared with a baseline based on candidate object selection and classification. Counting live adult whiteflies by means of density map estimation yielded R2 = 0.97 for the insects counted on the main leaf of the image, far outperforming the baseline algorithm (R2 = 0.85), which relied on image-processing-based feature extraction and candidate selection followed by a deep learning-based classifier. The solution was embedded for use on mobile devices and has undergone exhaustive validation tests under diverse illumination conditions and background variability, over leaves taken at different heights and perspectives, and even on unfocused images, for the analyzed pest under real conditions.
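Density-map counting, as described above, regresses a non-negative per-pixel density whose integral over the image is the object count, which is what lets it handle overlapping or touching insects that defeat detect-then-count baselines. A minimal PyTorch sketch follows, with an assumed layer configuration rather than the paper's network.

```python
import torch
import torch.nn as nn

class DensityRegressionNet(nn.Module):
    """Minimal fully convolutional regression network in the spirit of
    the described density-map estimator: it maps a leaf image to a
    non-negative density map whose sum is the insect count.

    The layer configuration is an illustrative assumption; the paper's
    network and training targets (Gaussian-blurred dot annotations are
    the usual choice) are not specified in this abstract.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.ReLU(),   # one density channel, >= 0
        )

    def forward(self, x):
        return self.features(x)

net = DensityRegressionNet()
image = torch.randn(1, 3, 256, 256)
density = net(image)                   # (1, 1, 256, 256) density map
count = density.sum(dim=(1, 2, 3))     # predicted whiteflies per image
```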