Browsing by Keyword "Deep learning"
Now showing 1 - 20 of 24
Item: 3D Convolutional Neural Networks Initialized from Pretrained 2D Convolutional Neural Networks for Classification of Industrial Parts (Multidisciplinary Digital Publishing Institute (MDPI), 2021-02-04)
Merino, Ibon; Azpiazu, Jon; Remazeilles, Anthony; Sierra, Basilio
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D versions. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D versions. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing the recognition accuracy. The highest accuracy is obtained with EfficientNetB0 using extrusion, with an accuracy of 0.9217, which is comparable to state-of-the-art methods. We also observed that the transfer approach improved the accuracy of the 3D version of Inception ResNet by up to 18% with respect to the 3D approach alone.

Item: An active adaptation strategy for streaming time series classification based on elastic similarity measures (2022-08)
Oregi, Izaskun; Pérez, Aritz; Del Ser, Javier; Lozano, Jose A.; Quantum; IA
In streaming time series classification problems, the goal is to predict the label associated with the most recently received observations over the stream according to a set of categorized reference patterns. In on-line scenarios, data arise from non-stationary processes, which results in a succession of different patterns or events.
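For readers unfamiliar with elastic similarity measures: the dynamic time warping (DTW) distance this strategy builds on can be sketched in a few lines. This is a textbook dynamic-programming formulation, not the authors' code:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences
    (classic O(len(a) * len(b)) dynamic-programming formulation)."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Unlike the Euclidean distance, DTW is defined for sequences of different lengths and can be zero when one is a time-stretched version of the other, which is what makes the measure "elastic".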
This work presents an active adaptation strategy that allows time series classifiers to adapt to the dynamics of streamed time series data. Specifically, our approach consists of a classifier that detects changes between events over streaming time series. For this purpose, the classifier uses features of the dynamic time warping measure computed between the streamed data and a set of reference patterns. When classifying a streaming series, the proposed pattern-end detector analyzes such features to predict changes and adapt off-line time series classifiers to newly arriving events. To evaluate the performance of the proposed scheme, we employ the pattern-end detection model along with dynamic time warping-based nearest neighbor classifiers over a benchmark of ten time series classification problems. The obtained results present exciting insights into the detection accuracy and latency performance of the proposed strategy.

Item: Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB (IEEE, 2017-10)
Alvarez-Gila, Aitor; Van de Weijer, Joost; Garrote, Estibaliz; Tecnalia Research & Innovation; Quantum
Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral-signal mapping. However, most of them treat each sample independently and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem and apply a conditional generative adversarial framework to help capture spatial semantics.
This is the first time Convolutional Neural Networks (and, particularly, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.

Item: Characterization of Optical Coherence Tomography Images for Colon Lesion Differentiation under Deep Learning (2021-04-01)
Saratxaga, Cristina L.; Bote, Jorge; Ortega-Morán, Juan F.; Picón, Artzai; Terradillos, Elena; del Río, Nagore Arbide; Andraka, Nagore; Garrote, Estibaliz; Conde, Olga M.; VISUAL; COMPUTER_VISION; Quantum
(1) Background: Clinicians demand new tools for early diagnosis and improved detection of colon lesions that are vital for patient prognosis. Optical coherence tomography (OCT) allows microscopic inspection of tissue and might serve as an optical biopsy method that could lead to in-situ diagnosis and treatment decisions. (2) Methods: A database of murine (rat) healthy, hyperplastic and neoplastic colonic samples with more than 94,000 images was acquired. A methodology that includes a data augmentation strategy and a deep learning model for automatic classification (benign vs. malignant) of OCT images is presented and validated over this dataset. Comparative evaluation is performed both over individual B-scan images and C-scan volumes. (3) Results: A model was trained and evaluated with the proposed methodology using six different data splits to present statistically significant results. Considering this, 0.9695 (±0.0141) sensitivity and 0.8094 (±0.1524) specificity were obtained when diagnosis was performed over B-scan images.
On the other hand, 0.9821 (±0.0197) sensitivity and 0.7865 (±0.205) specificity were achieved when diagnosis was made considering all the images in the whole C-scan volume. (4) Conclusions: The proposed methodology based on deep learning showed great potential for the automatic characterization of colon polyps and the future development of the optical biopsy paradigm.

Item: Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions (2019-12)
Picon, Artzai; Seitz, Maximiliam; Alvarez-Gila, Aitor; Mohnke, Patrick; Ortiz-Barredo, Amaia; Echazarra, Jone; Tecnalia Research & Innovation; COMPUTER_VISION; VISUAL
Convolutional Neural Networks (CNN) have demonstrated their capabilities in the agronomical field, especially for the assessment of plant visual symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, there exists a dichotomy between (1) generating smaller models for each specific crop, or (2) generating a single multi-crop model, a much more complex task (especially at early disease stages) but with the benefit of exploiting the variability of the entire multi-crop image dataset to enrich image feature learning. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone in real field wild conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rape-seed), where several diseases can be present in the same picture. When applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 when generating the smaller crop-specific models and BAC=0.93 when generating a single multi-crop model.
In this work, we propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based Convolutional Neural Network. This combines the advantages of simultaneously learning from the entire multi-crop dataset while reducing the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.

Item: Data Augmentation for Industrial Prognosis Using Generative Adversarial Networks (Springer, 2020-10-27)
Ortego, Patxi; Diez-Olivan, Alberto; Del Ser, Javier; Sierra, Basilio; Analide, Cesar; Novais, Paulo; Camacho, David; Yin, Hujun; Tecnalia Research & Innovation; IA
The Industry 4.0 revolution allows monitoring and intelligent processing of large amounts of data. When monitoring certain assets, very little data is available for operation under faulty conditions, because the cost of not operating properly is unacceptable and preventive strategies are therefore put in practice. Because machine learning algorithms are data-hungry, synthetic data can be created for these cases. Deep learning techniques have been proven to work very well in such settings. Generative Adversarial Networks (GANs) have been deployed in numerous applications with data augmentation objectives, but less so for balancing one-dimensional series with scarce data. In this paper, a GAN is applied in order to augment data for assets operating under faulty conditions.
The proposed method is validated on a real industrial case, yielding promising results with respect to the case with no strategy for class imbalance whatsoever.

Item: Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation (2022-01)
Picon, Artzai; Bereciartua-Perez, Arantza; Eguskiza, Itziar; Romero-Rodriguez, Javier; Jimenez-Ruiz, Carlos Javier; Eggers, Till; Klukas, Christian; Navarra-Mestre, Ramon; COMPUTER_VISION
Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health, weed presence and phenological state, among others. Traditionally, models based on the normalized difference vegetation index (NDVI), the near-infrared channel (NIR) or RGB have been good indicators of vegetation presence. However, these methods are not suitable for accurately segmenting vegetation showing damage, which precludes their use in downstream phenotyping algorithms. In this paper, we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation. The method consists of, first, a regression convolutional neural network that estimates a virtual NIR channel from an RGB image. Second, we compute two newly proposed vegetation indices from this estimated virtual NIR: the infrared-dark channel subtraction (IDCS) and infrared-dark channel ratio (IDCR) indices. Finally, both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model to segment vegetation regardless of damage or condition. The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage, acquired over 28 days.
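The abstract does not give the IDCS and IDCR formulas, so they are not reproduced here, but the traditional NDVI baseline it refers to is standard and can be sketched per pixel as follows (the `eps` guard against division by zero is an illustrative addition, not part of the index definition):

```python
def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index for paired per-pixel
    NIR and red reflectance values: NDVI = (NIR - R) / (NIR + R)."""
    return [(n - r) / (n + r + eps) for n, r in zip(nir, red)]
```

Healthy vegetation reflects strongly in the near infrared, so vegetated pixels score close to +1 while soil and water score near or below 0; the paper's point is that this signal degrades for damaged vegetation, motivating the learned indices.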
The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel (F1=0.94) and with the proposed IDCR and IDCS vegetation indices (F1=0.95) derived from the estimated NIR channel, while using only the image or RGB indices leads to inferior performance (RGB: F1=0.90; NIR: F1=0.82; NDVI: F1=0.89). The proposed method provides an end-to-end land cover map segmentation directly from simple RGB images and has been successfully validated in real field conditions.

Item: Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild (2019-06)
Picon, Artzai; Alvarez-Gila, Aitor; Seitz, Maximiliam; Ortiz-Barredo, Amaia; Echazarra, Jone; Johannes, Alexander; Tecnalia Research & Innovation; COMPUTER_VISION; VISUAL
Fungal infection represents up to 50% of yield losses, making it necessary to apply effective and cost-efficient fungicide treatments, whose efficacy depends on infestation type, situation and time. In these cases, a correct and early identification of the specific infection is mandatory to minimize yield losses and increase the efficacy and efficiency of the treatments. Over the last years, a number of image analysis-based methodologies have been proposed for automatic disease identification. Among these methods, the use of Deep Convolutional Neural Networks (CNNs) has proven tremendously successful for different visual classification tasks. In this work we extend previous work by Johannes et al. (2017) with an adapted Deep Residual Neural Network-based algorithm to deal with the detection of multiple plant diseases in real acquisition conditions, where different adaptations for early disease detection have been proposed.
This work analyses the performance of early identification of three relevant European endemic wheat diseases: Septoria (Septoria tritici), Tan Spot (Drechslera tritici-repentis) and Rust (Puccinia striiformis & Puccinia recondita).

Item: A deep learning approach to the inversion of borehole resistivity measurements (2020-04-13)
Shahriari, M.; Pardo, D.; Picon, A.; Galdran, A.; Del Ser, J.; Torres-Verdín, C.; COMPUTER_VISION; IA
Borehole resistivity measurements are routinely employed to measure the electrical properties of rocks penetrated by a well and to quantify the hydrocarbon pore volume of a reservoir. Depending on the degree of geometrical complexity, inversion techniques are often used to estimate layer-by-layer electrical properties from measurements. When used for well geosteering purposes, it becomes essential to invert the measurements into layer-by-layer values of electrical resistivity in real time. We explore the possibility of using deep neural networks (DNNs) to perform rapid inversion of borehole resistivity measurements. Accordingly, we construct a DNN that approximates the following inverse problem: given a set of borehole resistivity measurements, the DNN is designed to deliver a physically reliable and data-consistent piecewise one-dimensional layered model of the surrounding subsurface. Once the DNN is constructed, we can invert borehole measurements in real time. We illustrate the performance of the DNN for inverting logging-while-drilling (LWD) measurements acquired in high-angle wells via synthetic examples. Numerical results are promising, although further work is needed to achieve the accuracy and reliability required by petrophysicists and drillers.

Item: Deep learning to find colorectal polyps in colonoscopy: A systematic literature review (2020-08)
Sánchez-Peralta, Luisa F.; Bote-Curiel, Luis; Picón, Artzai; Sánchez-Margallo, Francisco M.; Pagador, J. Blas; COMPUTER_VISION
Colorectal cancer has a great incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for the diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists to increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor for colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works have been retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages for the different categories used; comments on seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. As for detection and localization tasks, the most used metric for reporting is the recall, while Intersection over Union is highly used in segmentation. One of the major concerns is the difficulty of a fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also includes the most convenient metrics to report results.
Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.

Item: Deep Learning-Based Method for Accurate Real-Time Seed Detection in Glass Bottle Manufacturing (2022-11-04)
Bereciartua-Perez, Arantza; Duro, Gorka; Echazarra, Jone; González, Francisco Javier; Serrano, Alberto; Irizar, Liher; COMPUTER_VISION
Glass bottle-manufacturing companies produce bottles of different colors, shapes and sizes. One identified problem is that seeds appear in the bottle, mainly due to the temperature and parameters of the oven. This paper presents a new system capable of detecting seeds of 0.1 mm2 in size in glass bottles as they are being manufactured, 24 h per day and 7 days per week. The bottles move along the conveyor belt at 50 m/min, at a production rate of 250 bottles/min. The proposed method combines deep learning-based artificial intelligence techniques and classical image processing on images acquired with a high-speed line camera. The algorithm comprises three stages. First, the bottle is identified in the input image. Next, an algorithm based on thresholding and morphological operations is applied to this bottle region to locate potential seed candidates. Finally, a deep learning-based model classifies whether the proposed candidates are real seeds or not. This method manages to filter out most false positives due to stains on the glass surface, while no real seeds are lost. The F1 achieved is 0.97.
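The F1 score reported for the seed detector is the harmonic mean of precision and recall; a minimal sketch of how it is computed from detection counts (the function name is illustrative, not the authors'):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive
    and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because F1 penalizes both missed seeds (fn) and false alarms (fp), it matches the stated goal of filtering stains without losing real seeds.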
This method reveals the advantages of deep learning techniques for problems where classical image processing algorithms are not sufficient.

Item: Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets (2022-03)
Picon, Artzai; San-Emeterio, Miguel G.; Bereciartua-Perez, Arantza; Klukas, Christian; Eggers, Till; Navarra-Mestre, Ramon; COMPUTER_VISION; Tecnalia Research & Innovation
Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools that automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we have generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we have proposed a semantic segmentation architecture that extends a PSPNet architecture with an auxiliary classification loss to aid model convergence.
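The Dice-Sørensen coefficient (DSC) used to report the segmentation results can be computed, for binary masks flattened to 0/1 sequences, as follows (a standard definition, not the authors' code):

```python
def dice_sorensen(pred, target):
    """Dice-Sørensen coefficient between two binary masks given as
    flat sequences of 0/1 labels: DSC = 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are treated as a perfect match
    return 2.0 * intersection / total if total else 1.0
```

DSC ranges from 0 (no overlap) to 1 (identical masks) and, unlike plain pixel accuracy, is not dominated by the background class in sparse segmentation masks.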
Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when this dataset is combined with the single-species dataset (DSC=47.97) or the synthetic dataset (DSC=45.20). As for the proposed model, the ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC=45.96) compared to the proposed architecture (DSC=47.97). The proposed method outperforms the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can double the performance of the algorithm compared with using real datasets alone, without additional manual annotation effort.

Item: Deep Neural Networks for ECG-Based Pulse Detection during Out-of-Hospital Cardiac Arrest (2019-03-01)
Elola, Andoni; Aramendi, Elisabete; Irusta, Unai; Picón, Artzai; Alonso, Erik; Owens, Pamela; Idris, Ahamed; COMPUTER_VISION
The automatic detection of pulse during out-of-hospital cardiac arrest (OHCA) is necessary for the early recognition of the arrest and the detection of return of spontaneous circulation (end of the arrest). The only signal available in every defibrillator and valid for the detection of pulse is the electrocardiogram (ECG). In this study we propose two deep neural network (DNN) architectures to detect pulse using short ECG segments (5 s), i.e., to classify the rhythm into pulseless electrical activity (PEA) or pulse-generating rhythm (PR). A total of 3914 5-s ECG segments, 2372 PR and 1542 PEA, were extracted from 279 OHCA episodes. Data were partitioned patient-wise into training (80%) and test (20%) sets.
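A patient-wise partition, as used above, keeps all segments from the same patient in a single set so that test performance is not inflated by patient-specific leakage. A minimal sketch (the function name and the fixed seed are illustrative choices, not the authors'):

```python
import random

def patient_wise_split(segments, train_fraction=0.8, seed=0):
    """Split (patient_id, segment) pairs into train/test sets so that
    no patient contributes segments to both sets."""
    patients = sorted({pid for pid, _ in segments})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_train = int(round(train_fraction * len(patients)))
    train_patients = set(patients[:n_train])
    train = [s for s in segments if s[0] in train_patients]
    test = [s for s in segments if s[0] not in train_patients]
    return train, test
```

Note that the 80/20 ratio applies to patients, not segments, so the segment counts per split may deviate slightly when patients contribute unequal numbers of segments.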
The first DNN architecture was a fully convolutional neural network, and the second architecture added a recurrent layer to learn temporal dependencies. Both DNN architectures were tuned using Bayesian optimization, and the results on the test set were compared to state-of-the-art PR/PEA discrimination algorithms based on machine learning and hand-crafted features. The PR/PEA classifiers were evaluated in terms of sensitivity (Se) for PR, specificity (Sp) for PEA, and balanced accuracy (BAC), the average of Se and Sp. The Se/Sp/BAC of the DNN architectures were 94.1%/92.9%/93.5% for the first one and 95.5%/91.6%/93.5% for the second one. Both architectures improved the performance of state-of-the-art methods by more than 1.5 points in BAC.

Item: Eigenloss: Combined PCA-Based Loss Function for Polyp Segmentation (2020-08)
Sánchez-Peralta, Luisa F.; Picón, Artzai; Antequera-Barroso, Juan Antonio; Ortega-Morán, Juan Francisco; Sánchez-Margallo, Francisco M.; Pagador, J. Blas; COMPUTER_VISION
Colorectal cancer is one of the leading causes of cancer death worldwide, but its early diagnosis greatly improves the survival rate. The success of deep learning has also benefited this clinical field. When training a deep learning model, it is optimized based on the selected loss function. In this work, we consider two networks (U-Net and LinkNet) and two backbones (VGG-16 and DenseNet121). We analyzed the influence of seven loss functions and used principal component analysis (PCA) to determine whether a PCA-based decomposition allows for defining the coefficients of a non-redundant primal loss function that can outperform the individual loss functions and different linear combinations. The eigenloss is defined as a linear combination of the individual losses using the elements of the eigenvector as coefficients.
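The eigenloss idea can be sketched as follows: a leading eigenvector (here obtained by power iteration on a small symmetric matrix, standing in for the PCA decomposition of recorded loss values) supplies the mixing coefficients for the individual losses. The helper and example are illustrative assumptions, not the authors' implementation:

```python
def principal_eigenvector(matrix, iters=200):
    """Leading eigenvector of a small symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def eigenloss(losses, coefficients):
    """Combined loss: linear combination of individual loss values
    weighted by the elements of a (PCA-derived) eigenvector."""
    return sum(c * l for c, l in zip(coefficients, losses))
```

In the paper the matrix would come from a PCA over the seven individual loss functions; here any small symmetric matrix serves to show the mechanics.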
Empirical results show that the proposed eigenloss improves the general performance of the individual loss functions and outperforms other linear combinations when LinkNet is used, showing potential for its application in polyp segmentation problems.

Item: Few-Shot Learning approach for plant disease classification using images taken in the field (2020-08)
Argüeso, David; Picon, Artzai; Irusta, Unai; Medela, Alfonso; San-Emeterio, Miguel G; Bereciartua, Arantza; Alvarez-Gila, Aitor; Tecnalia Research & Innovation; COMPUTER_VISION; VISUAL
Prompt plant disease detection is critical to prevent plagues and to mitigate their effects on crops. The most accurate automatic algorithms for plant disease identification using plant field images are based on deep learning. These methods require the acquisition and annotation of large image datasets, which is frequently technically or economically unfeasible. This study introduces Few-Shot Learning (FSL) algorithms for plant leaf classification using deep learning with small datasets. For the study, 54,303 labeled images from the PlantVillage dataset were used, comprising 38 plant leaf and/or disease types (classes). The data was split into a source (32 classes) and a target (6 classes) domain. The Inception V3 network was fine-tuned in the source domain to learn general plant leaf characteristics. This knowledge was transferred to the target domain to learn new leaf types from few images. FSL using Siamese networks and Triplet loss was used and compared to classical fine-tuning transfer learning. The source and target domain sets were split into a training set (80%) to develop the methods and a test set (20%) to obtain the results. Algorithm performance was evaluated using the total accuracy, and the precision and recall per class. For the FSL experiments the algorithms were trained with different numbers of images per class, and the experiments were repeated 20 times to statistically characterize the results.
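The Triplet loss used in this FSL approach can be sketched as follows. This is the standard formulation with a Euclidean embedding distance; the margin value is an illustrative default, not the paper's setting:

```python
def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pushes the anchor-positive distance below the
    anchor-negative distance by at least `margin`."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)
```

Training a Siamese network with this loss shapes the embedding space so that images of the same class cluster together, which is what lets new classes be recognized from only a handful of examples.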
The accuracy in the source domain was 91.4% (32 classes), with a median precision/recall per class of 93.8%/92.6%. The accuracy in the target domain was 94.0% (6 classes) learning from all the training data, and the median accuracy (90% confidence interval) learning from 1 image per class was 55.5 (46.0–61.7)%. Median accuracies of 80.0 (76.4–86.5)% and 90.0 (86.1–94.2)% were reached for 15 and 80 images per class, yielding a reduction of 89.1% (80 images/class) in the training dataset with only a 4-point loss in accuracy. The FSL method outperformed classical fine-tuning transfer learning, which had accuracies of 18.0 (16.0–24.0)% and 72.0 (68.0–77.3)% for 1 and 80 images per class, respectively. It is possible to learn new plant leaf and disease types with very small datasets using deep learning Siamese networks with Triplet loss, achieving almost a 90% reduction in training data needs and outperforming classical learning techniques for small training sets.

Item: A Generalization Performance Study Using Deep Learning Networks in Embedded Systems (Multidisciplinary Digital Publishing Institute (MDPI), 2021-02-03)
Gorospe, Joseba; Mulero, Rubén; Arbelaitz, Olatz; Muguerza, Javier; Antón, Miguel Ángel
Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act based on them immediately and in situ.
Despite this, the low capacity of embedded systems greatly hinders this integration, so the possibility of integrating them into a wide range of microcontrollers can be a great advantage. This paper contributes the generation of an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein prove that the proposed system is competitive when compared to other commercial systems.

Item: Insect counting through deep learning-based density maps estimation (2022-06)
Bereciartua-Pérez, Arantza; Gómez, Laura; Picón, Artzai; Navarra-Mestre, Ramón; Klukas, Christian; Eggers, Till; COMPUTER_VISION
Digitalization and automation of assessments in field trials are established practice for farming product development. The use of image-based methods has provided good results in different applications. Although these models can alleviate some problems, they still perform poorly under real field conditions on complex applications using mobile devices. Among these applications, insect counting and detection is necessary for integrated pest management strategies in order to apply specific treatments at early infection stages, reducing economic losses and minimizing chemical usage. Currently, the counting task for the assessment of the degree of infestation is done manually by the farmer. Current state-of-the-art object counting methods do not provide accurate counts in crowded images with overlapped or touching objects, which is the case for insect counting images. This makes it necessary to define novel approaches for insect counting. In this work, we propose a novel solution based on deep learning density map estimation to tackle insect counting in wild conditions. To this end, a fully convolutional regression network has been designed to accurately estimate a probabilistic density map for the counting regression problem.
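The density-map formulation can be sketched as follows: a normalised Gaussian is stamped at each annotated insect so that the map integrates to the object count. In the paper such a map is the regression target predicted by the network; here it is built directly from point annotations for illustration, and all parameter choices (sigma, truncation radius) are assumptions:

```python
import math

def density_map(points, height, width, sigma=1.0):
    """Build a density map by stamping a normalised Gaussian kernel at
    each annotated (row, col) point; the map integrates to the count."""
    dmap = [[0.0] * width for _ in range(height)]
    radius = int(3 * sigma)  # truncate the kernel at 3 sigma
    for (py, px) in points:
        kernel = {}
        total = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = py + dy, px + dx
                if 0 <= y < height and 0 <= x < width:
                    w = math.exp(-(dy * dy + dx * dx) / (2 * sigma * sigma))
                    kernel[(y, x)] = w
                    total += w
        # normalise over in-bounds weights so each point contributes 1
        for (y, x), w in kernel.items():
            dmap[y][x] += w / total
    return dmap

def count_from_density(dmap):
    """Estimated count is the integral (sum) of the density map."""
    return sum(sum(row) for row in dmap)
```

Because overlapping insects simply add their Gaussians, the summed map stays a valid count even when objects touch, which is exactly the failure mode of detect-then-count baselines in crowded images.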
The estimated density map is then used for counting whiteflies on eggplant leaves. The proposed method was compared with a baseline based on a candidate object selection and classification approach. The results for live adult whitefly counting by means of density map estimation provided R2 = 0.97 for the counted insects in the main leaf of the image, which outperforms by far the baseline algorithm (R2 = 0.85) based on image processing methods for feature extraction and candidate selection plus a deep learning-based classifier. This solution was embedded for use in mobile devices and has undergone exhaustive validation tests, with diverse illumination conditions and background variability, over leaves taken at different heights, with different perspectives and even unfocused images, for the analyzed pest under real conditions.

Item: MRI Deep Learning-Based Solution for Alzheimer's Disease Prediction (2021-09-09)
Saratxaga, Cristina L.; Moya, Iratxe; Picón, Artzai; Acosta, Marina; Moreno-Fernandez-de-Leceta, Aitor; Garrote, Estibaliz; Bereciartua-Perez, Arantza; VISUAL; COMPUTER_VISION; Quantum
Background: Alzheimer's is a degenerative dementing disorder that starts with mild memory impairment and progresses to a total loss of mental and physical faculties. The sooner the diagnosis is made, the better for the patient, as preventive actions and treatment can be started. Although tests such as the Mini-Mental State Examination are usually used for early identification, diagnosis relies on magnetic resonance imaging (MRI) brain analysis. Methods: Public initiatives such as the OASIS (Open Access Series of Imaging Studies) collection provide neuroimaging datasets openly available for research purposes. In this work, a new method based on deep learning and image processing techniques for MRI-based Alzheimer's diagnosis is proposed and compared with previous literature works.
Results: Our method achieves a balanced accuracy (BAC) of up to 0.93 for image-based automated diagnosis of the disease, and a BAC of 0.88 for establishing the disease stage (healthy tissue, very mild and severe stage). Conclusions: The results obtained surpass the state-of-the-art proposals using the OASIS collection. This demonstrates that deep learning-based strategies are an effective tool for building a robust solution for Alzheimer's-assisted diagnosis based on MRI data.

Item: PICCOLO White-Light and Narrow-Band Imaging Colonoscopic Dataset: A Performance Comparative of Models and Datasets (Multidisciplinary Digital Publishing Institute (MDPI), 2020-11-28)
Sánchez-Peralta, Luisa F.; Pagador, J. Blas; Picón, Artzai; Calderón, Ángel José; Polo, Francisco; Andraka, Nagore; Bilbao, Roberto; Glover, Ben; Saratxaga, Cristina L.; Sánchez-Margallo, Francisco M.
Colorectal cancer is one of the world's leading causes of death. Fortunately, an early diagnosis allows for effective treatment, increasing the survival rate. Deep learning techniques have shown their utility for increasing the adenoma detection rate at colonoscopy, but a dataset is usually required so the model can automatically learn the features that characterize the polyps. In this work, we present the PICCOLO dataset, which comprises 3433 manually annotated images (2131 white-light images and 1302 narrow-band images), originating from 76 lesions in 40 patients, distributed into training (2203), validation (897) and test (333) sets, assuring patient independence between sets. Furthermore, clinical metadata are also provided for each lesion. Four different models, obtained by combining two backbones and two encoder-decoder architectures, are trained with the PICCOLO dataset and two other publicly available datasets for comparison. Results are provided for the test set of each dataset.
Models trained with the PICCOLO dataset have a better generalization capacity, as they perform more uniformly across the test sets of all datasets, rather than obtaining the best results only on their own test set. This dataset is available on the website of the Basque Biobank, so it is expected that it will contribute to the further development of deep learning methods for polyp detection, localisation and classification, which would eventually result in a better and earlier diagnosis of colorectal cancer, hence improving patient outcomes.

Item: A Real Application of an Autonomous Industrial Mobile Manipulator within Industrial Context (2021-05-27)
Outón, Jose Luis; Merino, Ibon; Villaverde, Iván; Ibarguren, Aitor; Herrero, Héctor; Daelman, Paul; Sierra, Basilio; Tecnalia Research & Innovation; ROBOTICA_FLEX; ROBOTICA_AUTOMA
In modern industry there are still a large number of low added-value processes that can be automated or semi-automated with safe cooperation between robots and human operators. The European SHERLOCK project aims to integrate an autonomous industrial mobile manipulator (AIMM) to perform cooperative tasks between a robot and a human. To be able to do this, AIMMs need a variety of advanced cognitive skills such as autonomous navigation, smart perception and task management. In this paper, we report the project's approach to a paradigmatic industrial application combining accurate autonomous navigation with deep learning-based 3D perception for pose estimation to locate and manipulate different industrial objects in an unstructured environment. The proposed method presents a combination of different technologies fused in an AIMM that achieves the proposed objective with a success rate of 83.33% in tests carried out in a real environment.