Browsing by Keyword "Image processing"
Now showing 1 - 8 of 8
Item: Automatic pigmented lesion segmentation through a dermoscopy-guided OCT approach for early diagnosis (2019)
Authors: López Sarachaga, Cristina; Lage, Sergio; Morales, Maria Celia; Boyano, Mª Dolores; Asumendi, Aintzane; Garrote, Estibaliz; Conde, Olga M.
Groups: VISUAL; Quantum
Abstract: Early diagnosis of pigmented lesions, especially melanoma, is an unmet clinical need whose resolution would improve patient prognosis. Apart from histopathological biopsy, the only gold-standard non-invasive imaging technique for diagnosis is dermatoscopy (DD). In recent years, new medical imaging techniques have been developed, and Optical Coherence Tomography (OCT) has proven very helpful in dermatology. OCT is non-invasive and provides in-depth structural microscopic information of the skin in real time. Compared with other novel techniques, such as Reflectance Confocal Microscopy (RCM), its acquisition time is shorter and its field of view larger. Hence, consolidated diagnosis techniques and novel imaging modalities can be combined to improve decision making during diagnosis and treatment. With current methods, delineating lesion margins directly on OCT images during early stages of the disease remains highly challenging and, at the same time, relevant from a prognosis perspective. This work proposes combining DD and OCT images to take advantage of their complementary information. The goal is to guide lesion delineation on OCT images using the clinical features visible in DD images. The developed method applies image processing techniques to the DD image to automatically segment the lesion; then, after a calibration procedure, the DD and OCT images are coregistered. In a final step, the DD segmentation is transferred onto the OCT image. With this strategy of lesion delimitation, histopathological characteristics of the segmented lesion can subsequently be studied on OCT images.
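The final step of this pipeline (transferring the DD segmentation into the OCT frame after coregistration) can be sketched as follows. This assumes the calibration procedure yields a 2x3 affine matrix mapping OCT pixel coordinates back to DD coordinates; the function name and the nearest-neighbour inverse mapping are illustrative, not the authors' implementation:

```python
def transfer_segmentation(dd_mask, affine, oct_shape):
    """Warp a binary DD lesion mask into OCT image coordinates.

    `affine` is a 2x3 matrix [[a, b, tx], [c, d, ty]] mapping an OCT
    pixel (row, col) to its corresponding DD pixel (inverse mapping,
    nearest neighbour). `dd_mask` is a nested list of 0/1 values.
    """
    (a, b, tx), (c, d, ty) = affine
    h_dd, w_dd = len(dd_mask), len(dd_mask[0])
    oct_mask = [[0] * oct_shape[1] for _ in range(oct_shape[0])]
    for i in range(oct_shape[0]):
        for j in range(oct_shape[1]):
            r = round(a * i + b * j + tx)  # source row in the DD image
            s = round(c * i + d * j + ty)  # source column in the DD image
            if 0 <= r < h_dd and 0 <= s < w_dd:
                oct_mask[i][j] = dd_mask[r][s]
    return oct_mask
```

Pixels that fall outside the DD image after the inverse mapping simply stay unlabelled, so the transferred mask never reads out of bounds.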
This proposal can lead to early, real-time, non-invasive diagnosis of pigmented lesions.

Item: Beach carrying capacity management under Covid-19 era on the Basque Coast by means of automated coastal videometry (2021-07-01)
Authors: Epelde, Irati; Liria, Pedro; de Santiago, Iñaki; Garnier, Roland; Uriarte, Adolfo; Picón, Artzai; Galdrán, Adrián; Arteche, Jose Antonio; Lago, Alberto; Corera, Zurik; Puga, Iñaki; Andueza, Jose Luis; Lopez, Gabriel
Groups: COMPUTER_VISION
Abstract: This paper describes the methodology followed to implement social-distancing recommendations in the COVID-19 context along the beaches of the coast of Gipuzkoa (Basque Country, Northern Spain) by means of automated coastal videometry. The coastal videometry network of Gipuzkoa, based on the KostaSystem technology, covers 14 beaches with 12 stations along 50 km of coastline. A beach user detection algorithm based on a machine learning approach has been developed, allowing automatic assessment of beach attendance in real time at regional scale. For each beach, a simple occupancy class (low, medium, high, or full) was estimated as a function of the beach user density (BUD), obtained in real time from the images, and the maximum beach carrying capacity (BCC), estimated from the minimum social distance recommended by the authorities. This information was displayed in real time via a web/mobile app and was simultaneously sent to beach managers who controlled beach access. The results showed strong uptake among beach users (more than 50,000 app downloads) and that real-time information on beach occupation can help in short-term/daily beach management.
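The occupancy classification just described can be sketched as a function of the measured BUD relative to the density implied by the BCC. The 40%/70% cut-offs below are hypothetical, as the abstract does not give the actual thresholds:

```python
def classify_occupancy(beach_user_density, carrying_capacity_density):
    """Map beach user density (BUD) against the density implied by the
    beach carrying capacity (BCC) to one of the four occupancy levels.
    The 0.4/0.7 ratio cut-offs are illustrative, not from the paper.
    """
    ratio = beach_user_density / carrying_capacity_density
    if ratio >= 1.0:
        return "full"
    if ratio >= 0.7:
        return "high"
    if ratio >= 0.4:
        return "medium"
    return "low"
```

A ratio-based scheme like this lets the same classifier serve all 14 beaches, since each beach only differs in its BCC denominator.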
In the longer term, the analysis of this information provides the data needed for beach carrying capacity management and can help the authorities control beach access and determine each beach's maximum capacity.

Item: Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions (2019-12)
Authors: Picon, Artzai; Seitz, Maximiliam; Alvarez-Gila, Aitor; Mohnke, Patrick; Ortiz-Barredo, Amaia; Echazarra, Jone; Tecnalia Research & Innovation
Groups: COMPUTER_VISION; VISUAL
Abstract: Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for the assessment of visual plant symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models for each specific crop, or (2) generating a single multi-crop model, a much harder task (especially at early disease stages) but one that benefits from the variability of the entire multi-crop image dataset to enrich the learning of image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone under real, wild field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rapeseed), where several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC = 0.92 with the smaller crop-specific models and BAC = 0.93 with the single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based Convolutional Neural Network.
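The best-performing of the proposed architectures injects the crop metadata by concatenation at the embedding-vector level. A minimal sketch of that conditioning step, using plain lists and a one-hot crop encoding (the crop list matches the dataset; the function name and embedding size are illustrative):

```python
# The five crops covered by the dataset described above.
CROPS = ["wheat", "barley", "corn", "rice", "rapeseed"]

def crop_conditional_features(image_embedding, crop):
    """Concatenate a one-hot crop encoding onto the CNN image
    embedding; the combined vector would then feed the disease
    classification head. Pure-list sketch of the conditioning idea.
    """
    one_hot = [1.0 if c == crop else 0.0 for c in CROPS]
    return image_embedding + one_hot  # list concatenation
```

Conditioning at the embedding level lets the convolutional backbone stay shared across crops while the classifier head learns crop-specific decision boundaries.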
This combines the advantage of learning from the entire multi-crop dataset with a reduction in the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of their misclassifications.

Item: Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild (2019-06)
Authors: Picon, Artzai; Alvarez-Gila, Aitor; Seitz, Maximiliam; Ortiz-Barredo, Amaia; Echazarra, Jone; Johannes, Alexander; Tecnalia Research & Innovation
Groups: COMPUTER_VISION; VISUAL
Abstract: Fungal infections account for up to 50% of yield losses, making it necessary to apply effective and cost-efficient fungicide treatments, whose efficacy depends on infestation type, situation and timing. Correct and early identification of the specific infection is therefore mandatory to minimise yield losses and increase the efficacy and efficiency of the treatments. Over recent years, a number of image-analysis-based methodologies have been proposed for automatic disease identification. Among these methods, Deep Convolutional Neural Networks (CNNs) have proven tremendously successful in a range of visual classification tasks. In this work we extend previous work by Johannes et al. (2017) with an adapted Deep Residual Neural Network-based algorithm that detects multiple plant diseases under real acquisition conditions, proposing several adaptations for early disease detection.
This work analyses the performance of early identification of three relevant European endemic wheat diseases: Septoria (Septoria tritici), Tan Spot (Drechslera tritici-repentis) and Rust (Puccinia striiformis & Puccinia recondita).

Item: Deep Learning-Based Method for Accurate Real-Time Seed Detection in Glass Bottle Manufacturing (2022-11-04)
Authors: Bereciartua-Perez, Arantza; Duro, Gorka; Echazarra, Jone; González, Francico Javier; Serrano, Alberto; Irizar, Liher
Groups: COMPUTER_VISION
Abstract: Glass bottle-manufacturing companies produce bottles of different colors, shapes and sizes. One identified problem is that seeds appear in the bottle, mainly due to the temperature and parameters of the oven. This paper presents a new system capable of detecting seeds as small as 0.1 mm² in glass bottles as they are being manufactured, 24 hours a day, 7 days a week. The bottles move along the conveyor belt at 50 m/min, at a production rate of 250 bottles/min. The proposed method combines deep learning-based artificial intelligence techniques and classical image processing on images acquired with a high-speed line camera. The algorithm comprises three stages. First, the bottle is identified in the input image. Next, an algorithm based on thresholding and morphological operations is applied to this bottle region to locate potential seed candidates. Finally, a deep learning-based model classifies whether the proposed candidates are real seeds. This method filters out most false positives caused by stains on the glass surface while losing no real seeds. The F1 score achieved is 0.97.
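The second stage of the three-stage seed-detection algorithm (thresholding plus morphological operations to propose candidates) might be approximated, in simplified form, by a threshold followed by connected-component grouping. This pure-Python sketch substitutes flood-fill labelling for the paper's morphological operations and is illustrative only:

```python
def seed_candidates(gray, threshold, min_area=1):
    """Stage-2 sketch: threshold the bottle region and group dark
    pixels into 4-connected components; each surviving component is a
    candidate seed to hand to the CNN classifier (stage 3).
    `gray` is a nested list of grey levels (dark = potential seed).
    """
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for i in range(h):
        for j in range(w):
            if gray[i][j] < threshold and not seen[i][j]:
                stack, blob = [(i, j)], []   # flood-fill one component
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and gray[ny][nx] < threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) >= min_area:    # size filter ≈ morphology
                    candidates.append(blob)
    return candidates
```

The `min_area` filter plays the role that opening/erosion would play in the real pipeline: discarding single-pixel noise before the (comparatively expensive) CNN classification stage.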
This method demonstrates the advantages of deep learning techniques for problems where classical image processing algorithms are not sufficient.

Item: Insect counting through deep learning-based density maps estimation (2022-06)
Authors: Bereciartua-Pérez, Arantza; Gómez, Laura; Picón, Artzai; Navarra-Mestre, Ramón; Klukas, Christian; Eggers, Till
Groups: COMPUTER_VISION
Abstract: Digitalization and automation of assessments in field trials are established practice in farming product development, and image-based methods have produced good results in several applications. Although these models alleviate some problems, they still perform poorly under real field conditions when used on mobile devices for complex applications. Among these applications, insect counting and detection is necessary for integrated pest management strategies, in order to apply specific treatments at early infection stages, reduce economic losses and minimize chemical usage. Currently, the counting task for assessing the degree of infestation is done manually by the farmer. State-of-the-art object counting methods do not count accurately in crowded images with overlapping or touching objects, which is precisely the case for insect counting images; this makes novel approaches necessary. In this work, we propose a solution based on deep learning density map estimation to tackle insect counting in wild conditions. To this end, a fully convolutional regression network was designed to accurately estimate a probabilistic density map for the counting regression problem. The estimated density map is then used to count whiteflies on eggplant leaves. The proposed method was compared with a baseline based on a candidate object selection and classification approach.
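The counting step behind the density-map approach reduces to integrating the estimated map: each insect contributes one unit of probability mass, so overlapping or touching insects are still counted separately. A minimal sketch (the nested list here stands in for the regression network's output):

```python
def count_from_density_map(density_map):
    """Predicted count = integral (sum) of the per-pixel density
    values. Each annotated insect contributes unit mass to the map,
    so the sum recovers the count even when objects overlap.
    """
    return sum(sum(row) for row in density_map)
```

This is why density-map regression sidesteps the failure mode of detection-based counting in crowded images: no per-object bounding box or non-maximum suppression is ever needed.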
Counting alive adult whiteflies by density map estimation achieved R² = 0.97 for the insects counted on the main leaf of the image, far outperforming the baseline algorithm (R² = 0.85), which relies on image processing for feature extraction and candidate selection plus a deep learning-based classifier. The solution was embedded for use on mobile devices and has undergone exhaustive validation tests for the analysed pest under real conditions, with diverse illumination and background variability, over leaves photographed at different heights, from different perspectives, and even in unfocused images.

Item: On the Duality Between Retinex and Image Dehazing (IEEE Computer Society, 2018-12-14)
Authors: Galdran, Adrian; Bria, Alessandro; Alvarez-Gila, Aitor; Vazquez-Corral, Javier; Bertalmío, Marcelo; Tecnalia Research & Innovation
Groups: VISUAL
Abstract: Image dehazing deals with removing the undesired loss of visibility in outdoor images caused by fog. Retinex is a color vision model mimicking the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions, and it has been widely explored in the computer vision literature for image enhancement and related tasks. While these two problems appear unrelated, the goal of this work is to show that they are connected by a simple linear relationship. Specifically, most Retinex-based algorithms share the characteristic feature of always increasing image brightness, which turns them into ideal candidates for effective image dehazing: directly apply Retinex to a hazy image whose intensities have been inverted. In this paper, we give a theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem.
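The claimed duality can be stated compactly: with intensities normalised to [0, 1], a dehazed image is obtained as 1 − Retinex(1 − I). The toy Retinex below (a global illumination discount by the channel maximum) is purely illustrative and far simpler than the implementations evaluated in the paper:

```python
def toy_retinex(channel):
    """A deliberately simple stand-in for a Retinex operator: discount
    the illumination by normalising against the brightest value. Real
    Retinex algorithms are far more sophisticated; this only captures
    the brightness-increasing property the duality relies on.
    """
    peak = max(channel)
    return [v / peak for v in channel]

def dehaze_via_retinex(channel):
    """The duality itself: invert intensities (assumed in [0, 1]),
    apply Retinex, and invert back."""
    inverted = [1.0 - v for v in channel]
    return [1.0 - v for v in toy_retinex(inverted)]
```

Because haze raises intensities and Retinex only ever brightens, applying Retinex to the inverted image pushes the inverted intensities up, which, after inverting back, darkens and stretches the original: exactly the behaviour a dehazer needs.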
Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competitive image dehazing algorithms performing on par with more complex fog removal methods, and can overcome some of the main challenges associated with this problem.

Item: Use of smartphones as optical metrology tools for surface wear detection (2021-03)
Authors: Diamanti, Eleftheria; Iriarte, Eneko; Oblak, Eva; Dominguez-Meister, Santiago; Ibañez, Iñigo; Braceras, Iñigo; Berger, Andreas; Tecnalia Research & Innovation
Groups: INGENIERÍA DE SUPERFICIES
Abstract: Proper wear-level information and early wear detection are crucial in many engineering applications and industrial components, in order to improve efficiency and reduce production, maintenance and replacement costs. Ideally, this should be achieved with a user-friendly, low-cost and easy-to-implement methodology for wear-level monitoring and detection. In this work, we present a new approach to early wear detection, implemented as a stand-alone smartphone device and application providing real-time online metrology. The online monitoring relies on optical measurements and image processing built on the advanced smartphone vision systems available in current commercial devices. The developed mobile app works in continuous mode without interrupting the wear process. Specifically, it traces surface changes and monitors the progression of wear, enabling just-in-time warning alarms for "significant wear" and "critical wear" detection. We demonstrate that critical wear of a surface prior to fatal rupture can be detected, which is the main objective in many industrial applications.
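The two-level alarm scheme ("significant wear" / "critical wear") can be sketched as a threshold comparison on a measured wear level. The thresholds and function name below are hypothetical, as the abstract does not specify them; in practice they would be calibrated per component and coating:

```python
def wear_alarm(wear_fraction, significant=0.5, critical=0.8):
    """Map a measured wear level in [0, 1] to one of the app's warning
    states. The 0.5/0.8 thresholds are illustrative placeholders, not
    values from the paper.
    """
    if wear_fraction >= critical:
        return "critical wear"
    if wear_fraction >= significant:
        return "significant wear"
    return "ok"
```

Checking the critical threshold first keeps the mapping unambiguous even if the two thresholds are later tuned close together.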