Title: Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation
Authors: Picon, Artzai; Bereciartua-Perez, Arantza; Eguskiza, Itziar; Romero-Rodriguez, Javier; Jimenez-Ruiz, Carlos Javier; Eggers, Till; Klukas, Christian; Navarra-Mestre, Ramon
Publication date: January 2022
Type: journal article
Journal: Artificial Intelligence in Agriculture, vol. 6, pp. 199-210
ISSN: 2589-7217
DOI: 10.1016/j.aiia.2022.09.004 (https://doi.org/10.1016/j.aiia.2022.09.004)
Language: English
Access: open access (info:eu-repo/semantics/openAccess)
Publisher copyright: © 2022 The Authors

Citation: Picon, A., Bereciartua-Perez, A., Eguskiza, I., Romero-Rodriguez, J., Jimenez-Ruiz, C. J., Eggers, T., Klukas, C. & Navarra-Mestre, R. 2022, 'Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation', Artificial Intelligence in Agriculture, vol. 6, pp. 199-210. https://doi.org/10.1016/j.aiia.2022.09.004

Abstract:
Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health, weed presence and phenological state, among others. Traditionally, models based on the normalized difference vegetation index (NDVI), the near-infrared (NIR) channel or RGB values have been good indicators of vegetation presence. However, these methods are not suitable for accurately segmenting damaged vegetation, which precludes their use in downstream phenotyping algorithms. In this paper, we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation. The method consists of a first regression convolutional neural network that estimates a virtual NIR channel from an RGB image. Second, we compute two newly proposed vegetation indices from this estimated virtual NIR channel: the infrared-dark channel subtraction (IDCS) and infrared-dark channel ratio (IDCR) indices. Finally, both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model that segments vegetation regardless of damage or condition. The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage and acquired over 28 days. The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel (F1 = 0.94) and with the proposed IDCR and IDCS vegetation indices (F1 = 0.95) derived from the estimated NIR channel, whereas using only the image or RGB-based indices leads to inferior performance (RGB: F1 = 0.90; NIR: F1 = 0.82; NDVI: F1 = 0.89). The proposed method provides an end-to-end land cover map segmentation method directly from simple RGB images and has been successfully validated in real field conditions.

Keywords: Vegetation indices estimation; Vegetation coverage map; Near infrared estimation; Convolutional neural network; Deep learning
Subject areas: Computer Science (miscellaneous); Engineering (miscellaneous); General Agricultural and Biological Sciences; Computer Science Applications; Artificial Intelligence
Sustainable Development Goals: SDG 2 - Zero Hunger
Scopus: http://www.scopus.com/inward/record.url?scp=85139369521&partnerID=8YFLogxK
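
Illustration: the abstract describes a three-stage pipeline (RGB -> virtual NIR regression, NIR-derived index computation, multi-channel semantic segmentation). The minimal PyTorch sketch below only mirrors that data flow; the network architectures, the exact IDCS/IDCR formulas (here assumed to subtract or divide by a per-pixel RGB dark channel) and the epsilon constant are illustrative assumptions, not the authors' published definitions.

```python
import torch
import torch.nn as nn


class NIRRegressor(nn.Module):
    """Placeholder regression CNN: RGB (3 channels) -> virtual NIR (1 channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.net(rgb)


def dark_channel(rgb):
    # Assumed "dark channel": per-pixel minimum over the R, G, B channels.
    return rgb.min(dim=1, keepdim=True).values


def idcs(nir, rgb):
    # Assumed infrared-dark channel subtraction (IDCS) index.
    return nir - dark_channel(rgb)


def idcr(nir, rgb, eps=1e-6):
    # Assumed infrared-dark channel ratio (IDCR) index; eps avoids division by zero.
    return nir / (dark_channel(rgb) + eps)


def build_segmentation_input(rgb, nir_model):
    """Stack RGB + index channels into the 5-channel segmentation input."""
    with torch.no_grad():
        nir = nir_model(rgb)
    return torch.cat([rgb, idcs(nir, rgb), idcr(nir, rgb)], dim=1)


if __name__ == "__main__":
    rgb_batch = torch.rand(2, 3, 256, 256)  # dummy RGB images in [0, 1]
    x = build_segmentation_input(rgb_batch, NIRRegressor())
    print(x.shape)  # torch.Size([2, 5, 256, 256]) -> feed to a segmentation CNN
```

In this sketch the augmented tensor would then be passed to any semantic segmentation network; the specific architectures, training procedure and index definitions used in the paper are given in the full text, not in this record.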