What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting

dc.contributor.author: Barredo-Arrieta, Alejandro
dc.contributor.author: Lana, Ibai
dc.contributor.author: Del Ser, Javier
dc.contributor.institution: Tecnalia Research & Innovation
dc.contributor.institution: IA
dc.date.accessioned: 2024-07-24T11:45:46Z
dc.date.available: 2024-07-24T11:45:46Z
dc.date.issued: 2019-10
dc.description: Publisher Copyright: © 2019 IEEE.
dc.description.abstract: Traffic flow forecasting is widely regarded as an essential gear in the complex machinery underneath Intelligent Transport Systems (ITS), being a critical component of avant-garde Automated Traffic Management Systems. Research in this area has stimulated vibrant activity, yielding a plethora of new forecasting methods contributed to the community every year. Efforts in this domain are mainly oriented towards the development of prediction models featuring ever-growing levels of performance and/or computational efficiency. Since the gradual swerve towards Artificial Intelligence in the modeling sphere of traffic forecasting, predictive schemes have adopted all the benefits of applied machine learning, but have also incurred some caveats. The adoption of highly complex, black-box models has subtracted comprehensibility from forecasts: even though they perform better, they are more obscure to ITS practitioners, which hinders their practical adoption. In this paper, we propose the adoption of explainable Artificial Intelligence (xAI) tools that are currently being used in other domains, in order to extract further knowledge from black-box traffic forecasting models. In particular, we showcase the utility of xAI to unveil the knowledge extracted by Random Forests and Recurrent Neural Networks when predicting real traffic. The obtained results are insightful and suggest that traffic forecasting models should be analyzed from viewpoints beyond prediction accuracy or any other similar regression score, due to the treatment each algorithm gives to input variables: even with the same nominal score value, some methods can take advantage of inner knowledge that others disregard.
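The abstract describes applying post-hoc xAI tools to black-box traffic forecasting models such as Random Forests. The paper's own code and data are not part of this record; the following is a minimal, illustrative sketch of that kind of analysis, using synthetic traffic-flow data, assumed lag choices, and scikit-learn's permutation feature importance as one example of an xAI tool (not necessarily the one used by the authors).

```python
# Illustrative sketch (not the authors' code): explaining a black-box
# traffic-flow forecaster via permutation feature importance.
# Data, lags and model settings are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic traffic flow: daily periodicity plus noise (5-minute samples,
# so one day = 288 samples).
t = np.arange(4000)
flow = 100 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

# Lagged features: recent samples plus the same time slot one day earlier.
lags = [1, 2, 3, 288]
X = np.column_stack([flow[max(lags) - l:-l] for l in lags])
y = flow[max(lags):]

# Train on the first part of the series, explain on the held-out tail.
split = 3000
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])

# Permutation importance: mean drop in R^2 when each lag is shuffled,
# revealing which input variables the black-box model actually relies on.
result = permutation_importance(model, X[split:], y[split:],
                                n_repeats=10, random_state=0)
for lag, imp in zip(lags, result.importances_mean):
    print(f"lag {lag:>3}: importance = {imp:.3f}")
```

Comparing these importances across models with similar accuracy is the kind of analysis the abstract alludes to: two forecasters with the same nominal score may distribute their reliance over the input lags very differently.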
dc.description.sponsorship: The authors would like to thank the Basque Government for its support through the EMAITEK program.
dc.description.status: Peer reviewed
dc.format.extent: 6
dc.identifier.citation: Barredo-Arrieta, A., Lana, I. & Del Ser, J. 2019, 'What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting', in 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 8916985, Institute of Electrical and Electronics Engineers Inc., pp. 2232-2237, Auckland, New Zealand, 27/10/19. https://doi.org/10.1109/ITSC.2019.8916985
dc.identifier.citation: conference
dc.identifier.doi: 10.1109/ITSC.2019.8916985
dc.identifier.isbn: 9781538670248
dc.identifier.uri: https://hdl.handle.net/11556/1493
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85076798818&partnerID=8YFLogxK
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.relation.ispartof: 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019
dc.relation.ispartofseries: 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019
dc.relation.projectID: Eusko Jaurlaritza
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.subject.keywords: Artificial Intelligence
dc.subject.keywords: Management Science and Operations Research
dc.subject.keywords: Instrumentation
dc.subject.keywords: Transportation
dc.subject.keywords: SDG 11 - Sustainable Cities and Communities
dc.title: What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting
dc.type: conference output