Show simple item record

dc.contributor.author    Remazeilles, Anthony
dc.contributor.author    Dominguez, Alfonso
dc.contributor.author    Barralon, Pierre
dc.contributor.author    Torres-Pardo, Adriana
dc.contributor.author    Pinto, David
dc.contributor.author    Aller, Felix
dc.contributor.author    Mombaur, Katja
dc.contributor.author    Conti, Roberto
dc.contributor.author    Saccares, Lorenzo
dc.contributor.author    Thorsteinsson, Freygardur
dc.contributor.author    Prinsen, Erik
dc.contributor.author    Cantón, Alberto
dc.contributor.author    Castilla, Javier
dc.contributor.author    Sanz-Morère, Clara B.
dc.contributor.author    Tornero, Jesús
dc.contributor.author    Torricelli, Diego
dc.date.accessioned    2022-10-04T14:30:53Z
dc.date.available    2022-10-04T14:30:53Z
dc.date.issued    2022-08
dc.identifier.citation    Remazeilles, Anthony, Alfonso Dominguez, Pierre Barralon, Adriana Torres-Pardo, David Pinto, Felix Aller, Katja Mombaur, et al. “Making Bipedal Robot Experiments Reproducible and Comparable: The Eurobench Software Approach.” Frontiers in Robotics and AI 9 (August 29, 2022). https://doi.org/10.3389/frobt.2022.951663.    en
dc.identifier.uri    http://hdl.handle.net/11556/1416
dc.description.abstract    This study describes the software methodology designed for systematic benchmarking of bipedal systems through the computation of performance indicators from data collected during an experimentation stage. Under the umbrella of the European project Eurobench, we collected approximately 30 protocols with related testbeds and scoring algorithms, aiming at characterizing the performances of humanoids, exoskeletons, and/or prosthesis under different conditions. The main challenge addressed in this study concerns the standardization of the scoring process to permit a systematic benchmark of the experiments. The complexity of this process is mainly due to the lack of consistency in how to store and organize experimental data, how to define the input and output of benchmarking algorithms, and how to implement these algorithms. We propose a simple but efficient methodology for preparing scoring algorithms, to ensure reproducibility and replicability of results. This methodology mainly constrains the interface of the software and enables the engineer to develop his/her metric in his/her favorite language. Continuous integration and deployment tools are then used to verify the replicability of the software and to generate an executable instance independent of the language through dockerization. This article presents this methodology and points at all the metrics and documentation repositories designed with this policy in Eurobench. Applying this approach to other protocols and metrics would ease the reproduction, replication, and comparison of experiments.    en
dc.description.sponsorship    This study is supported by the European Union’s Horizon 2020 research and innovation program under Grant Agreement no 779963, project Eurobench.    en
dc.language.iso    eng    en
dc.publisher    Frontiers Media S.A.    en
dc.rights    Attribution 4.0 International    *
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/    *
dc.title    Making Bipedal Robot Experiments Reproducible and Comparable: The Eurobench Software Approach    en
dc.type    journal article    en
dc.identifier.doi    10.3389/frobt.2022.951663    en
dc.relation.projectID    info:eu-repo/grantAgreement/EC/H2020/779963/EU/EUropean ROBotic framework for bipedal locomotion bENCHmarking/EUROBENCH    en
dc.rights.accessRights    open access    en
dc.subject.keywords    Software    en
dc.subject.keywords    Benchmarking    en
dc.subject.keywords    Replicability    en
dc.subject.keywords    Exoskeleton    en
dc.subject.keywords    Humanoid    en
dc.subject.keywords    Algorithm    en
dc.subject.keywords    Performance indicator    en
dc.identifier.essn    2296-9144    en
dc.journal.title    Frontiers in Robotics and AI    en
dc.page.initial    951663    en
dc.volume.number    9    en
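
Note: the abstract above describes a fixed input/output contract for scoring algorithms, verified through continuous integration and packaged as Docker images so that each metric can be written in its author's preferred language. The sketch below is only a hedged illustration of that idea, not the official Eurobench interface; it assumes a hypothetical convention in which an entry point called run_pi receives an input CSV of sampled signals and an output folder, and writes one YAML file per performance indicator.

#!/usr/bin/env python3
"""Hypothetical scoring-algorithm entry point (illustration only, not the
official Eurobench interface): read an input CSV, compute one toy
performance indicator, and write it to the output folder as YAML."""
import csv
import sys
from pathlib import Path


def compute_range_of_motion(csv_path):
    """Toy performance indicator: per-column range of motion (max - min)."""
    columns = {}
    with csv_path.open(newline="") as f:
        for row in csv.DictReader(f):
            for name, value in row.items():
                columns.setdefault(name, []).append(float(value))
    return {name: max(v) - min(v) for name, v in columns.items()}


def main():
    if len(sys.argv) != 3:
        print("usage: run_pi <input.csv> <output_folder>", file=sys.stderr)
        return 1
    input_csv, output_dir = Path(sys.argv[1]), Path(sys.argv[2])
    output_dir.mkdir(parents=True, exist_ok=True)

    rom = compute_range_of_motion(input_csv)
    # Write a minimal YAML result by hand to stay dependency-free.
    out_file = output_dir / "pi_range_of_motion.yaml"
    with out_file.open("w") as f:
        f.write("type: scalar_map\nvalues:\n")
        for name, value in sorted(rom.items()):
            f.write(f"  {name}: {value:.4f}\n")
    print(f"wrote {out_file}")
    return 0


if __name__ == "__main__":
    sys.exit(main())

Under that assumption, a Dockerfile whose entrypoint runs this script, built and exercised against sample data in continuous integration, would correspond to the language-independent executable instance the abstract refers to.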


