Browsing by Author "Mombaur, Katja"
Now showing 1 - 2 of 2
Item Benchmarking Bipedal Locomotion: A Unified Scheme for Humanoids, Wearable Robots, and Humans (2015-09-10)
Torricelli, Diego; Gonzalez-Vargas, Jose; Veneman, Jan F.; Mombaur, Katja; Tsagarakis, Nikos; del-Ama, Antonio J.; Gil-Agudo, Angel; Moreno, Juan C.; Pons, Jose L.; Tecnalia Research & Innovation

In the field of robotics, there is a growing awareness of the importance of benchmarking [1], [2]. Benchmarking not only allows the assessment and comparison of the performance of different technologies but also defines and supports the standardization and regulation processes during their introduction to the market. Its importance has recently been emphasized by the European Union's adoption of technology readiness levels (TRLs) in the Horizon 2020 information and communication technologies programme as a key guideline for assessing when a technology can move from one TRL to the next. The objective of this article is to define the basis of a benchmarking scheme for the assessment of bipedal locomotion that could be applied and shared across different research communities.

Item Making Bipedal Robot Experiments Reproducible and Comparable: The Eurobench Software Approach (2022-08-29)
Remazeilles, Anthony; Dominguez, Alfonso; Barralon, Pierre; Torres-Pardo, Adriana; Pinto, David; Aller, Felix; Mombaur, Katja; Conti, Roberto; Saccares, Lorenzo; Thorsteinsson, Freygardur; Prinsen, Erik; Cantón, Alberto; Castilla, Javier; Sanz-Morère, Clara B.; Tornero, Jesús; Torricelli, Diego; Tecnalia Research & Innovation; Robótica Médica; Medical Technologies

This study describes the software methodology designed for systematic benchmarking of bipedal systems through the computation of performance indicators from data collected during an experimentation stage.
Under the umbrella of the European project Eurobench, we collected approximately 30 protocols with related testbeds and scoring algorithms, aiming to characterize the performance of humanoids, exoskeletons, and/or prostheses under different conditions. The main challenge addressed in this study is the standardization of the scoring process to permit systematic benchmarking of the experiments. The complexity of this process stems mainly from the lack of consistency in how experimental data are stored and organized, how the inputs and outputs of benchmarking algorithms are defined, and how these algorithms are implemented. We propose a simple but effective methodology for preparing scoring algorithms that ensures the reproducibility and replicability of results. This methodology mainly constrains the interface of the software, allowing engineers to develop their metrics in their preferred language. Continuous integration and deployment tools are then used to verify the replicability of the software and, through dockerization, to generate an executable instance that is independent of the implementation language. This article presents this methodology and points to all the metric and documentation repositories designed with this policy in Eurobench. Applying this approach to other protocols and metrics would ease the reproduction, replication, and comparison of experiments.
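The constrained software interface described in the abstract can be illustrated with a minimal sketch. The file names, CSV column, metric, and output format below are assumptions for illustration, not the actual Eurobench protocol definitions; what the sketch shows is the key idea from the article: a scoring script that takes input data files and an output folder on the command line and writes each performance indicator to a machine-readable result file, so that a language-agnostic pipeline (e.g., a dockerized CI job) can run and collect results uniformly.

```python
"""Hypothetical Eurobench-style performance-indicator (PI) script.

Illustrative sketch only: the CSV layout, column name, metric, and
JSON output are assumed for this example. The real protocols define
their own data formats and indicators.
"""
import csv
import json
import sys
from pathlib import Path


def compute_range_of_motion(angles):
    """Toy performance indicator: range of motion (max - min) of a
    joint-angle time series, in the same unit as the input."""
    return max(angles) - min(angles)


def run_pi(input_csv, output_dir):
    """Standardized entry point: read experiment data, compute the
    indicator, and write one result file into the output folder."""
    # Read one column of joint angles from the experiment data file
    # (column name is an assumption for this sketch).
    with open(input_csv, newline="") as f:
        angles = [float(row["hip_angle_deg"]) for row in csv.DictReader(f)]

    rom = compute_range_of_motion(angles)

    # Write the indicator in a fixed, machine-readable layout so the
    # benchmarking pipeline can collect and compare results across
    # experiments without knowing how the metric was implemented.
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    result = {"type": "scalar", "value": rom, "unit": "deg"}
    (out / "pi_hip_rom.json").write_text(json.dumps(result))
    return result


if __name__ == "__main__":
    # Constrained CLI interface: input data file(s), then output folder.
    run_pi(sys.argv[1], sys.argv[2])
```

Because the only contract is "data files in, result files out", the same script can be wrapped in a container image and invoked identically whether the metric inside is written in Python, Octave, or C++, which is what makes the replication checks in continuous integration feasible.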