Browsing by Keyword "Motion"
Item
Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot (IEEE Xplore, 2017-08-15)
Authors: Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung; Ajoudani, Arash; Artemiadis, Panagiotis; Beckerle, Philipp; Grioli, Giorgio; Lambercy, Olivier; Mombaur, Katja; Novak, Domen; Rauter, Georg; Rodriguez Guerrero, Carlos; Salvietti, Gionata; Amirabdollahian, Farshid; Balasubramanian, Sivakumar; Castellini, Claudio; Di Pino, Giovanni; Guo, Zhao; Hughes, Charmayne; Iida, Fumiya; Lenzi, Tommaso; Ruffaldi, Emanuele; Sergi, Fabrizio; Soh, Gim Song; Caimmi, Marco; Cappello, Leonardo; Carloni, Raffaella; Carlson, Tom; Casadio, Maura; Coscia, Martina; De Santis, Dalia; Forner-Cordero, Arturo; Howard, Matthew; Piovesan, Davide; Siqueira, Adriano; Sup, Frank; Lorenzo, Masia; Catalano, Manuel Giuseppe; Lee, Hyunglae; Menon, Carlo; Raspopovic, Stanisa; Rastgaar, Mo; Ronsse, Renaud; van Asseldonk, Edwin; Vanderborght, Bram; Venkadesan, Madhusudhan; Bianchi, Matteo; Braun, David; Godfrey, Sasha Blue; Mastrogiovanni, Fulvio; McDaid, Andrew; Rossi, Stefano; Zenzeri, Jacopo; Formica, Domenico; Karavas, Nikolaos; Marchal-Crespo, Laura; Reed, Kyle B.; Tagliamonte, Nevio Luigi; Burdet, Etienne; Basteris, Angelo; Campolo, Domenico; Deshpande, Ashish; Dubey, Venketesh; Hussain, Asif; Sanguineti, Vittorio; Unal, Ramazan; Caurin, Glauco Augusto de Paula; Koike, Yasuharu; Mazzoleni, Stefano; Park, Hyung-Soon; Remy, C. David; Saint-Bauzel, Ludovic; Tsagarakis, Nikos; Veneman, Jan; Zhang, Wenlong
Affiliations: Tecnalia Research & Innovation; Medical Technologies

Abstract: In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability.
As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic models of the UHP and on signals measured by common low-cost sensors. To demonstrate the effectiveness of the estimators, several experimental tests were carried out. Force and impedance control of the UHP was first implemented by directly measuring the interaction force with accurate extra sensors, and the robot's performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the estimator-based controller performs similarly to the one using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate the implementation of interactive controllers for the UHP in robot-mediated rehabilitation training.

Item
Self-supervised Blur Detection from Synthetically Blurred Scenes (2019-12)
Authors: Alvarez-Gila, Aitor; Galdran, Adrian; Garrote, Estibaliz; Van de Weijer, Joost
Affiliations: Tecnalia Research & Innovation; VISUAL; Quantum

Abstract: Blur detection aims at segmenting the blurred areas of a given image. Recent deep learning-based methods approach this problem by learning an end-to-end mapping between the blurred input and a binary mask representing the localization of its blurred areas. Nevertheless, the effectiveness of such deep models is limited by the scarcity of datasets annotated for blur segmentation, as blur annotation is labour intensive. In this work, we bypass the need for such annotated datasets for end-to-end learning and instead rely on object proposals and a model of blur generation to produce a dataset of synthetically blurred images.
This allows us to perform self-supervised learning over pairs of generated images and ground-truth blur masks using CNNs, defining a framework that can be employed in purely self-supervised, weakly supervised, or semi-supervised configurations. Interestingly, experimental results of such setups on the largest available blur segmentation datasets show that this approach achieves state-of-the-art results in blur segmentation, even without ever observing a real blurred image.
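The first item's core idea, replacing a force sensor with a model-based estimate, can be illustrated with a minimal sketch. This is not the UHP's actual estimator: it assumes a single 1-DOF link with invented parameters, and recovers the interaction torque as the residual between the commanded motor torque and the torque predicted by a rigid-body dynamic model.

```python
import numpy as np

# Illustrative 1-DOF parameters (NOT from the paper):
M = 0.8    # effective inertia [kg*m^2]
B = 0.05   # viscous friction [N*m*s/rad]
G = 2.0    # gravity torque coefficient [N*m]

def model_torque(q, dq, ddq):
    """Torque predicted by the dynamic model when no one touches the robot."""
    return M * ddq + B * dq + G * np.sin(q)

def estimate_interaction_torque(tau_motor, q, dq, ddq):
    """Residual-based estimate: whatever the model cannot explain is
    attributed to the user's interaction."""
    return tau_motor - model_torque(q, dq, ddq)

# Example: the motor applies 0.5 N*m more than the model predicts for
# this state, so the estimator attributes ~0.5 N*m to the user.
q, dq, ddq = 0.3, 0.1, 0.0
tau_motor = model_torque(q, dq, ddq) + 0.5
tau_int = estimate_interaction_torque(tau_motor, q, dq, ddq)
print(tau_int)  # -> 0.5
```

In practice q, dq, ddq would come from the low-cost position sensors the abstract mentions (with derivatives filtered), and the estimate would feed the impedance controller in place of a force-sensor reading.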
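The second item's synthetic-supervision recipe can likewise be sketched: blur a region of a sharp image and emit the (blurred image, binary blur mask) pair as free training data. This is a simplification of the paper's pipeline, assuming a rectangular region as a stand-in for an object proposal and a box blur as the blur model.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k box blur with edge padding (grayscale image)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def make_blur_pair(img, y0, y1, x0, x1, k=5):
    """Blur a rectangular region of `img`; return the composite image
    and the binary mask marking where the blur was applied."""
    mask = np.zeros(img.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1
    composite = np.where(mask == 1, box_blur(img, k), img.astype(float))
    return composite, mask

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))          # stand-in for a sharp photo
composite, mask = make_blur_pair(sharp, 8, 24, 8, 24)
```

A CNN trained on many such (composite, mask) pairs learns blur segmentation without any human annotation, which is the self-supervised setup the abstract describes.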