dc.contributor.author | Bütepage, Judith | |
dc.contributor.author | Ghadirzadeh, Ali | |
dc.contributor.author | Karadağ, Özge Öztimur | |
dc.contributor.author | Björkman, Mårten | |
dc.contributor.author | Kragic, Danica | |
dc.date.accessioned | 2021-02-19T21:16:17Z | |
dc.date.available | 2021-02-19T21:16:17Z | |
dc.date.issued | 2020 | |
dc.identifier.issn | 2296-9144 | |
dc.identifier.uri | https://doi.org/10.3389/frobt.2020.00047 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12868/360 | |
dc.description | WOS: 000531230100001 | en_US |
dc.description | PubMed: 33501215 | en_US |
dc.description.abstract | Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood, mostly through imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work, we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution is a novel probabilistic latent variable model which predicts not in joint space but in latent space. To test the proposed method, we collect human-human and human-robot interaction data for four interactive tasks: "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks. | en_US |
dc.description.sponsorship | EU through the project socSMCs (H2020-FETPROACT-2014); Swedish Foundation for Strategic Research; EnTimeMent [H2020-FETPROACT-824160]; Knut and Alice Wallenberg Foundation | en_US |
dc.description.sponsorship | This work was supported by the EU through the project socSMCs (H2020-FETPROACT-2014), the Swedish Foundation for Strategic Research, EnTimeMent (H2020-FETPROACT-824160), and the Knut and Alice Wallenberg Foundation. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Frontiers Media SA | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | imitation learning | en_US |
dc.subject | human-robot interaction | en_US |
dc.subject | generative models | en_US |
dc.subject | deep learning | en_US |
dc.subject | sensorimotor coordination | en_US |
dc.subject | variational autoencoders | en_US |
dc.title | Imitating by generating: Deep generative models for imitation of interactive tasks | en_US |
dc.type | article | en_US |
dc.contributor.department | ALKÜ | en_US |
dc.contributor.institutionauthor | 0-to be determined | |
dc.identifier.doi | 10.3389/frobt.2020.00047 | |
dc.identifier.volume | 7 | en_US |
dc.relation.journal | Frontiers in Robotics and AI | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |