
Real-Time Retargeting of Human Poses from Monocular Images and Videos to the NAO Robot

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

Extensive research has investigated human-to-robot motion retargeting, but most existing methods rely on sensors or multiple cameras to detect human poses and movements, while many others are unsuitable for real-time scenarios. This paper presents an integrated solution for real-time human-to-robot pose retargeting that uses only regular monocular images and video as input. We use deep learning models to perform three-dimensional human pose estimation on the monocular input, and then compute a set of joint angles that the robot must adopt to reproduce the detected human pose as accurately as possible. We evaluate our solution on SoftBank’s NAO robot and show that promising approximations and imitations of human motions and poses can be reproduced on the NAO robot, subject to the limitations imposed by the robot’s degrees of freedom, joint constraints, and movement speed.
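The core geometric step described above, mapping estimated 3D keypoints to robot joint angles, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keypoint coordinates are hypothetical placeholders for a pose estimator's output, and the joint-limit clamp reflects the abstract's note that the robot's joint constraints bound how faithfully a pose can be reproduced.

```python
import math

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3D points a-b-c,
    computed from the dot product of the two limb vectors."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp the cosine to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def clamp_to_limits(angle, lo, hi):
    """Restrict a target angle to the robot's joint range."""
    return max(lo, min(hi, angle))

# Hypothetical shoulder/elbow/wrist keypoints from a 3D pose estimator.
shoulder = (0.0, 0.0, 0.0)
elbow = (0.2, -0.1, 0.0)
wrist = (0.35, -0.1, 0.15)

elbow_angle = joint_angle(shoulder, elbow, wrist)
# NAO's elbow-roll joints have a limited range (roughly 1.5 rad of travel),
# so the human angle may need clamping before being sent to the robot.
target = clamp_to_limits(elbow_angle, 0.035, 1.544)
print(round(elbow_angle, 2), round(target, 2))
```

In a full pipeline this calculation would run per frame for every tracked joint, with the resulting angle vector streamed to the robot's motion controller.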

Original language: English
Pages (from-to): 47-56
Number of pages: 10
Journal: Journal of Computing Science and Engineering
Volume: 18
Issue number: 1
DOIs
State: Published - 2024

Keywords

  • Geometry
  • Human pose estimation
  • Humanoid robot
  • Motion retargeting
  • Vectors
