
Master's Project: HCI Project: Motion Capture using multiple Kinect Depth Cameras (Summer Semester 2011)

Lecturer: Prof. Dr. Patrick Baudisch (Human-Computer Interaction)

General Information

  • Semester hours per week: 0
  • ECTS: 9
  • Graded: Yes
  • Enrollment deadline: 21.02.2011
  • Course format: Project
  • Module type: Compulsory elective module

Degree Programs

  • IT-Systems Engineering MA

Description

Using sixteen Kinect depth cameras, create a motion capture system that tracks multiple users in a large tracking volume, such as the atrium of the new HPI main building.

In this project you will write and modify hard real-time computer vision code: stitching depth data, calibrating and registering multiple 3D coordinate systems, and understanding the Kinect system. You should be proficient in C/C++ programming, including a good understanding of manual memory management and efficient programming. In addition, one team member should have a basic sense of electronics.
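To give a rough idea of what registering multiple 3D coordinate systems boils down to, here is a minimal C++ sketch. It assumes each camera's rigid pose relative to a common world frame has already been estimated during calibration; the types and function names are purely illustrative and not part of any Kinect library.

    #include <vector>

    // A 3D point in a camera's local coordinate system (meters).
    struct Point3 { float x, y, z; };

    // Rigid transform (rotation + translation) estimated during extrinsic
    // calibration of one camera against the shared world frame.
    struct RigidTransform {
        float R[3][3];  // rotation matrix
        float t[3];     // translation vector
    };

    // Map a point from one camera's local frame into the common world frame:
    // p_world = R * p_local + t
    Point3 toWorld(const RigidTransform& T, const Point3& p) {
        return {
            T.R[0][0]*p.x + T.R[0][1]*p.y + T.R[0][2]*p.z + T.t[0],
            T.R[1][0]*p.x + T.R[1][1]*p.y + T.R[1][2]*p.z + T.t[1],
            T.R[2][0]*p.x + T.R[2][1]*p.y + T.R[2][2]*p.z + T.t[2],
        };
    }

    // Stitch the per-camera point clouds into one cloud in world coordinates.
    std::vector<Point3> stitch(const std::vector<std::vector<Point3>>& clouds,
                               const std::vector<RigidTransform>& extrinsics) {
        std::vector<Point3> world;
        for (std::size_t cam = 0; cam < clouds.size(); ++cam)
            for (const Point3& p : clouds[cam])
                world.push_back(toWorld(extrinsics[cam], p));
        return world;
    }

    int main() {
        // Toy example: camera 1 is translated 2 m along x relative to the world frame.
        RigidTransform identity = {{{1,0,0},{0,1,0},{0,0,1}}, {0,0,0}};
        RigidTransform shifted  = {{{1,0,0},{0,1,0},{0,0,1}}, {2,0,0}};
        auto world = stitch({{{0,0,1}}, {{0,0,1}}}, {identity, shifted});
        // world now holds (0,0,1) from camera 0 and (2,0,1) from camera 1.
    }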

Background

Traditional optical motion capture systems track users’ poses by observing a set of retro-reflective markers attached to each user (see photo). Multiple cameras observe these markers from different viewpoints, from which the system derives the 3D position for each marker. These 3D positions are then mapped to a predefined human skeleton. Motion capture systems have been used to animate virtual characters in movies, such as Avatar.
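As a small illustration of how a marker's 3D position can be derived from multiple viewpoints, the following sketch applies the midpoint method to two camera rays aimed at the same marker. The ray origins and directions are assumed to be given in a common world frame; all names are illustrative, not taken from any particular mocap system.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 scale(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Estimate a marker's 3D position from two camera rays (origin + direction)
    // as the midpoint of the shortest segment connecting the two rays.
    Vec3 triangulateMidpoint(Vec3 o1, Vec3 d1, Vec3 o2, Vec3 d2) {
        Vec3 w = sub(o1, o2);
        double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
        double d = dot(d1, w),  e = dot(d2, w);
        double denom = a * c - b * b;        // approaches 0 for parallel rays
        double s = (b * e - c * d) / denom;  // parameter along ray 1
        double t = (a * e - b * d) / denom;  // parameter along ray 2
        Vec3 p1 = add(o1, scale(d1, s));     // closest point on ray 1
        Vec3 p2 = add(o2, scale(d2, t));     // closest point on ray 2
        return scale(add(p1, p2), 0.5);      // midpoint = marker estimate
    }

    int main() {
        // Two cameras 2 m apart, both looking at a marker near (1, 0, 3).
        Vec3 p = triangulateMidpoint({0,0,0}, {1,0,3}, {2,0,0}, {-1,0,3});
        std::printf("marker at (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);  // ~ (1, 0, 3)
    }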

Motion capture systems can also be used to build real-time interactive systems, but their price point (traditionally between $10k and $200k) has prevented use outside big-money industries such as Hollywood and the military.

With the release of Microsoft's Kinect for the Xbox 360, depth cameras have become available for $150 apiece and thus have the potential to bring motion capture to a wide audience. Game developers and hackers alike are currently demonstrating how to perform simple motion capture with it, and a lot of code is already available to build on. Andy Wilson and Hrvoje Benko from Microsoft Research recently demonstrated how to combine three depth cameras to cover a slightly larger tracking volume.

Your objective

Your goal is to create a system that uses multiple cameras (we offer up to 16) to track a larger number of users in a large tracking volume, such as the HPI atrium.

One of the key challenges is to prevent the Kinect cameras from interfering with each other, as every camera projects its own pattern (see dot pattern in the photo). While Wilson & Benko avoided interference by pointing cameras in different directions, you will experiment with time multiplexing to capture users from different angles.
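One possible structure for such a time-multiplexing experiment is sketched below. Note that the Kinect does not expose a standard software switch for its IR projector, so setEmitter and captureDepthFrame are hypothetical hooks (e.g., driven by custom shutter or power-switching hardware built by the team); only the round-robin scheduling logic is shown.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Placeholder hooks (NOT real Kinect API calls): in a real system these would
    // drive a hardware shutter / switched supply for the IR projector and grab a
    // depth frame through whichever Kinect driver is in use.
    void setEmitter(std::size_t cameraId, bool on) {
        std::printf("camera %zu emitter %s\n", cameraId, on ? "on" : "off");
    }
    void captureDepthFrame(std::size_t cameraId) {
        std::printf("camera %zu capture\n", cameraId);
    }

    // Round-robin time multiplexing: cameras whose views overlap are placed in
    // different groups, and only one group projects its IR dot pattern per time
    // slot, so overlapping cameras never illuminate the scene simultaneously.
    void runTimeMultiplexed(const std::vector<std::vector<std::size_t>>& groups,
                            std::chrono::milliseconds slot, int rounds) {
        for (int r = 0; r < rounds; ++r) {
            for (const auto& group : groups) {
                for (std::size_t cam : group) setEmitter(cam, true);
                std::this_thread::sleep_for(slot);      // let the pattern settle
                for (std::size_t cam : group) captureDepthFrame(cam);
                for (std::size_t cam : group) setEmitter(cam, false);
            }
        }
    }

    int main() {
        // Example: 4 cameras, grouped so that overlapping views never share a slot.
        runTimeMultiplexed({{0, 2}, {1, 3}}, std::chrono::milliseconds(33), 3);
    }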

To test your system, compare it against the 16-camera OptiTrack motion capture system we use in the lab.
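One simple way to quantify that comparison is the root-mean-square position error against the OptiTrack trajectories. The sketch below assumes both systems' outputs have already been aligned to the same world frame and resampled to common timestamps; it is a hedged illustration, not a prescribed evaluation procedure.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Root-mean-square distance between corresponding samples of the Kinect-based
    // tracker and the OptiTrack reference trajectory.
    double rmsError(const std::vector<Vec3>& kinect, const std::vector<Vec3>& ref) {
        std::size_t n = std::min(kinect.size(), ref.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double dx = kinect[i].x - ref[i].x;
            double dy = kinect[i].y - ref[i].y;
            double dz = kinect[i].z - ref[i].z;
            sum += dx*dx + dy*dy + dz*dz;
        }
        return n ? std::sqrt(sum / n) : 0.0;
    }

    int main() {
        // Toy trajectories (meters), already aligned and resampled.
        std::vector<Vec3> kinect = {{0.00, 0.0, 1.00}, {0.51, 0.0, 1.02}};
        std::vector<Vec3> opti   = {{0.00, 0.0, 1.00}, {0.50, 0.0, 1.00}};
        std::printf("RMS error: %.3f m\n", rmsError(kinect, opti));
    }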

References to get started

  • Watch the "Microsoft LightSpace" video and read the paper: bit.ly/lightspace
  • Watch "OpenNI kinect as3 wrapper skeleton" on YouTube

Contact

Human-Computer Interaction
Prof. Dr. Patrick Baudisch & Christian Holz
