Skeletal Tracking Overview

Skeletal tracking (also called skeleton tracking) is a term you may not have heard before. The technology has been around for some time, and if you have encountered it, you may not have realized that the unique interactive experience in front of you was tracking your skeleton. The Microsoft Kinect was one of the earliest consumer examples of skeletal tracking, using data from human movement to let players interact with games.

How does it work?

Put simply, skeleton tracking uses sensors, most often cameras or depth cameras, to track the motion of a human being. It is similar to the motion capture used in Hollywood special effects, but without the need for a special suit or markers on the person. Skeletal tracking systems usually use depth cameras for the most robust real-time results, but it is also possible to track skeletons at lower frame rates using 2D cameras and open source software such as OpenPose.
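For illustration, here is a minimal sketch of the input side of such a system, using Intel's pyrealsense2 wrapper for librealsense to stream synchronized depth and color frames. The resolutions and the short frame loop are assumptions for the example; the actual joint detection would be performed by a tracking SDK or a model such as OpenPose consuming these frames.

```python
import numpy as np
import pyrealsense2 as rs

# Start synchronized depth + color streams from a connected RealSense camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    for _ in range(30):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        # A skeleton tracker would consume these per-frame arrays.
        color_image = np.asanyarray(color.get_data())
        centre_distance_m = depth.get_distance(320, 240)  # metres at frame centre
finally:
    pipeline.stop()
```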

The cameras differentiate a human from the background and then identify the positions of a number of features or joints, such as shoulders, knees, elbows and hands. Some systems can also track hands in finer detail or recognize specific gestures, though this is not true of all skeletal tracking systems. Once those joints are identified, the software connects them into a humanoid skeleton and tracks their positions in real time. This data can then be used to drive interactive displays, games, VR or AR experiences, or any number of other unique integrations, such as projecting your ‘shadow’ onto the side of a real car.
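In practice, the skeleton is usually just a list of joints plus a list of “bones”, pairs of joints the renderer connects with lines. The sketch below shows one possible representation, drawn with OpenCV; the joint names, ordering, and bone pairs here are hypothetical, since every tracking SDK defines its own layout.

```python
import cv2
import numpy as np

# Hypothetical joint layout; real SDKs define their own joint IDs and order.
JOINT_NAMES = ["head", "neck", "r_shoulder", "r_elbow", "r_hand",
               "l_shoulder", "l_elbow", "l_hand"]
# "Bones" are pairs of joint indices to connect with a line.
BONES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7)]

def draw_skeleton(image, joints_2d):
    """Draw one tracked skeleton; joints_2d maps joint index -> (x, y) pixel,
    with None for joints the tracker could not see this frame."""
    for a, b in BONES:
        if joints_2d[a] is not None and joints_2d[b] is not None:
            cv2.line(image, joints_2d[a], joints_2d[b], (0, 255, 0), 2)
    for point in joints_2d:
        if point is not None:
            cv2.circle(image, point, 4, (0, 0, 255), -1)
    return image

# Example with made-up pixel coordinates for the eight joints above.
canvas = np.zeros((240, 320, 3), dtype=np.uint8)
example = [(160, 40), (160, 70), (130, 75), (115, 110), (110, 145),
           (190, 75), (205, 110), (210, 145)]
draw_skeleton(canvas, example)
```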

Using depth cameras of any kind allows a skeletal tracking system to disambiguate overlapping or occluded objects and limbs, and makes it more robust to varied lighting conditions than a purely 2D camera-based algorithm would be. A number of skeletal tracking solutions today support Intel® RealSense™ depth cameras, including the recently launched cubemos™ Skeleton Tracking SDK.
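One concrete benefit of depth is that a joint detected at a 2D pixel can be lifted into 3D camera space. The sketch below does this with pyrealsense2; the wrist pixel coordinates are made up for the example, and in a real system they would come from the skeleton tracker.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    intrinsics = depth_frame.profile.as_video_stream_profile().intrinsics

    # Suppose a 2D tracker reported a wrist at this pixel (illustrative only).
    u, v = 320, 240
    distance_m = depth_frame.get_distance(u, v)

    # Lift the 2D joint into camera-space 3D coordinates (metres).
    x, y, z = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], distance_m)
    print(f"wrist at ({x:.2f}, {y:.2f}, {z:.2f}) m")
finally:
    pipeline.stop()
```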

What’s the technology behind skeletal tracking?

In many cases, skeletal tracking software is developed using a machine learning approach. Images or depth images are labelled manually to create training data sets in which the positions of the joints and skeleton are marked; that data is then used to train the software so it can recognize the joint positions of multiple people in real time.
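As a rough illustration of that approach, the toy training loop below regresses per-joint heatmaps with PyTorch. The tiny network, the 18-joint count, and the random tensors standing in for a labelled dataset are all assumptions for the example, not the method any particular SDK uses.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 18  # e.g. an OpenPose-style 18-keypoint layout (assumption)

# Tiny fully convolutional network: image in, one heatmap per joint out.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_JOINTS, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for a labelled dataset: RGB crops, and target heatmaps
# that would normally be Gaussians rendered around hand-annotated joint pixels.
images = torch.randn(4, 3, 128, 128)
target_heatmaps = torch.rand(4, NUM_JOINTS, 128, 128)

for step in range(10):
    optimizer.zero_grad()
    predicted = model(images)
    loss = loss_fn(predicted, target_heatmaps)
    loss.backward()
    optimizer.step()
```

At inference time, the peak of each predicted heatmap gives that joint's pixel location, which the tracker then links into a skeleton frame by frame.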

For a more in-depth look at skeletal tracking and the technologies that drive it, please check out this talk from Philip Krejov about body tracking in VR/AR with Intel® RealSense™ depth cameras.

Develop once, upgrade without fear.

One of the most useful features of all current-generation Intel® RealSense™ devices is that developers can swap one device for another without making significant changes to their application. An engineer can develop code once and still take advantage of new devices as they become available. A great example of this is the Skeleton Tracking SDK from cubemos™. Originally developed to support the D400 series cameras, it needed only a single line of code changed, setting the stream aspect ratio to one supported by the camera, for the software to support the L515. By switching to the L515, cubemos could take advantage of the increased range and edge fidelity it offers. As you can see in the video below, the software can pick up a person at a significant distance from the camera and accurately track their movement, as well as the movement of other people within the frame.
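The actual line cubemos changed is not shown here, but with librealsense a device swap of this kind can come down to a single stream-configuration line, as in this hypothetical sketch:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# D400-series depth stream (848x480, ~16:9):
# config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)

# L515 depth stream (1024x768, 4:3) -- the only line that has to change:
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)

pipeline.start(config)
# ...the rest of the application code runs unchanged...
pipeline.stop()
```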

If you have previously developed software for another Intel RealSense depth camera, switching to the L515 should be equally simple, and in most cases minor settings changes will be all that is necessary to support the LiDAR camera.

The cubemos Skeleton Tracking SDK also works well with the D400 series depth cameras, especially in outdoor applications. What would you like to create with support for skeleton tracking?