Intel is always excited to introduce innovative tools and technologies that empower the world's most passionate content creators. In case you're unfamiliar, Intel RealSense cameras use infrared light to compute depth in addition to capturing standard RGB images and video. To assist in the development of applications with this technology, Intel created the RealSense SDK, a library of computer vision algorithms including facial recognition, image segmentation, and 3D scanning.
Seeing the potential use cases for this technology in gaming, we would now like to introduce you to the RealSense Plugin, a collaborative effort among game engineers at Intel to expose the features of the RealSense SDK to the Blueprints Visual Scripting System in UE4.
Check out the plugin source code and a sample project here.
The plugin is architected as a set of Actor Components, each of which encapsulates a distinct set of features from the RealSense SDK. Using these relatively lightweight components, you can add 3D sensing capabilities to nearly any actor in your game, and you can access the same camera data anywhere simply by adding another instance of the same component.
Currently, the plugin features these three RealSense Components:
- Camera Streams Component: Provides access to the raw color and depth video streams from the RealSense camera.
- Scan 3D Component: Supports the scanning of real-world objects and human faces (Pictured above).
- Head Tracking Component (Preview): Supports the detection and tracking of a user’s head position and orientation.
Coming soon, you can expect the following additional components to be added to the plugin:
- Head Tracking Component (Full): Additional functionality for detecting and tracking up to 76 facial landmark points, pulse rate, and facial expressions.
- Background Segmentation Component: Separate the foreground of an image from the background to create “green screen” effects on live video (Pictured above).
- Scene Scanning Component: Generate 3D models of large scenes (approximately 2m x 2m in area) to aid in the creation of user-generated content and augmented reality experiences (Pictured below).
- Hand Tracking Component: Detect and track hand skeletons with 22 feature points, recognize 14 gestures, or track just a single point for responsive cursor-style interactions.
- Speech Recognition Component: Transcribe input speech into text for dictation and detection of keywords for voice-controlled interfaces.
For a more detailed look at the features and architecture of the plugin, as well as some tutorial videos to help you get started, check out the article here.