Kinect Fusion creates a big impression during Microsoft Research TechFest


Anyone remember Kinect Fusion? It started off as a Microsoft research project that could reconstruct objects and environments in 3-D by combining a continuous stream of depth data from the Kinect for Windows sensor. Just recently, during Microsoft Research TechFest in Redmond, it was demonstrated and made a big impression!

“The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly. Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them,” Microsoft researcher Shahram Izadi stated in an official Kinect blog post.

Kinect Fusion pulls depth data from the sensor and constructs a highly detailed 3-D map of objects or environments.

“This has been a wonderful example of collaboration between Microsoft Research and our product group. We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting edge research, even beyond what was shown in the original Kinect Fusion paper,” Microsoft’s Kinect for Windows Senior Program Manager Chris White stated.
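For the curious, the published Kinect Fusion research describes fusing the sensor's noisy per-frame depth readings into a volumetric model using a truncated signed distance function (TSDF). The toy sketch below is our own simplified illustration of that idea, not Microsoft's actual implementation: each voxel keeps a running average of its signed distance to the observed surface, so noise averages out over many frames (the `TRUNC` value and the single-ray setup are hypothetical).

```python
import numpy as np

TRUNC = 0.05  # truncation distance in meters (hypothetical value)

def integrate_frame(tsdf, weights, voxel_depths, frame_depth):
    """Fuse one depth measurement into the running TSDF volume.

    tsdf, weights -- running signed-distance average and observation count
    voxel_depths  -- depth of each voxel along the camera ray
    frame_depth   -- measured depth for the pixel these voxels project to
    """
    # Signed distance from each voxel to the observed surface, truncated.
    sdf = np.clip(frame_depth - voxel_depths, -TRUNC, TRUNC)
    # Weighted running average: old estimate blended with the new sample.
    tsdf = (tsdf * weights + sdf) / (weights + 1)
    weights = weights + 1
    return tsdf, weights

# Toy example: three voxels along one camera ray, true surface at 1.0 m,
# fused over several noisy depth frames.
tsdf = np.zeros(3)
weights = np.zeros(3)
voxel_depths = np.array([0.9, 1.0, 1.1])
for noise in (0.01, -0.02, 0.005):
    tsdf, weights = integrate_frame(tsdf, weights, voxel_depths, 1.0 + noise)

# The reconstructed surface lies where the averaged TSDF crosses zero,
# i.e. near the voxel at the true 1.0 m depth.
print(np.round(tsdf, 3))
```

Running the loop, the sign of the fused TSDF flips between the 0.9 m and 1.0 m voxels, which is how a full system would locate the surface despite the per-frame noise.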

Expect to see Kinect Fusion in a future release of the Kinect for Windows SDK, but until then, you will have to settle for reading about its abilities. Microsoft has yet to reveal exactly when the SDK will be available.