Microsoft provides more details on Seeing AI project highlighted at Build 2016

If you were with us yesterday watching the coverage of the first Build keynote, there’s a good chance you caught the very end of the event, where Microsoft employee Saqib Shaikh explained Seeing AI: the revolutionary project he helped develop, aimed at making the world a better place for the vision-impaired. If you didn’t see the initial presentation of Seeing AI, go ahead and watch it right now – it’s most certainly worth your time, trust us.

If you’re done picking your jaw up off the floor, let’s take a moment to read over Microsoft’s follow-up post, which digs into the Seeing AI app in a bit more detail. The post describes Seeing AI as a “Swiss Army Knife,” using the latest in technology to give people who cannot see a way to independently decipher the world around them. Seeing AI runs on a smartphone paired with Pivothead smart glasses, creating a constant stream of information for the wearer: an AI deciphers the photos the glasses capture and gives audio cues describing the world around the user.
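
To give a rough sense of how a caption-and-speak loop like this fits together, here’s a minimal sketch, not Microsoft’s actual code, that captions a photo with the Cognitive Services Computer Vision “describe” endpoint and reads the result aloud with the pyttsx3 text-to-speech library. The region in the URL, the subscription key, and the file name are all placeholders you’d swap in yourself.

```python
# Minimal caption-and-speak sketch (illustrative only, not the Seeing AI code).
# Assumes a Cognitive Services Computer Vision key and the pyttsx3 TTS library.
import requests
import pyttsx3

SUBSCRIPTION_KEY = "YOUR_KEY_HERE"  # assumption: your own Cognitive Services key
DESCRIBE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"  # assumption: westus region

def describe_image(image_path):
    """Send a photo to the Computer Vision 'describe' endpoint and return its caption."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    with open(image_path, "rb") as image_file:
        response = requests.post(DESCRIBE_URL, headers=headers, data=image_file.read())
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "No description available."

def speak(text):
    """Read the caption out loud - the audio cue a vision-impaired user would hear."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    speak(describe_image("snapshot.jpg"))  # assumption: a photo captured by the glasses
```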

Seeing AI edges into the realm of the miraculous when you find out just how detailed it can be, and how meticulous the team behind it was in creating something user-friendly for the vision-impaired. According to Microsoft’s post, the app has “tools to recognize and accurately describe images.” It doesn’t end at identifying images, however. As you saw in the video above, the glasses can take a snapshot of your environment and read out the important things, like the people around you and their emotions, or the items on a restaurant menu. The magic behind Seeing AI is that its designers are teaching it to put things in context and be even more accurate in its readings.
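
The emotion read-out can be approximated in much the same way. The sketch below leans on the Cognitive Services Face API, asking the detect call for the emotion attribute and naming the strongest emotion on each face; again, the key, region, and file name are stand-ins, and this is only an illustration of the idea, not how Seeing AI itself is built.

```python
# Illustrative emotion read-out (an assumption about how such a feature could be built,
# not Microsoft's Seeing AI code). Uses the Cognitive Services Face API detect call.
import requests

SUBSCRIPTION_KEY = "YOUR_KEY_HERE"  # assumption: your own Face API key
DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"  # assumption: westus region

def describe_emotions(image_path):
    """Return a short sentence naming the strongest emotion on each detected face."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    params = {"returnFaceAttributes": "emotion"}
    with open(image_path, "rb") as image_file:
        response = requests.post(DETECT_URL, headers=headers, params=params, data=image_file.read())
    response.raise_for_status()
    sentences = []
    for face in response.json():
        emotions = face["faceAttributes"]["emotion"]
        strongest = max(emotions, key=emotions.get)  # e.g. "happiness", "surprise"
        sentences.append(f"A person who appears to feel {strongest}.")
    return " ".join(sentences) or "No faces detected."

print(describe_emotions("snapshot.jpg"))  # assumption: photo from the glasses
```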

That comes down to Seeing AI breaking free of the limitations that have held back previous AI-like devices. According to the article, instead of just describing an image as “a man and a woman sitting next to each other,” the team is working toward a point where Seeing AI would instead say that “Barack Obama and Hillary Clinton are posing for a picture.” It’s this level of detail that makes Seeing AI not just a small complement to the lives of the vision-impaired, but a genuinely incredible asset. Once Seeing AI is released to the public and adopted by the vision-impaired community, it could mark an entirely new era of empowerment for these individuals.
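
That jump from a generic caption to named people is something you can approximate with the Computer Vision API’s domain-specific “celebrities” model. The sketch below shows the idea, with the usual caveat that the endpoint region, key, and file name are placeholders and this is not the Seeing AI implementation.

```python
# Sketch of name-level recognition using the Computer Vision 'celebrities' domain model.
# Endpoint region and key are placeholders; this is an illustration, not the Seeing AI code.
import requests

SUBSCRIPTION_KEY = "YOUR_KEY_HERE"  # assumption: your own Cognitive Services key
CELEBRITIES_URL = (
    "https://westus.api.cognitive.microsoft.com/vision/v1.0/models/celebrities/analyze"
)  # assumption: westus region

def recognize_people(image_path):
    """Return recognized names, falling back to a generic line when nobody is matched."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    with open(image_path, "rb") as image_file:
        response = requests.post(CELEBRITIES_URL, headers=headers, data=image_file.read())
    response.raise_for_status()
    celebrities = response.json()["result"]["celebrities"]
    if not celebrities:
        return "People I don't recognize."
    return " and ".join(c["name"] for c in celebrities) + " are in the picture."

print(recognize_people("snapshot.jpg"))  # assumption: photo from the glasses
```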

There are already many tools out there to assist the vision-impaired, and passionate people will keep using their talents to help these individuals in their daily lives. With Seeing AI, though, it looks like the bar is about to be raised for anyone who builds these products in the future.