If it wasn’t clear before, Microsoft is using its developer-focused conference to cement the notion that it sees its future wrapped in machine-learning, artificially intelligent, ambient-computing goodness, which means its developers should too, if they haven’t already begun preparing.
Today, several Microsoft executives took the stage at Build to unveil a swath of intelligent solutions that’ll help bolster the company’s already burgeoning cloud offerings for commercial use. From search to speech, vision to deep neural network processing, Microsoft’s Azure team is building an interconnected web of intelligent layers so developers can focus more on their application experiences and less on processing buildout.
For starters, developers can now utilize Project Kinect for Azure, which puts a dense set of sensors directly in the hands of creators, including Microsoft’s next-generation depth camera and onboard compute designed for AI on the edge. While the Kinect has seen little public success beyond its initial Xbox-paired unveiling in 2010, niche use cases in industries unrelated to gaming, such as medical, military, and aerospace, have demanded that developers and engineers continue to fine-tune the camera’s depth tracking.
Now, current and new users will be able to combine the rich Time of Flight sensor with Azure’s artificial intelligence platform to build dramatically more precise solutions and offer customers a new level of insight into their operations.
Another new item on the docket was the Speech Devices SDK, which is intended to deliver next-level audio processing of multi-channel sources. While some might see this as a boon for the podcast industry, Microsoft’s more commercial intentions include bolstering businesses that rely on drive-thru ordering, smart speakers, and cars, as well as two-way communication wearables.
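Microsoft didn’t walk through code on stage, but for a rough sense of the developer surface, here is a minimal speech-to-text sketch using the general-purpose Cognitive Services Speech SDK for Python rather than the device-specific SDK (which layers multi-channel, far-field audio processing on top). The subscription key, region, and audio file below are placeholders.

```python
# Illustrative sketch only: key, region, and audio file are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="westus")
audio_config = speechsdk.audio.AudioConfig(filename="drive_thru_order.wav")

# Recognize a single utterance from the audio file.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech recognized:", result.reason)
```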
Microsoft is also making it easier for businesses that already use Azure, or that are contemplating it, to scale their throughput and expand their storage with new Azure Cosmos DB updates scheduled for later this year. As part of the update package, customers will be able to take advantage of multi-master replication at global scale and the general availability of VNet support for even greater security.
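To give a rough idea of what that looks like from the developer side, here is a minimal sketch using the azure-cosmos Python SDK. The account endpoint, key, region names, and container are placeholders, and the multi-region-write option is an assumption about how the SDK exposes the new multi-master capability.

```python
# Illustrative only: endpoint, key, and regions are placeholders,
# and the multi-region-write keyword is an assumption about the SDK surface.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://my-account.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["West US 2", "North Europe"],  # read from the nearest region
    multiple_write_locations=True,                      # opt in to multi-master writes (assumed kwarg)
)

# Create (or reuse) a database and a partitioned container, then write a document.
db = client.create_database_if_not_exists("telemetry")
container = db.create_container_if_not_exists(
    id="events", partition_key=PartitionKey(path="/deviceId")
)
container.upsert_item({"id": "evt-001", "deviceId": "kiosk-42", "status": "ok"})
```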
Azure Cognitive Services
- Azure Cognitive Services are also set to get updates, including a Speech Service that improves speech recognition, text-to-speech support, language translation, and customized voice models. Azure’s Custom Vision APIs are also in line for some much-needed improvements, helping customers add additional intelligence layers to their applications.
- A preview of Azure Search combined with Cognitive Services will be going out to select developers in the near future. The preview pairs AI with indexing technology to put lightning-fast search within reach of businesses looking to quickly access data, whether it be images, text, or insights (a rough query sketch follows this list).
- Along with speech, search, and vision improvements to Azure, Microsoft is also making tweaks to the back-and-forth nature of its processing, with updates to the Bot Framework and Cognitive Services for implementing advanced conversational AI experiences. The update will also allow businesses to add full personality and voice customization to align a bot with their brand identity.
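As a sketch of what querying an AI-enriched index might look like, here is a plain REST call against the Azure Search query API using Python’s requests library. The service name, index name, API version, and key are placeholders, not details Microsoft shared at Build; only the general shape of the request is real.

```python
# Hypothetical service/index names and key; api-version shown is one published version.
import requests

SERVICE = "my-search-service"
INDEX = "support-documents"
API_KEY = "<query-key>"

url = f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search?api-version=2017-11-11"
payload = {"search": "warranty return policy", "top": 5}  # free-text query, top 5 hits

resp = requests.post(url, json=payload, headers={"api-key": API_KEY})
resp.raise_for_status()

# Print the relevance score and id of each matching document.
for doc in resp.json().get("value", []):
    print(doc.get("@search.score"), doc.get("id"))
```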
Arguably, the bot revolution of 2015 has yet to fully take form, due in part to the rudimentary back-and-forth nature of the communications. Adhering to a strict script-based conversation has held many bot-powered solutions back, as customers tend to veer off script pretty wildly when searching or requesting services. Fortunately, it looks like businesses are getting wise to the problem and are layering richer dialog prompts into their experiences.
Lastly, Microsoft also mentioned it’s ready to preview Project Brainwave, a new way the company has infused its Azure platform with deep neural network processing. Customers will now have access to what Microsoft is calling “the fastest cloud to run real-time AI,” and it’ll be integrated with Azure’s already impressive Machine Learning platform.
Project Brainwave will also support Intel FPGA hardware as well as ResNet50-based neural networks for customers looking to automate more complex tasks. There wasn’t an official timeframe laid out, but Microsoft mentioned that Brainwave is in development for Azure Stack and Azure Data Box for those looking to take the project for a spin on those platforms.
For the past half-decade, Microsoft has made a public showing of its continued development of Azure, and at Build this year, developers are getting access to the APIs, platforms, and technologies necessary for next-generation application development that will transcend smartphones and PCs into a new world of powerful ambient computing.