Microsoft publishes guidelines for developers building digital assistants

James Walker



Microsoft has released a set of guidelines to help developers build “responsible” digital assistants. Adoption of conversational interfaces is growing amid rapid improvements in the underlying technologies. However, attention to the accompanying ethical concerns is progressing more slowly.

Microsoft said the guidelines are intended to remind developers that their products may be used in ways they cannot always foresee. Although ethical design has always been relevant to the technology industry, it is particularly important in the case of conversational interfaces.

Microsoft used the example of a pizza-ordering bot. The service should be designed to facilitate ordering a pizza, without being sidetracked by off-topic input. If the user tries to discuss “sensitive topics,” such as race, gender, religion or politics, the service should decline to engage, much as a human pizza server would. Otherwise, any biases in the bot’s algorithms could lead it to learn unwanted behaviours from the customer.
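To make the idea concrete, here is a minimal sketch of how a bot might deflect sensitive topics before they reach its ordering logic. This is purely illustrative and not Microsoft's implementation: the keyword matching is a naive stand-in for a real topic classifier, and `handle_order` is a hypothetical placeholder for the actual ordering flow.

```python
# Illustrative sketch only: a guard that deflects "sensitive topic" messages
# before they reach a pizza-ordering bot's main logic. A production system
# would use a trained classifier rather than keyword matching.

SENSITIVE_KEYWORDS = {"race", "gender", "religion", "politics"}

DEFLECTION = "I'm just here to help with your pizza order. What would you like?"


def respond(message: str) -> str:
    """Return a deflection for sensitive topics, else hand off to ordering logic."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & SENSITIVE_KEYWORDS:
        return DEFLECTION
    return handle_order(message)


def handle_order(message: str) -> str:
    # Hypothetical placeholder for the real ordering flow.
    return f"Got it, adding that to your order: {message!r}"
```

The point of the guard is the one Microsoft makes: the bot never engages with the off-topic input, so it has no opportunity to learn from it.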

The rest of the guidelines continue along similar themes. Overall, Microsoft stresses that bots can be good for society, provided developers remain mindful of the potential issues. Organisations must be upfront about their use of bots, and any AI-powered services must remain free of bias and respectful of cultural norms.

“In general, the guidelines emphasize the development of conversational AI that is responsible and trustworthy from the very beginning of the design process,” said Lili Cheng, Microsoft Corporate Vice President of Conversational AI. “They encourage companies and organizations to stop and think about how their bot will be used and take the steps necessary to prevent abuse. At the end of the day, the guidelines are all about trust, because if people don’t trust the technology, they aren’t going to use it.”

You can find the full list of guidelines over on Microsoft Research’s website. Although they’re not hard rules, they do encapsulate the basic ethical standards to consider when developing bots. Microsoft published the guidelines this week alongside a new set of AI technologies, including a quick-start toolkit for developers creating digital assistants.