Microsoft has launched Azure AI Content Safety, which uses text and image detection to identify and filter harmful content in applications and services.
South Australia’s Department for Education has introduced an AI-powered chatbot called EdChat in classrooms to help students with research and educators with lesson planning. A significant concern, however, was keeping students safe from the harmful or inappropriate content a large language model can produce.
The department used Microsoft’s Azure AI Content Safety to establish guardrails for EdChat. Its safety features
– block inappropriate input queries
– filter harmful responses
This allows teachers to focus on the educational benefits of the technology. According to Simon Chapman, the department’s director of digital architecture, Azure AI Content Safety was essential to the project’s success.
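The department has not published EdChat’s internals, but the two guardrails describe a familiar pattern: a safety check wraps both sides of the chat loop. Below is a minimal sketch of that pattern, with toy `is_safe` and `ask_llm` stand-ins rather than the real services:

```python
def is_safe(text: str) -> bool:
    """Stand-in for a real content-safety call (e.g., Azure AI Content Safety)."""
    return "unsafe" not in text.lower()  # toy logic, for illustration only


def ask_llm(prompt: str) -> str:
    """Stand-in for the underlying chat model behind a bot like EdChat."""
    return f"Here are some research pointers for: {prompt}"


def chat_turn(student_query: str) -> str:
    """One guarded chat turn: screen the input, call the model, screen the output."""
    # Guardrail 1: block inappropriate input queries before they reach the model.
    if not is_safe(student_query):
        return "That question can't be answered here."

    answer = ask_llm(student_query)

    # Guardrail 2: filter harmful responses before they reach the student.
    if not is_safe(answer):
        return "The generated answer was withheld by the safety filter."

    return answer


print(chat_turn("What caused the First World War?"))
```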
Azure AI Content Safety is now available as a standalone product. Using advanced language and vision models, the platform detects hateful, violent, sexual, and self-harm content.
When potentially harmful content is detected, it is marked with an estimated severity score, allowing organizations to tailor the service to block or flag content according to their specific policies.
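As a sketch of what that looks like in code, the snippet below uses the `azure-ai-contentsafety` Python SDK to analyze a piece of text and apply severity thresholds. The endpoint, key, and threshold values are placeholders, and the response shape shown matches the GA SDK; earlier preview versions returned per-category result fields instead:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key -- substitute your own resource values.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Some user-submitted text"))

# Each detected category carries an estimated severity score; an application
# can block above one threshold and merely flag above a lower one.
BLOCK_AT, FLAG_AT = 4, 2  # example thresholds, not product defaults
for result in response.categories_analysis:
    if result.severity >= BLOCK_AT:
        print(f"BLOCK: {result.category} (severity {result.severity})")
    elif result.severity >= FLAG_AT:
        print(f"FLAG:  {result.category} (severity {result.severity})")
```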
Azure AI Content Safety is customizable for a variety of use cases, letting organizations tune policies to their own content standards. This flexibility matters because different industries and contexts call for different thresholds.
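One way to express such a policy is a per-category severity threshold, so that a classroom deployment can be stricter than, say, a newsroom moderation queue. A hedged sketch follows; the category names mirror the four themes above, and the threshold values are invented for illustration:

```python
# Illustrative per-category block thresholds (higher = more permissive).
CLASSROOM_POLICY = {"Hate": 0, "Sexual": 0, "Violence": 2, "SelfHarm": 0}
NEWSROOM_POLICY = {"Hate": 2, "Sexual": 2, "Violence": 4, "SelfHarm": 2}


def should_block(category: str, severity: int, policy: dict[str, int]) -> bool:
    """Block content whose severity exceeds the policy threshold for its category."""
    return severity > policy.get(category, 0)


# A classroom deployment blocks mild violence that a newsroom would allow.
print(should_block("Violence", 3, CLASSROOM_POLICY))  # True
print(should_block("Violence", 3, NEWSROOM_POLICY))   # False
```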
Microsoft continues to improve Azure AI Content Safety through research and customer feedback, anticipating the content-safety challenges that will come as generative AI becomes more widespread. This includes strengthening detection of multimodal content, such as the combination of image and text in memes.