Microsoft has added yet another AI-powered weapon to its arsenal, today announcing Microsoft Security Copilot in private preview. It is designed for security professionals to “see what is happening in their environment, learn from existing intelligence, correlate threat activity, and make more informed, efficient decisions at machine speed.”
Running on Azure’s hyperscale architecture, Security Copilot combines OpenAI’s GPT-4 large language model with a security-specific model built by Microsoft to deliver “an enterprise-grade security and privacy-compliant experience:”
When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximize the value of the latest large language model capabilities. And this is unique to a security use-case. Our cyber-trained model adds a learning system to create and tune new skills. Security Copilot then can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture.
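The flow described above — an analyst’s prompt routed through a security-specific layer that deploys “skills” and queries before the large language model responds — can be sketched in outline. Everything here (the `SKILLS` registry, `select_skills`, the stubbed LLM call) is a hypothetical illustration of that idea, not Microsoft’s actual Security Copilot API:

```python
# Hypothetical sketch of the prompt flow described above: a security-specific
# layer picks relevant "skills" (queries/tools) before handing context to the
# LLM. None of these names come from Microsoft's actual product.

# Illustrative skill registry: keyword trigger -> query the skill would run.
SKILLS = {
    "incident": "summarize_incident",
    "malware": "lookup_threat_intel",
    "sign-in": "query_auth_logs",
}

def select_skills(prompt: str) -> list[str]:
    """Security-specific step: choose which skills apply to the prompt."""
    return [skill for trigger, skill in SKILLS.items() if trigger in prompt.lower()]

def answer(prompt: str) -> str:
    """Combine selected skill output with the prompt for the (stubbed) LLM."""
    skills = select_skills(prompt)
    context = "; ".join(skills) if skills else "no specific skill matched"
    # In the real system, this assembled context would be passed to GPT-4.
    return f"[LLM response using: {context}]"

print(answer("Summarize this incident involving suspicious sign-in activity"))
```

The point of the sketch is the ordering: the domain model narrows the problem first, so the general-purpose LLM works from security-relevant context rather than the raw prompt alone.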
Security Copilot doesn’t always get everything right; AI-generated content can contain mistakes. But Security Copilot is a closed-loop learning system: it continually learns from users, who can give explicit feedback through a feature built directly into the tool. As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers.
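The closed-loop idea — explicit user feedback recorded against each response and aggregated to adjust future answers — can be illustrated with a minimal sketch. The class and method names below are hypothetical, invented only to show the shape of such a loop:

```python
# Minimal sketch of a closed-loop feedback store: explicit analyst feedback is
# recorded per response and aggregated into a signal that could be used to
# tune future answers. Hypothetical illustration only.
from collections import defaultdict

class FeedbackLoop:
    def __init__(self) -> None:
        self.ratings: dict[str, list[int]] = defaultdict(list)  # response_id -> +1/-1 votes

    def record(self, response_id: str, helpful: bool) -> None:
        """The 'explicit feedback' step: the analyst rates a response."""
        self.ratings[response_id].append(1 if helpful else -1)

    def score(self, response_id: str) -> float:
        """Aggregate signal a learning system could use to adjust responses."""
        votes = self.ratings[response_id]
        return sum(votes) / len(votes) if votes else 0.0

loop = FeedbackLoop()
loop.record("resp-1", helpful=True)
loop.record("resp-1", helpful=False)
loop.record("resp-1", helpful=True)
print(loop.score("resp-1"))  # prints 0.3333333333333333
```

In a real system the aggregated score would feed back into model tuning rather than sit in memory, but the loop structure — respond, collect explicit feedback, adjust — is the same.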
Security Copilot will integrate with Microsoft’s security products, and over time will expand to include “a growing ecosystem” of third-party products.
You can learn more about Microsoft Security Copilot and watch a demo at the Introducing Microsoft Security Copilot website.