Microsoft's large-scale rollout of GPT-4 continues. This week: cybersecurity.
According to Microsoft, Security Copilot combines GPT-4 with a Microsoft-developed security-specific model that offers a "growing set of security-specific capabilities." The model draws on Microsoft's own security data and "65 trillion daily signals," the company says. Security Copilot was announced at the Microsoft Secure event.
Security Copilot should be able to develop new capabilities
According to Microsoft, the security model has a built-in learning system that allows it to create and learn new capabilities, but enterprise data "is not used to train the foundation AI models."
"Security Copilot then can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture," Microsoft writes.
The models won't always get it right, and AI-generated content could contain errors, Microsoft writes, but the system is constantly learning from user input. It could defend against cyberattacks in "minutes instead of hours or days," and can automatically generate visual reports or PowerPoint presentations that describe a vulnerability or document attacks.
The fact that Microsoft points out the system's flaws in the announcement, yet releases it anyway, underscores the company's risk-tolerant strategy of integrating large language models into as many products as possible, as quickly as possible, regardless of their shortcomings. At least the feedback process for model-generated errors is more extensive than the thumbs-up/thumbs-down vote in Bing Chat.
For now, Security Copilot integrates with the Azure cloud and with Microsoft's own end-to-end security products. In the future, it will work with a growing portfolio of third-party applications, Microsoft said.