AI Agents Collaborate on Microsoft Security Copilot

Microsoft Security Copilot: Enhancing Security Through AI
Microsoft is introducing significant advancements to its Security Copilot, a tool designed to streamline security tasks through artificial intelligence (AI). The update aims to deepen integration with Microsoft’s security products and automate responses to security threats.
The Evolution of Security Copilot
Originally launched in 2023, Security Copilot promised to automate the triage of security incidents within Microsoft Defender XDR. At a public event on March 20, Vasu Jakkal, a corporate vice president at Microsoft, announced an expanded release of Security Copilot: the tool now integrates 11 specialized AI agents that work alongside products including Defender, Purview, Entra, and Intune.
What Are AI Agents?
Jakkal highlighted the emergence of "agentic AI" in her discussions, elaborating on the proliferation of AI agents across the tech landscape. While she recognized the excitement surrounding these agents, she drew attention to the gaps in understanding their roles and responsibilities, particularly when they fail or require significant computational resources.
New AI Agents Introduced
The latest iteration of Security Copilot features agents that serve specific purposes. Six of these agents are developed by Microsoft, while the other five come from recognized security partners.
Microsoft-Made Agents
- Phishing Triage Agent (Defender) – This agent helps sort through phishing reports effectively.
- Alert Triage Agents (Purview) – Two agents that manage alerts related to data loss prevention and insider risk.
- Conditional Access Optimization Agent (Entra) – This agent monitors identity and access policies for gaps and recommends fixes.
- Vulnerability Remediation Agent (Intune) – It identifies and prioritizes vulnerabilities that need addressing.
- Threat Intelligence Briefing Agent – This agent curates and summarizes relevant threat intelligence.
Third-Party Contributions
Security partners of Microsoft have also contributed valuable agents:
- Privacy Breach Response Agent (OneTrust) – It provides guidance on handling data breaches.
- Network Supervisor Agent (Aviatrix) – Assists in root cause analysis for network problems.
- SecOps Tooling Agent (BlueVoyant) – Assesses the controls in security operations centers.
- Alert Triage Agent (Tanium) – Helps analysts prioritize alerts more effectively.
- Task Optimizer Agent (Fletch) – Forecasts and ranks threat alerts based on urgency.
Beyond the agents themselves, Microsoft Purview also helps data security teams tackle data exposure risks, drawing on the same generative AI capabilities Security Copilot uses to summarize large volumes of information, such as phishing alerts.
Enhancing Security Team Effectiveness
The introduction of these agents aims to improve the efficiency of security teams. Statistics shared by Jakkal indicate that organizations using Security Copilot have seen a 30% reduction in the time needed to respond to security incidents. Moreover, even novice security professionals have reported being 26% faster and 35% more accurate when using the tool, while experienced professionals saw a 22% speed increase and 7% accuracy gain.
Jakkal emphasized that the security landscape is evolving rapidly, with attacks rising from 4,000 per second last year to 7,000 now, which equates to over 600 million attacks each day.
Ensuring Security and Reducing Errors
Concerns around AI errors, often referred to as "hallucinations," were addressed by Tori Westerhoff from Microsoft’s AI safety team. She reassured attendees that the AI models incorporate safeguards, and that extensive testing aims to mitigate potential risks before release.
Nick Goodman, a product architect, provided insight into how the Phishing Triage Agent alleviates the burden of high false-positive rates in phishing reports. He noted that, while agents can help generate information and filter content, human oversight remains critical because AI systems lack the contextual understanding of a human analyst.
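The human-in-the-loop pattern Goodman describes can be sketched in a few lines: an agent pre-sorts reported emails by a model score, but every queue still ends in front of a person, and nothing is silently discarded. All names and thresholds below are hypothetical illustrations, not Microsoft's implementation.

```python
from dataclasses import dataclass

@dataclass
class PhishReport:
    sender: str
    subject: str
    score: float  # model-assigned phishing likelihood, 0.0 to 1.0

def triage(reports, auto_dismiss_below=0.1, escalate_above=0.9):
    """Partition reports into three queues. Likely false positives are
    set aside but kept for audit; everything else reaches an analyst."""
    dismissed, review, escalated = [], [], []
    for r in reports:
        if r.score < auto_dismiss_below:
            dismissed.append(r)   # likely false positive; audit log only
        elif r.score > escalate_above:
            escalated.append(r)   # high-confidence phish; alert an analyst
        else:
            review.append(r)      # ambiguous; human judgment required
    return dismissed, review, escalated
```

The design choice worth noting is that the agent only routes work: even the "dismissed" queue is retained for human audit rather than deleted, which is how filtering can reduce false-positive load without removing oversight.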
Regulatory Compliance Assistance
OneTrust’s Privacy Breach Response Agent exemplifies how generative AI can help organizations navigate complex privacy regulations after a data breach. The agent generates a prioritized list of recommendations based on a company’s obligations but does not send the notifications itself, ensuring that human judgment is retained.
Microsoft’s ongoing development and refinement of AI agents showcase a commitment to integrating AI technologies into security protocols, while reinforcing the importance of human supervision and decision-making in managing cybersecurity threats efficiently.