A.5.23-AI Artificial Intelligence and Autonomous Agents
What is this control?
ISO 27001 control A.5.23-AI Artificial Intelligence and Autonomous Agents manages information security, privacy, and operational risks throughout the AI lifecycle. The control addresses risks specific to generative AI, including hallucination and data leakage; non-human identities capable of autonomous decision-making; and legal liability arising from automated decisions. It distinguishes sanctioned enterprise AI, such as Microsoft 365 Copilot, from shadow AI, and classifies all AI strictly as decision-support systems whose outputs require human verification.
How to implement in Microsoft 365
Implement A.5.23-AI with agent governance: every AI agent must be onboarded through the supplier process, registered in the Agent Registry with a verified Entra Agent ID and a designated human sponsor, and restricted to approved connectors that respect user context and ACLs. Each agent operates under its own dedicated Entra Agent ID, with Conditional Access gating agent access through risk-based checks. Configure outputs derived from classified documents to inherit the source document's sensitivity label.
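The onboarding gate described above can be sketched as a simple registry record plus a sanction check. The record fields, connector names, and the AgentRecord type are illustrative assumptions, not a Microsoft schema or API:

```python
from dataclasses import dataclass, field

# Hypothetical Agent Registry record; field names are assumptions for
# illustration, not an actual Entra or Microsoft 365 data model.
@dataclass
class AgentRecord:
    name: str
    entra_agent_id: str                      # verified Entra Agent ID ("" = unverified)
    human_sponsor: str                       # designated accountable person
    connectors: list[str] = field(default_factory=list)

# Assumed allow-list of approved connectors that respect user context and ACLs.
APPROVED_CONNECTORS = {"sharepoint", "graph-mail"}

def is_sanctioned(agent: AgentRecord) -> bool:
    """Gate check: verified identity, a human sponsor, and only approved connectors."""
    return (
        bool(agent.entra_agent_id)
        and bool(agent.human_sponsor)
        and all(c in APPROVED_CONNECTORS for c in agent.connectors)
    )
```

An agent failing any one of the three conditions would be treated as shadow AI and refused onboarding.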
Block data labelled Highly Confidential from reaching third-party agents, and use Endpoint DLP to block clipboard paste of sensitive information into unsanctioned AI tools. Require explicit user confirmation before any agent performs a Write action.
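A minimal sketch of the blocking and confirmation logic described above, assuming hypothetical label strings and a placeholder blocked-domain list; real enforcement happens in Purview Endpoint DLP policy, not in application code:

```python
# Hypothetical policy logic mirroring the DLP rules above; the label string
# and domain list are assumptions for illustration only.
HIGHLY_CONFIDENTIAL = "Highly Confidential"
UNSANCTIONED_AI_DOMAINS = {"chat.example-ai.com"}  # placeholder blocked domains

def allow_transfer(label: str, destination: str, third_party_agent: bool) -> bool:
    """Deny Highly Confidential data to third-party agents or unsanctioned AI domains."""
    if label == HIGHLY_CONFIDENTIAL and third_party_agent:
        return False
    if destination in UNSANCTIONED_AI_DOMAINS:
        return False
    return True

def execute_action(action: str, user_confirmed: bool) -> bool:
    """Write actions proceed only with explicit user confirmation."""
    return action != "write" or user_confirmed
```

The same two checks apply regardless of channel: label-based egress control for agent connectors, and domain-based blocking for clipboard paste.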
What an auditor looks for
Auditors will verify that the Agent Registry shows 100% of agents as sanctioned, with zero shadow agents and an Entra Agent ID assigned to each. They will check that Conditional Access policies are configured for workload identities and enforce risk-based access, and that Copilot sensitivity-label inheritance is enabled in Purview settings.
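The registry evidence could be summarised with a coverage check like the sketch below; the input shapes (sets of agent identifiers from tenant discovery and from the registry export) are assumptions about how the data might be extracted:

```python
def registry_coverage(discovered: set[str], registered: set[str]) -> dict:
    """Compare agents discovered in the tenant against the Agent Registry.

    Shadow agents are discovered but unregistered. The auditor's expectation
    is 100% coverage with an empty shadow list.
    """
    shadow = discovered - registered
    pct = 100.0 if not discovered else 100.0 * len(discovered - shadow) / len(discovered)
    return {"coverage_pct": pct, "shadow_agents": sorted(shadow)}
```

Any non-empty `shadow_agents` list is an audit finding, independent of the coverage percentage.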
They will review the Endpoint DLP configuration to confirm that unsanctioned AI domains are listed as blocked services, verify that Viva Insights privacy settings enforce a minimum group size of 10 or more users, and check that the GitHub organisation's Copilot settings have public code matching set to Blocked.
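The minimum-group-size privacy setting can be illustrated as an aggregation guard. The threshold of 10 comes from the text above; the function itself is a hypothetical sketch of the suppression rule, not Viva Insights internals:

```python
MIN_GROUP_SIZE = 10  # minimum group size required by the privacy setting

def reportable_groups(group_sizes: dict[str, int]) -> dict[str, bool]:
    """Return which groups are large enough to appear in aggregate reports.

    Groups below the minimum are suppressed so that metrics cannot be
    traced back to identifiable individuals.
    """
    return {group: size >= MIN_GROUP_SIZE for group, size in group_sizes.items()}
```

An auditor would expect every reported aggregate to correspond to a group that passes this check.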