In today's rapidly evolving business landscape, organizations continually search for strategies to improve productivity and strengthen security. As Microsoft President Brad Smith noted in his blog, AI advancements are revolutionizing knowledge work, enhancing our cognitive abilities, and becoming fundamental to many aspects of life. These developments present immense opportunities to improve the world: boosting productivity, fostering economic growth, and reducing monotony in jobs. They also enable creativity, impactful living, and the discovery of insights in large data sets, driving progress in fields such as medicine, science, business, and security. However, integrating AI into business operations is not without its hurdles. Companies must ensure that their AI solutions are not only robust but also ethical, dependable, and trustworthy.
How Microsoft 365 Delivers Trustworthy AI is a comprehensive document that gives regulators, IT professionals, risk officers, compliance professionals, security architects, and other interested parties an overview of the many ways Microsoft mitigates risk across the artificial intelligence product lifecycle. The document outlines the Microsoft promise of responsible AI, the Responsible AI Standard, industry-leading frameworks, laws and regulations, methods of mitigating risk, and other assurance-providing resources. It is intended for a wide range of audiences external to Microsoft who are interested or involved in the development, deployment, or use of Microsoft AI. As Charlie Bell, EVP of Security at Microsoft, writes in his blog, "As we watch the progress enabled by AI accelerate quickly, Microsoft is committed to investing in tools, research, and industry cooperation as we work to build safe, sustainable, responsible AI for all."
The commitments and standards conveyed in this paper operate at the Microsoft cloud level: these promises and processes apply to AI activity across Microsoft. Where the paper becomes product-specific, its sole focus is Microsoft Copilot for Microsoft 365. This does not include Microsoft Copilot for Sales, Microsoft Copilot for Service, Microsoft Copilot for Finance, Microsoft Copilot for Azure, Microsoft Copilot for Microsoft Security, Microsoft Copilot for Dynamics 365, or other Copilots outside of Microsoft 365.
At Microsoft, we understand the importance of trustworthy AI. We have formulated a comprehensive strategy for responsible and secure AI that focuses on specific business challenges such as safeguarding data privacy, mitigating algorithmic bias, and maintaining transparency. This whitepaper describes our strategy for mitigating AI risk as part of the Microsoft component of the AI Shared Responsibility Model.
The document is divided into macro sections with relevant articles within each:
- Responsible and Secure AI at Microsoft – this section focuses on Microsoft’s commitment to responsible AI and what this looks like in practice. The articles within address key topics including:
- The Office of Responsible AI – read this to gain a deeper understanding of what comprises this division within Microsoft.
- The Responsible AI Standard and Impact Assessment – every Microsoft AI project must adhere to the Responsible AI Standard and have a valid impact assessment completed.
- Microsoft’s voluntary White House commitments – learn more about the voluntary commitments Microsoft made at the White House and how these principles shape our development and deployment practices.
- Artificial Generative Intelligence Security team – learn about Microsoft’s center of excellence for generative AI security and the initiatives this team is driving.
- Addressing New Risk – this section centers on the ways Microsoft is continuously improving its security practices and service design to mitigate the new risks brought forth by the era of AI. As Brad Smith states in his blog, “Even as recent years have brought enormous improvements, we will need new and different steps to close the remaining cybersecurity gap.” This section addresses many actions Microsoft takes against both novel and preexisting risks in the era of AI. The articles within address salient topics including:
- The Copilot Copyright Commitment – how Microsoft addresses the risk of customers inadvertently using copyrighted material via Microsoft AI services.
- Updating the Security Development Lifecycle (SDL) to address AI risk – the ways Microsoft has adapted our SDL to identify and prioritize AI-specific risks.
- Copilot tenant boundaries and data protection with shared binary LLMs – this article describes how your data remains protected throughout the data flow to the Copilot LLMs and back to your end user in this multi-tenant environment.
- Copilot data storage and processing – this section answers the question, “What are the data storage and processing commitments applicable to Microsoft 365 Copilot today?”
- AI-specific regulations and frameworks for assurance – this section describes upcoming regulations relevant to artificial intelligence and how Microsoft plans to address each. Regulations and frameworks addressed include:
- European Union AI Act
- ISO 42001 AI Management System
- Cyber Executive Order (EO 14028)
- NIST AI Risk Management Framework
- Assurance-Providing Resources – this comprises miscellaneous resources that give customers assurance that Microsoft is mitigating risk as part of the shared responsibility model.
- Defense-in-depth: controls preventing model compromise in the production environment – this article outlines the full Microsoft control set designed to mitigate model compromise through defense-in-depth.
As with everything Microsoft does, this whitepaper is subject to continuous update and improvement. Please reach out to your Microsoft contacts if you have questions about this content; thank you for your continued support and use of Microsoft AI.
Download the Whitepaper
We hope this overview has given you valuable insight into how Microsoft delivers trustworthy AI across its products and services. To learn more about our responsible and secure AI strategy, download the full whitepaper here: https://aka.ms/TrustworthyAI. The whitepaper covers the topics outlined above in depth, including detailed information on how Microsoft Copilot for Microsoft 365 adheres to these principles and practices. Download it today and discover how Microsoft can help you achieve your AI goals with confidence and trust.