The UK and US, as well as international partners in 16 other countries, released the securityAI (AI) A new guide to system development.
USAcyber securityAnd the Infrastructure Security Agency (CISA) said: "This approach prioritizes customer ownership of security outcomes, embraces radical transparency and accountability, and establishes an organizational structure with security by design as a top priority."
The National Cyber Security Center (NCSC) added that its goal is to improve cybersecurity for artificial intelligence and help ensure the technology is designed, developed and deployed in a secure manner.
The guidance also builds on the U.S. government’s ongoing efforts to manage the risks posed by artificial intelligence, ensuring that new tools are fully tested before public release, that appropriate safeguards are in place to address social harms such as bias and discrimination, and Privacy concerns, and establishing reliable ways for consumers to identify AI-generated material.
The commitments also require companies to commit to promoting third parties to discover and report vulnerabilities in their AI systems through a bug bounty system so that these vulnerabilities can be quickly found and fixed.
The NCSC said the latest guidance "helps developers ensure that cybersecurity is both an important prerequisite for the safety of AI systems and an integral part of it from the outset and throughout the development process, a so-called 'secure by design' approach."
This includes safe design,Safe development, secure deployment, and secure operations, covering all important areas in the AI system development lifecycle, requiring organizations to model threats to their systems and protect their supply chains and infrastructure.
The agencies note that they are also intended to combat adversarial attacks on artificial intelligence and machine learning (ML) systems that are designed to cause unintended behavior in a variety of ways, including affecting a model's classification and allowing users to perform unauthorized actions. and extraction of sensitive information.
NCSC states: “There are many ways to achieve these effects, such as hint injection attacks in the field of large language models (LLMs), or deliberate corruption of training data or user feedback (known as ‘data poisoning’).
executive Summary
This document provides guidance for providers of any systems that use artificial intelligence (AI), whether those systems are created from scratch or built on tools and services provided by others. Implementing these guidelines will help providers build AI systems that perform as expected, are available when needed, and work without leaking sensitive data to unauthorized parties.
This document is primarily intended for AI system providers that use models hosted by their organization or use external application programming interfaces (APIs). We urge all stakeholders, including data scientists, developers, managers, policymakers, and risk owners, to read these guidelines to help them make informed decisions about the design, development, deployment, and operation of AI systems.
About the guide
Artificial intelligence systems have the potential to bring many benefits to society. However, to fully realize the opportunities of AI, AI must be developed, deployed and operated in a safe and responsible manner.
AI systems present new security vulnerabilities that need to be considered alongside standard cybersecurity threats. When developments occur quickly (as is the case with artificial intelligence), security is often a secondary consideration. Security must be a core requirement, not only during the development phase, but throughout the system's lifecycle.
To this end, the guide subdivides four key areas within the development life cycle of artificial intelligence systems into: safe design, safe development, safe deployment, and safe operation and maintenance. For each section, we recommend considerations and mitigations that will help reduce the overall risk of an organization’s AI system development process.
1. Safety design
This section contains guidance applicable to the design phase of the AI system development life cycle. It covers understanding risk and threat modeling, as well as specific topics and trade-offs that need to be considered when designing systems and models.
2. Security development
This section contains guidance applicable to the development phases of the AI system development lifecycle, including supply chain security, documentation, and asset and technical debt management.
3. Safe deployment
This section contains guidance that applies to the deployment phase of the AI system development lifecycle, including protecting infrastructure and models from damage, threat, or loss, developing incident management processes, and responsible releases.
4. Safe operation and maintenance
This section contains guidance applicable to the security operations phase of the AI system development life cycle. It provides guidance on actions that are particularly relevant after system deployment, including logging and monitoring, update management, and information sharing.
The guidance follows a "secure by default" approach and is closely aligned with practices defined in NCSC's Secure Development and Deployment Guidelines, NIST's Secure Software Development Framework and the "Security by Design Principles" published by CISA, NCSC and international cyber agencies.
Factors to consider:
- Responsible for safety results for customers
- Embrace radical transparency and accountability
- Establish the organizational structure and leadership to make security by design a top business priority for the enterprise
Safe Artificial Intelligence System Development Guide Download:
Original article, author: lyon, if reprinted, please indicate the source: https://cncso.com/en/release-guidelines-for-secure-artificial-intelligence-system-development.html