The Biden-Harris Administration has announced that it secured a second round of voluntary safety commitments from eight prominent AI companies. The announcement took place at the White House and was attended by representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
These companies have pledged to play a key role in fostering the development of safe, secure, and trustworthy AI. The administration, under President Biden and Vice President Harris, is also developing an Executive Order and pursuing bipartisan legislation to ensure that the United States leads responsibly in AI development, unlocking the technology's potential while managing its risks.
The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. First and foremost, they commit to ensuring that their products are safe before introducing them to the public.
They emphasize rigorous internal and external security testing of their AI systems, including assessments by independent experts. Such measures guard against significant AI risks, encompassing biosecurity, cybersecurity, and broader societal effects. Moreover, they actively pledge to share information on AI risk management with various stakeholders, including governments, civil society, academia, and the industry.
Building systems with security as a top priority is another core commitment. The companies vow to invest in cybersecurity and insider-threat safeguards to protect their proprietary and unreleased model weights.
Recognizing the critical importance of these model weights in AI systems, they pledge to release them only when appropriate and after effectively addressing security risks. Additionally, the companies assure that they will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures the prompt identification and resolution of issues, even after an AI system is deployed.
Earning the public’s trust is a vital aspect emphasized by these companies. To enhance transparency and accountability, they will develop robust technical mechanisms, such as watermarking systems, to indicate when content is AI-generated. By doing so, they aim to foster creativity and productivity while minimizing the risks of fraud and deception.
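The announcement does not specify how such watermarking would work. As a purely illustrative sketch, one published approach for text is a "green-list" statistical watermark, in which the generator pseudo-randomly prefers tokens from a secret, seed-derived subset of the vocabulary, and a detector later measures how often that subset appears. The key name and functions below are hypothetical, not drawn from any company's actual system:

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical shared secret between generator and detector


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign roughly half of all tokens to the 'green list',
    seeded by the preceding token and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that fall on the green list. Watermarked text that
    favored green tokens during generation scores well above the ~0.5
    level expected by chance; unmarked human text hovers near 0.5."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A real detector would compare this fraction against a threshold using a statistical test over many tokens; the sketch only conveys the core idea that the watermark is a detectable bias rather than visible metadata.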
Furthermore, the companies commit to publicly reporting their AI systems' capabilities, limitations, and domains of appropriate and inappropriate use. This reporting will cover both security and societal risks, including fairness and bias. They also emphasize prioritizing research into the societal risks posed by AI systems, particularly harmful bias and discrimination.
These leading AI companies are committed to the development and deployment of advanced AI systems to tackle significant societal challenges. This extends to areas such as cancer prevention and climate change mitigation, contributing to the prosperity, equality, and security of all.
The Biden-Harris Administration’s engagement in these commitments goes beyond the borders of the United States. Consultations with international partners and allies are actively taking place, reinforcing the global nature of these initiatives. The commitments made by these companies align with other global efforts, including the UK’s Summit on AI Safety, Japan’s leadership in the G-7 Hiroshima Process, and India’s role as Chair of the Global Partnership on AI.
This announcement marks a significant milestone on the path toward responsible AI development. Industry leaders and the government are joining forces to ensure that AI technology benefits society while mitigating its inherent risks.