The Biden Administration has launched the U.S. AI Safety Institute Consortium (AISIC), bringing together more than 200 companies and organizations, including major AI rivals such as Amazon, Google, Apple, Microsoft, OpenAI, and NVIDIA. The consortium aims to support the development and deployment of safe and trustworthy artificial intelligence, following President Biden’s Executive Order calling for safety standards that protect the innovation ecosystem.
A key objective of AISIC is to collaborate with international partners to develop interoperable AI safety tools on a global scale. The Biden-Harris administration has enlisted Big Tech companies and other AI industry stakeholders to address the safety and trustworthiness of AI development.
To fulfill the mandates outlined in President Biden’s AI executive order, the U.S. Department of Commerce created AISIC through the National Institute of Standards and Technology (NIST). Its membership spans major tech players such as OpenAI, Google, Microsoft, Apple, Amazon, Meta, NVIDIA, Adobe, and Salesforce, as well as academic institutions including MIT, Stanford, and Cornell.
The consortium’s goals include developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. These guidelines are intended to ensure that AI systems are developed and deployed in a way that prioritizes safety and addresses potential risks.
The move marks a significant step by the U.S. government toward regulating AI and addressing concerns related to national security, privacy, surveillance, election misinformation, and job security. By bringing together major AI players and academic institutions, AISIC aims to foster collaboration and drive advances in AI safety.
Overall, the establishment of the U.S. AI Safety Institute Consortium reflects growing recognition of the need for robust safety measures in AI development. With top industry leaders and academic institutions on board, the consortium is well positioned to shape the future of AI and help ensure its responsible and trustworthy use.