The Biden administration has issued an executive order on artificial intelligence (AI) aimed at improving AI safety and security. While the order directs various US government agencies to develop guidelines for testing and using AI systems, implementing those guidelines will require action from US lawmakers and the voluntary cooperation of tech companies.
One significant aspect of the executive order is its coverage of foundation models: large AI models trained on massive datasets. If such a model is deemed to pose a serious risk to national security, national economic security, or national public health and safety, the company developing it must notify the federal government about the training process and share the results of safety testing. This provision could affect models such as OpenAI’s GPT-3.5 and GPT-4, Google’s PaLM 2, and Stability AI’s Stable Diffusion.
However, the order leaves important details and qualifications unresolved: it does not define “foundation model” precisely, nor does it specify who determines what qualifies as a threat. The US also lacks strong data protection laws, in contrast to the European Union and China, both of which have enacted laws addressing specific aspects of AI.
Despite these challenges, the Biden administration wants to be seen as proactive on AI regulation. On its own, however, the executive order will have limited impact without bipartisan legislation and resources to back it, both of which may be hard to secure during the 2024 US presidential election year.
Overall, the US is striving to establish itself as a leader in AI development, but without concrete legislation and strong data protection laws it will struggle to regulate the sector effectively and uphold democratic values within it.