The Biden administration is seeking input from experts on what sorts of guardrails are appropriate for open-source artificial intelligence models.
The National Telecommunications and Information Administration announced Wednesday that it is seeking public comment on the risks and benefits of “open-source” AI systems, or AI programs that are publicly available for anyone to use or modify. Open-source AI software encourages cooperation and innovation by allowing the public to modify models and implement innovations that may not yet be available on the commercial market. But such models are also at risk of being exploited by malicious actors, which has created some hesitancy on the part of the government.
Open-source models, or “Models with Widely Available Model Weights,” the NTIA said in its announcement, have the potential to “transform research, both within computer science and through supporting other disciplines such as medicine, pharmaceutical, and scientific research.”
The public will have 30 days, starting Wednesday, to submit comments on the technology.
Advocates for the technology, such as Meta and IBM, have promoted open-source AI for years and released their own open-source models.
“We look forward to working with the Administration to share what we’ve learned from building AI technologies in an open way over the last decade so that the benefits of AI can continue to be shared by everyone,” Meta’s Vice President of Global Affairs Nick Clegg said in a statement sent to the Washington Examiner.
Open-source AI models are particularly hard to set guardrails around because their developers could live anywhere in the world, and once a model is released, anyone with internet access can alter it. Bad actors could use such models to spread misinformation or, in one extreme scenario, to develop biological weapons. Those fears have drawn scrutiny from Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), who alleged that Meta’s open-source software could be used for “spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.”
The European Union, in its AI Act, exempted the technology from most of its reporting requirements unless a model is considered “high-risk” and affects certain sectors of the economy.