More than 200 of the world’s leading researchers in artificial intelligence (AI) have signed an open letter calling on big players in AI like OpenAI, Meta, and Google to allow outside experts to independently evaluate and test the safety of their AI models and systems.
The letter argues that strict rules put in place by tech firms to prevent abuse or misuse of their AI tools are having the unintended consequence of stifling critical independent research aimed at auditing these systems for potential risks and vulnerabilities.
Prominent signatories include Stanford University’s Percy Liang, Pulitzer-winning journalist Julia Angwin, Renée DiResta from the Stanford Internet Observatory, AI ethics researcher Deb Raji, and former government advisor Suresh Venkatasubramanian.
What are the AI researchers concerned about?
The researchers say AI company policies that ban certain types of testing and prohibit copyright violations, the generation of misleading content, and other abuses are being applied in an overly broad manner. This has created a “chilling effect,” where auditors fear having their accounts banned or facing legal repercussions if they push the boundaries to stress-test AI models without explicit approval.
“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter states.
The letter lands amid growing tensions, with AI firms like OpenAI claiming that The New York Times’ efforts to probe ChatGPT for copyright issues amounted to “hacking.” Meta, meanwhile, has updated its terms to threaten revoking access to its latest language model if it is used to infringe intellectual property.
Researchers argue companies should provide a “safe harbor” allowing responsible auditing, as well as direct channels to responsibly report potential vulnerabilities found during testing, rather than having to resort to “gotcha” moments on social media.
“We have a broken oversight ecosystem,” said Borhane Blili-Hamelin of the AI Risk and Vulnerability Alliance. “Sure, people find problems. But the only channel to have an impact is these ‘gotcha’ moments where you have caught the company with its pants down.”
The letter and accompanying policy proposal aim to foster a more collaborative environment for external researchers to evaluate the safety and potential risks of AI systems impacting millions of consumers.
The post “200 AI researchers urge OpenAI, Google, Meta to allow safety checks” by Sam Shedden was published on 03/06/2024 by readwrite.com