OpenAI, Google DeepMind, and Meta Get Bad Grades on AI Safety

The just-released AI Safety Index graded six leading AI companies on their risk assessment efforts and safety procedures… and the top of the class was Anthropic, with an overall score of C. The other five companies—Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI—received grades of D+ or lower, with Meta flat-out failing.

“The purpose of this is not to shame anybody,” says Max Tegmark, an MIT physics professor and president of the Future of Life Institute, which put out the report. “It’s to provide incentives for companies to improve.” He hopes that company executives will view the index the way universities view the U.S. News & World Report rankings: They may not enjoy being graded, but if the grades are out there and getting attention, they’ll feel driven to do better next year.

He also hopes to help researchers working on those companies’ safety teams. If a company isn’t feeling external pressure to meet safety standards, Tegmark says, “then other people in the company will just view you as a nuisance, someone who’s trying to slow things down and throw gravel in the machinery.” But if those safety researchers are suddenly responsible for improving the company’s reputation, they’ll get resources, respect, and influence.

The Future of Life Institute is a nonprofit dedicated to helping humanity ward off truly bad outcomes from powerful technologies, and in recent years it has focused on AI. In 2023, the group put out what came to be known as “the pause letter,” which called on AI labs to pause development of advanced models for six months, and to use that time to develop safety standards. Big names like Elon Musk and Steve Wozniak signed the letter (and to date, a total of 33,707 have signed), but the companies did not pause.

This new report may also be ignored by the companies in question. IEEE Spectrum reached out to all the companies for comment, but only Google DeepMind responded, providing the following statement: “While the index incorporates some of Google DeepMind’s AI safety efforts, and reflects industry-adopted benchmarks, our comprehensive approach to AI safety extends beyond what’s captured. We remain committed to continuously evolving our safety measures alongside our technological advancements.”

How the AI Safety Index graded the companies

The Index graded the companies on how well they’re doing in six categories: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. It drew on publicly available information, including related research papers, policy documents, news articles, and industry reports. The reviewers also sent a questionnaire to each company, but only xAI and the Chinese company Zhipu AI (which currently has the most capable Chinese-language LLM) filled theirs out, boosting those two companies’ scores for transparency.

The grades were given by seven independent reviewers, including…

The post “OpenAI, Google DeepMind, and Meta Get Bad Grades on AI Safety” by Eliza Strickland was published on 12/13/2024 by spectrum.ieee.org