The Future of Life Institute (FLI) has released findings from its 2024 AI Safety Index, which found that six leading AI-focussed companies had not adequately secured the relatively untried and untested technology.
The companies assessed by an independent panel of experts were Anthropic, Google DeepMind, Meta, OpenAI, x.AI, and Zhipu AI.
Specifically, FLI, a nonprofit organisation that seeks to mitigate AI threats, said that these prominent companies were “vulnerable to adversarial attacks” and had “no adequate strategy” to ensure that their AI systems remain beneficial and “under human control”.
One of the panellists reviewing AI safety, Stuart Russell, a Professor of Computer Science at UC Berkeley, shared his insights on the recent AI Safety Index:
“The findings of the AI Safety Index project suggest that although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective.
“In particular, none of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data. And it’s only going to get harder as these AI systems get bigger.
“In other words, it’s possible that the current technology direction can never support the necessary safety guarantees, in which case it’s really a dead end.”
2024 AI Safety Index Findings
The review panel graded each company across six categories: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication.
The panel found that although some of these companies demonstrated commendable safety practices in certain areas, significant gaps in risk management remained between them.
Furthermore, all of the flagship models were found to be vulnerable to adversarial attacks and, despite the companies’ claims to the contrary, none had an effective strategy in place to ensure that their systems behave as intended.
Panellist David Krueger, Assistant Professor at the Université de Montréal and a core member of Mila, expressed his concern in no uncertain terms:
“It’s horrifying that the very companies whose leaders predict AI could end humanity have no strategy to avert such a fate.”
Each of the six companies received an overall safety grade. Anthropic topped the list with a “C”, followed by Google DeepMind and OpenAI with a “D+” each, Zhipu AI with a “D”, and x.AI with a “D-”.
Bottom of the list and most concerning of all, Meta received an “F”. The tech giant led by Mark Zuckerberg scored highest for “Current Harms”, at a mere “D”, with its overall grade dragged down by “F” grades in safety frameworks, existential safety strategy, and transparency and communication.
FLI President Max Tegmark, an AI researcher and professor at MIT, explained why the organisation began this AI safety assessment: “We launched the Safety Index to give the public a clear picture of where these AI labs stand on safety issues.
“The reviewers have decades of combined experience in AI and risk assessment, so when they speak up about AI safety, we should pay close attention to what they say.”
In September, Microsoft, Google, and Amazon were among the 100 companies to have signed the EU’s Artificial Intelligence Pact so far.
Interestingly, Meta – which received the lowest safety rating – was one prominent name that did not sign the commitment, although it has said it is open to discussions. Apple was another missing signatory from the “Big Four” technology companies.