
AI experts challenge ‘doomer’ narrative, including ‘extinction risk’ claims


May 31, 2023

Top AI researchers are pushing back on the current ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). That narrative was amplified by yesterday’s Statement on AI Risk, signed by hundreds of experts including the CEOs of OpenAI, DeepMind and Anthropic, which warned of a “risk of extinction” from advanced AI if its development is not properly managed.

Many say this ‘doomsday’ take, with its focus on existential risk from AI, or x-risk, comes at the expense of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and cybersecurity. The truth, they emphasize, is that most AI researchers are not focused on or highly concerned about x-risk.

“It’s almost a topsy-turvy world,” Sara Hooker, head of the nonprofit Cohere for AI and former research scientist at Google Brain, told VentureBeat. “In the public discourse, [x-risk] is being treated as if it’s the dominant view of this technology.” But, she explained, at scientific conferences such as the recent International Conference on Learning Representations (ICLR) in early May, which attracts researchers from all over the world, x-risk was a “fringe topic.”

“At the conference, the few researchers who were talking about existential threats said they felt marginalized because they were in the minority,” she said.

Normalizing existential AI threats through repetition

Mark Riedl, associate professor at the Georgia Institute of Technology, pointed out that those concerned with existential risk are not a monolithic group — they range from those who are convinced we have already crossed a threshold, like Eliezer Yudkowsky, to those who believe it is imminent and inevitable, like OpenAI’s Sam Altman, as well as those in wait-and-see mode and those who don’t see an obvious path to AGI without some new breakthrough. But, he said, statements by prominent researchers and leaders of prominent tech companies seem to be receiving an outsized amount of attention on social media and in the press.

“Existential threats are often reported as fact,” he told VentureBeat. “This goes a long way to normalizing, through repetition, the belief that only scenarios that endanger civilization as a whole matter and that other harms are not happening or are not of consequence.” 

Yacine Jernite, a machine learning researcher at Hugging Face, pointed to a tweet by Timnit Gebru yesterday that likened the constant x-risk narrative to a DDoS attack — that is, when a cyber attacker floods a server with internet traffic to prevent users from accessing services and sites.

So much attention is flooded onto x-risk, he said, that it “takes the air out of more pressing issues,” insidiously puts social pressure on researchers focused on other, current risks, and makes it hard to hold those focused on x-risk accountable.

It also plays into issues of regulatory capture, he added, pointing to OpenAI’s recent actions as an example. “Some of these people have been pushing for an AI licensing regime, which has been rightfully attacked on grounds of pushing for regulatory capture,” he said. “The existential risk narrative plays into this by [companies saying] we’re the ones who should be making the rules for how [AI] is governed.” At the same time, OpenAI can say it will leave the EU if it is “overregulated,” he explained, alluding to last week’s threats from CEO Sam Altman.

Drowning out voices seeking to draw attention to current harms

Riedl acknowledged that the authors of the Statement on AI Risk allow that one can be concerned about long-term, low-probability events and also about near-term, high-probability harms. But this overlooks the fact that the “doomer” narrative drowns out voices that seek to draw attention to real harms occurring to real people right now, he explained.

“These voices are often from those in marginalized and underrepresented communities because they have experienced similar harms first-hand or second-hand,” he said. Also, outsized attention on one aspect of AI safety indirectly affects how resources are allocated.

“Unlike worry, which is in infinite supply, other resources like research funding (and attention) are limited,” he said. “Not only are those who are most vocal about existential risk already some of the most well-resourced groups and individuals, but their influence can shape governments, industry, and philanthropy.”

Cohere for AI’s Hooker agreed, saying that while it is good for some people in the field to work on long-term risks, the number of people doing so is currently disproportionate to the ability to accurately estimate that risk.

“My main concern is that it minimizes a lot of conversations around present day risk and in terms of allocation of resources,” she said. “I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. Because for me, that’s what a lot of researchers work on day in, day out and it displaces visibility and resources for the efforts of many researchers who work on safety.”

The bombastic views around existential risk may be “more sexy,” she added, but she said they hurt researchers’ ability to deal with things like hallucinations, factual grounding, training models to update, making models serve other parts of the world, and access to compute: “So much of researchers’ frustration right now is about how do they audit, how do they participate in building these models?”

‘Baffled by the positions these prominent people are taking’

Thomas G. Dietterich, a machine learning pioneer and emeritus professor of computer science at Oregon State University, was blunt in his assessment of yesterday’s Statement on AI Risk. “I am baffled by the positions these prominent people are taking,” he told VentureBeat. “In the parts of AI outside of deep learning, most researchers think industry and the press are wildly over-reacting to the apparent fluency and breadth of knowledge of large language models.”

Dietterich said that in his opinion, the greatest risk that computers pose is through cyberattacks such as ransomware and advanced persistent threats designed to damage or take control of critical infrastructure. “As we figure out how to encode more knowledge in computer programs (as in ChatGPT and Stable Diffusion), these programs become more powerful tools for design — including the design of cyberattacks,” he said.

So why are industry leaders and prominent researchers raising the specter of AI as an existential risk? Dietterich noted that the organizations warning of existential risk, such as the Machine Intelligence Research Institute, the Future of Life Institute, the Center for AI Safety, and the Future of Humanity Institute, obtain their funding precisely by convincing donors that AI existential risk is a real and present danger.

“While I don’t question the sincerity of the people in these organizations, I think it is always worth examining the financial incentives at work,” he said. “By the same token, researchers like me receive our funding because we convince government funding agencies and companies that improving AI software will lead to benefits in furthering scientific research, advancing health care, making economies more efficient and productive, and strengthening national defense. While the warnings about existential risk remain extremely vague, the research community has delivered concrete advances across science, industry, and government.”

Other prominent AI leaders are speaking out

Many other prominent AI researchers are speaking out, on Twitter and elsewhere, against the ‘doomer’ narrative. For example, Andrew Ng insisted yesterday that AI will be a key part of the solution to existential risks.

Meanwhile, AI researcher Meredith Whittaker, who was pushed out of Google in 2019 and is now president of the Signal Foundation, recently said that today’s x-risk alarmism from AI pioneers like Geoffrey Hinton is a distraction from more pressing threats.

“It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence,” she told Fast Company in an interview.

How to handle the ‘doomer’ narrative

For Riedl, there is room for concern about existential AI risks, though he emphasized that he has “personally yet to see claims or evidence that I find highly credible.”

However, “if only the existential risk facet of AI safety receives attention and resources, then the ability to address current, ongoing harms will be negatively impacted,” he said.

Hugging Face’s Jernite said that it is “tempting” to draft a counter-letter to the Statement on AI Risk. But he added that he won’t do that.

“The statements have so many logical holes and we can spend so much of our time and energy trying to poke holes in each of those statements,” he said. “What I found both most useful and best for my mental health is to just keep on working on the things that matter. You can’t [push back] every five minutes.”

