
Top AI researcher dismisses AI ‘extinction’ fears, challenges ‘hero scientist’ narrative

By newsmagzines

Jun 1, 2023


Kyunghyun Cho, a prominent AI researcher and an associate professor at New York University, has expressed frustration with the current discourse around AI risk. While luminaries like Geoffrey Hinton and Yoshua Bengio have recently warned of potential existential threats from the future development of artificial general intelligence (AGI) and called for regulation or a moratorium on research, Cho believes these “doomer” narratives are distracting from the real issues, both positive and negative, posed by today’s AI.

In a recent interview with VentureBeat, Cho — who is highly regarded for his foundational work on neural machine translation, which helped lead to the development of the Transformer architecture that ChatGPT is based on — expressed disappointment about the lack of concrete proposals at the recent Senate hearings related to regulating AI’s current harms, as well as a lack of discussion on how to boost beneficial uses of AI.

Though he respects researchers like Hinton and his former supervisor Bengio, Cho also warned against glorifying “hero scientists” or taking any one person’s warnings as gospel, and offered his concerns about the Effective Altruism movement that funds many AGI efforts. (Editor’s note: This interview has been edited for length and clarity.)

VentureBeat: You recently expressed disappointment about the recent AI Senate hearings on Twitter. Could you elaborate on that and share your thoughts on the “Statement on AI Risk” signed by Geoffrey Hinton, Yoshua Bengio and others?

Kyunghyun Cho: First of all, I think that there are just too many letters. Generally, I’ve never signed any of these petitions. I always tend to be a bit more careful when I sign my name on something. I don’t know why people are just signing their names so lightly. 

As far as the Senate hearings, I read the entire transcript and I felt a bit sad. It’s very clear that nuclear weapons, climate change, potential rogue AI, of course they can be dangerous. But there are many other harms that are actually being made by AI, as well as immediate benefits that we see from AI, yet there was not a single potential proposal or discussion on what we can do about the immediate benefits as well as the immediate harms of AI.

For example, I think Lindsey Graham pointed out the military use of AI. That is actually happening now. But Sam Altman couldn’t even give a single proposal on how the immediate military use of AI should be regulated. At the same time, AI has the potential to optimize healthcare so that we can implement a better, more equitable healthcare system, but none of that was actually discussed.

I’m disappointed by a lot of this discussion about existential risk; now they even call it literal “extinction.” It’s sucking the air out of the room.

VB: Why do you think that is? Why is the “existential risk” discussion sucking the air out of the room to the detriment of more immediate harms and benefits? 

Kyunghyun Cho: In a sense, it is a great story. That this AGI system that we create turns out to be as good as we are, or better than us. That is precisely the fascination that humanity has always had from the very beginning. The Trojan horse [that appears harmless but is malicious] — that’s a similar story, right? It’s about exaggerating aspects that are different from us but are smart like us. 

In my view, it’s good that the general public is fascinated and excited by the scientific advances that we’re making. The unfortunate thing is that the scientists as well as the policymakers, the people who are making decisions or creating these advances, are only excited by such advances, positively or negatively, without being critical about them. Our job as scientists, and also as policymakers, is to be critical about many of these apparent advances that may have both positive as well as negative impacts on society. But at the moment, AGI is kind of a magic wand that they are just trying to swing to mesmerize people so that people fail to be critical about what is going on.

VB: But what about the machine learning pioneers who are part of that? Geoffrey Hinton and Yoshua Bengio, for example, signed the “Statement on AI Risk.” Bengio has said that he feels “lost” and somewhat regretful of his life’s work. What do you say to that? 

Kyunghyun Cho: I have immense respect for both Yoshua and Geoff as well as Yann [LeCun]. I know all of them pretty well; I studied under them and worked together with them. But how I view this is: Of course individuals — scientists or not — can have their own assessment of what kinds of things are more likely to happen, what kinds of things are less likely to happen, what kinds of things are more devastating than others. The choice of the distribution on what’s going to happen in the future, and the choice of the utility function that is attached to each and every one of those events, these are not like the hard sciences; there is always subjectivity there. That’s perfectly fine.

But what I see as a really problematic aspect of [the repeated emphasis on] Yoshua and Geoff … especially in the media these days, is that this is a typical example of a kind of heroism in science. That is exactly the opposite of what has actually happened in science, and particularly machine learning. 

There has never been a single scientist that stays in their lab and 20 years later comes out saying “here’s AGI.” It’s always been a collective endeavor by thousands, if not hundreds of thousands of people all over the world, across the decades.

But now the hero scientist narrative has come back in. There’s a reason why in these letters, they always put Geoff and Yoshua at the top. I think this is actually harmful in a way that I never thought about. Whenever people used to talk about their issues with this kind of hero scientist narrative I was like, “Oh well, it’s a fun story. Why not?”

But looking at what is happening now, I think we are seeing the negative side of the hero scientist. They’re all just individuals. They can have different ideas. Of course, I respect them and I think that’s how the scientific community always works. We always have dissenting opinions. But now this hero worship, combined with this AGI doomerism … I don’t know, it’s too much for me to follow. 

VB: The other thing that seems strange to me is that a lot of these petitions, like the Statement on AI Risk, are funded behind the scenes by Effective Altruism folks [the Statement on AI Risk was released by the Center for AI Safety, which says it gets over 90% of its funding from Open Philanthropy, which in turn is primarily funded by Cari Tuna and Dustin Moskovitz, prominent donors in the Effective Altruism movement]. How do you feel about that?

Kyunghyun Cho: I’m not a fan of Effective Altruism (EA) in general. And I am very aware of the fact that the EA movement is the one that is actually driving the whole thing around AGI and existential risk. I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and they think only they can solve.

Along this line, I agree with what Sara Hooker from Cohere for AI said [in your article]. These people are loud, but they’re still a fringe group within the whole society, not to mention the whole machine learning community. 

VB: So what is the counter-narrative to that? Would you write your own letter or release your own statement? 

Kyunghyun Cho: There are things you cannot write a letter about. It would be ridiculous to write a letter saying “There’s absolutely no way there’s going to be a rogue AI that’s going to turn everyone into paperclips.” It would be like, what are we doing?

I’m an educator by profession. I feel like what’s missing at the moment is exposure to the little things being done so that AI can be beneficial to humanity, the little wins being made. We need to expose the general public to this small but steady stream of successes being made here.

Because at the moment, unfortunately, the sensational stories are read more. The idea is that either AI is going to kill us all or AI is going to cure everything — both of those are incorrect. And perhaps it’s not even the role of the media [to address this]. In fact, it’s probably the role of AI education — let’s say K-12 — to introduce fundamental concepts that are not actually complicated. 

VB: So if you were talking to your fellow AI researchers, what would you say you believe as far as AI risks? Would it be focused on current risks, as you described? Would you add something about how this is going to evolve? 

Kyunghyun Cho: I don’t really tell people about my perception of AI risk, because I know that I am just one individual. My authority is not well-calibrated. I know that because I’m a researcher myself, so I tend to be very careful in talking about the things that have an extremely miscalibrated uncertainty, especially if it’s about some kind of prediction. 

What I say to AI researchers — not the more senior ones, they know better — but to my students, or more junior researchers, I just try my best to show them what I work on, what I think we should work on to give us small but tangible benefits. That’s the reason why I work on AI for healthcare and science. That’s why I’m spending 50% of my time at [biotechnology company] Genentech, part of the Prescient Design team to do computational antibody and drug design. I just think that’s the best I can do. I’m not going to write a grand letter. I’m very bad at that.

