
Apocalyptic panic and AI doomerism need to give way to analysis of real risks


Jun 5, 2023




The rapid advance of generative AI marks one of the most promising technological advancements of the past century. It has evoked excitement and, like nearly all other technological breakthroughs of the past, fear. It is promising to see Congress and Vice President Kamala Harris, among others, taking the issue so seriously.

At the same time, much of the discourse on AI has been tilting further toward fear-mongering, detached from the reality of the technology. Many commentators latch on to familiar science fiction narratives of doom and destruction. The anxiety around this technology is understandable, but apocalyptic panic needs to give way to a thoughtful and rational conversation about what the real risks are and how we can mitigate them.

So what are the risks of AI? 

First, there are fears that AI could make it easier to impersonate people online and to create content that blurs the line between real and false information. These are legitimate concerns, but they are incremental additions to existing problems. We, unfortunately, already have a wealth of misinformation online. Deepfakes and doctored media exist in abundance, and phishing emails have been with us for decades.

Similarly, we know the impact that algorithms can have on information bubbles, amplifying misinformation and even racism. AI could make these problems more challenging, but it hardly created them, and AI is simultaneously being used to mitigate them.


The second set of fears is more fanciful: that AI could amass superhuman intelligence and potentially overtake society. These are the kinds of worst-case scenarios that have been embedded in society's imagination for decades, if not centuries.

We can and should consider all theoretical scenarios, but the notion that humans will accidentally create a malevolent, omnipotent AI strains credulity. It feels to me like AI's version of the claim that the Large Hadron Collider at CERN might open a black hole and consume the Earth.

Technology always wants to develop

One proposed solution, slowing technological development, is a crude and clumsy response to the rise of AI. Technology always continues to develop. It’s a matter of who develops it and how they deploy it. 

Hysterical responses ignore the real opportunity for this technology to benefit society profoundly. For example, it is enabling the most promising advances in healthcare that we've seen in over a century, and recent work suggests that the productivity gains for knowledge workers could match or exceed history's greatest leaps. Investment in this technology will save countless lives, create extraordinary economic productivity and enable a new generation of products to come to life.

A nation that limits its citizens' and organizations' access to advanced AI would be doing the equivalent of denying its citizenry the steam engine, the computer or the internet. Delaying the development of this technology would mean millions of excess deaths, a major stall in relative national productivity and economic growth, and the ceding of economic opportunity to the nations that do enable the technology's advance.

Responsible, thoughtful development

Moreover, if democratic nations encumber the development of advanced AI, they offer autocratic regimes the opportunity to catch up and reap the economic, medical and technological benefits sooner. Democratic nations must be the first to advance this technology, and they must do so in concert with the teams best equipped to deliver it, not in opposition to them.

At the same time, just as it would be a mistake to try to deny technological advancement, it would be equally foolish to let AI develop without a responsible framework. There have been some productive first steps toward this, notably the White House's AI Bill of Rights, Britain's "pro-innovation approach" and Canada's AI and Data Act. Each effort balances the imperative of driving progress and innovation with ensuring that it occurs in a responsible and thoughtful manner.

We must invest in the responsible development of AI and reject doomerism and calls for halts to progress. As a society, we must act to protect and support the domestic projects that are most likely to deliver compelling systems of AI. Leaders who know the technology best should help dispel misguided fears and refocus discourse on the current challenges at hand.

This technology is the most exciting and impactful of the coming decades. Giving our technology language, something long considered the sole domain of humanity, is an extraordinary human achievement. It's crucial for us to have constructive and open conversations about the potential ramifications, but it's equally important that the dialogue remains sober and clear-eyed and that public discourse is led by reason.

Aidan Gomez is CEO and cofounder of Cohere and was a member of the Google team that developed the backbone of advanced AI large language models.

