
The Club PUBlication  06/12/2023


AI poses ‘risk of extinction,’ industry leaders warn
Fears grow that the technology could soon spread misinformation and eliminate white-collar jobs.

By KEVIN ROOSE • New York Times

A group of industry leaders warned Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads a one-sentence statement released by the nonprofit Center for AI Safety. The open letter was signed by more than 350 executives, researchers and engineers working in AI.

The signatories included top executives from three of the leading AI companies: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered "godfathers" of the modern AI movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta's AI research efforts, had not signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of AI. Recent advancements in so-called large language models — the type of AI system used by ChatGPT and other chatbots — have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, AI could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

This month, Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation of AI for its potential harms.

Dan Hendrycks, executive director of the Center for AI Safety, said in an interview that the open letter represented a "coming out" for some industry leaders who privately had expressed concerns about the risks of the technology they were developing.

"There's a very common misconception, even in the AI community, that there only are a handful of doomers,"  Hendrycks said. "But, in fact, many people privately would express concerns about these things."

Some skeptics argue that AI technology is still too immature to pose an existential threat. When it comes to today's AI systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that AI is improving so rapidly that it has already surpassed human-level performance in some areas and that it will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that "artificial general intelligence," or AGI, a type of AI that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

In a blog post last week, Altman and two other OpenAI executives proposed several ways that powerful AI systems could be responsibly managed. They called for cooperation among the leading AI makers, more technical research into large language models and the formation of an international AI safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

In March, more than 1,000 technologists and researchers signed another letter calling for a six-month pause on the development of the largest AI models, citing concerns about "an out-of-control race to develop and deploy ever more powerful digital minds."

That letter, which was organized by another AI-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading AI labs.

The brevity of the new statement from the Center for AI Safety — just 22 words — was meant to unite AI experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring but who share general concerns about powerful AI systems, Hendrycks said.

The statement was initially shared with a few high-profile AI experts, including Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of AI. From there, it made its way to several of the major AI labs, where some employees then signed on.

The urgency of AI leaders' warnings has increased as millions of people have turned to AI chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.

"I think if this technology goes wrong, it can go quite wrong," Altman told the Senate subcommittee. "We want to work with the government to prevent that from happening."
