Notes about “The A.I. Dilemma”

Pedro Alvarado
3 min read · Apr 23, 2023

I find it interesting that despite my Twitter feed being full of AI content, I haven’t seen many people talking about “The A.I. Dilemma”. This talk raises very interesting ideas about what is happening in the field of AI. It revolves around the question “what does responsibility look like?”, because with a technology like AI, accountability is of utmost importance.

I invite you to watch “The A.I. Dilemma” for yourselves, as I feel it is very important for everyone to be aware of these issues. A technology like AI will affect everyone sooner or later; whether it does so positively or negatively is up to us.

In this article I share with you my notes on the things I found most interesting about the talk.

The A.I. Dilemma
  • 50% of AI researchers believe there is a 10% or greater chance that humans will become extinct due to their inability to control AI.
  • When you create a new technology, you unlock a new class of responsibilities.
    If a technology confers power, it starts a race.
    If the race is uncoordinated, it ends in tragedy.
  • Humanity’s first contact with AI was social networks, in the sense that social-network algorithms are designed to make people spend more time on them.
    Social networks benefited their users in some ways but harmed them in others.
    The race in social media is a race for people’s attention.
  • The second contact between humanity and AI is generative AI.
    Like social networks, generative AI benefits us: we can be more efficient, write faster, code faster, and much more.
    But it also brings problems: the collapse of reality, the ability to falsify anything, the collapse of trust, automated cyberweapons, automated code exploitation, among others.
    With these new technologies it is possible to clone a person’s voice from only a few seconds of sample audio.
  • Until not so long ago, the different subfields of AI were kept separate. But since 2017, with the creation of transformers, all of these fields are being unified through language.
  • Large language models (LLMs, a type of AI model) have emergent capabilities that their programmers did not program.
    As these models grow in size, they become capable of things they were not programmed or trained to do, and no one knows why.
    In some cases, they even perform better at tasks they were never trained for than at the tasks they were explicitly built to do.
  • AI makes AI stronger: if you run out of data to train on, you can use the AI itself to generate synthetic data and keep training and improving.

Tracking progress is getting increasingly hard, because progress is accelerating.

This progress is unlocking things critical to economic and national security — and if you don’t skim (papers) each day, you will miss important trends that your rivals will notice and exploit.

— Jack Clark, co-founder of Anthropic, former policy director at OpenAI

  • There is a race among technology companies and AI labs to deploy LLMs into global infrastructure as quickly as possible.
    But are we at least deploying this technology to the public slowly enough to test it safely? Most AI researchers concerned about the safety of these systems don’t think so.
The graph shows how long it took different technology products to reach 100 million users. It took ChatGPT 2 months. From “The A.I. Dilemma”.
  • We can still choose what future we want with this technology.

Don’t get humanity on the (AI) plane without democratic dialogue.

  • We can selectively slow down the public deployment of LLMs.
  • Even greater AI developments are coming. And faster.

To think:

Much of the innovation in the last few decades was in search, social, crypto, and AI, because pure math is the last unregulated frontier.

Invite the regulators in and they’ll freeze innovation here just as they did in healthcare and energy.

— Naval
