AI Explainer



A short explainer on AI and its key potential dangers.

Narrow AI vs. General AI

It is important to distinguish between the different types of AI:

  • Narrow AI: AI that can only perform specific tasks (e.g. AlphaGo, AlphaFold).
  • General AI: AI that can perform many different tasks across many different domains (e.g. AI chatbots like ChatGPT, Claude, and Gemini).
  • Artificial General Intelligence (AGI): a form of General AI that can perform any cognitive task a human can, as well as or better than a human.
  • Artificial Super Intelligence (ASI): like AGI, but vastly exceeding human performance at every cognitive task.

Narrow AI systems are much safer than General AI systems

While the idea of General AI sounds appealing at first, generality cuts both ways: a system capable of many different tasks has many possible benefits, but also many possible dangers.

Society can benefit greatly from Narrow AI systems. We have already been benefitting from them for many years, and they continue to grow more sophisticated. One of the most beneficial AI systems so far is AlphaFold, developed by Google DeepMind. AlphaFold is a Narrow AI that has largely solved the protein-structure prediction problem for biologists, leading to many breakthroughs in biology and medicine.


Despite the risks, Big AI is currently racing to create General AI systems

Many Big Tech companies have stated goals of creating Artificial General Intelligence (AGI), and some have stated goals of creating Artificial Super Intelligence (ASI). This is despite the fact that the CEOs of many of these companies believe General AI could become so dangerous that it represents an existential risk to humanity.


How dangerous could AI potentially become?

Many computer scientists and AI experts believe it is possible that AI could become incredibly dangerous to humanity, potentially threatening our very existence. Many such experts signed the following statement in 2023:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

See https://aistatement.com/ for the full list of signatories; it includes people like Bill Gates, two of the three “godfathers of AI” (Geoffrey Hinton and Yoshua Bengio), and the CEOs of several of the Big Tech companies currently racing to develop AGI:

  • Demis Hassabis (CEO of Google DeepMind, makers of Gemini AI),
  • Sam Altman (CEO of OpenAI, makers of ChatGPT),
  • Dario Amodei (CEO of Anthropic, makers of Claude AI).

Striving for technorealism, rather than technophobia or techno-optimism

Is this fear of the potential impacts of AGI simply a case of technophobia? Or Luddism?

Consider that the CEOs of the Big AI companies are neither Luddites nor technophobes; in fact, they are usually techno-optimists. Yet many of them have acknowledged the immense dangers posed by AGI.

Ideally, we should strive to be technorealists rather than technophobes or techno-optimists. We should seek to understand the societal impacts of AI, good or bad, and not be afraid to point out when those impacts are likely to be net negative. And we should try to predict the future impacts of AI and AGI as accurately as we can, while guarding against our own personal biases.