We should be calling it Pseudo-Intelligence, not Artificial Intelligence
Fugayzi, fugazi. It’s a whazy. It’s a woozie. It’s fairy dust. It doesn’t exist.
– Mark Hanna, The Wolf of Wall Street
A glimpse into the post-cyberpunk future
Here’s an excerpt from The Diamond Age, a post-cyberpunk sci-fi novel by one of my favourite authors, Neal Stephenson. Emphasis is mine.
“Oh, P.I. stuff mostly,” Hackworth said. Supposedly Finkle-McGraw still kept up with things and would recognize the abbreviation for pseudo-intelligence, and perhaps even appreciate that Hackworth had made this assumption.
Finkle-McGraw brightened a bit. “You know, when I was a lad they called it A.I. Artificial intelligence.”
– Neal Stephenson, The Diamond Age
Perhaps in the future we will (or should) be more honest about what we’re working with and use a more accurate term for what we currently label Artificial Intelligence (AI).
What we currently label “AI” has proven useful in a wide range of difficult tasks such as image and object recognition, language translation, game playing, and voice transcription and synthesis. But these “AIs” are not intelligent: the models behind them rely purely on statistical relationships in observed data and have no ability to reason about or understand what they are doing.
It’s not artificial intelligence if it’s not even intelligence
… all the impressive achievements of deep learning amount to just curve fitting.
– Judea Pearl
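To unpack the “curve fitting” charge: in the standard supervised-learning framing, training a deep network means choosing parameters that minimise a loss over observed examples. The display below is the generic empirical-risk-minimisation template, not any particular model’s objective.

$$\hat{\theta} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell\big(f_{\theta}(x_i),\, y_i\big)$$

Here $f_{\theta}$ is a very flexible family of functions (the neural network), the $(x_i, y_i)$ are training pairs and $\ell$ is a loss function. However expressive $f_{\theta}$ is, the procedure amounts to fitting a function to data points, which is Pearl’s point.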
There are numerous, very impressive, seemingly intelligent AI models. GPT-3 can produce convincing, grammatically correct text in a surprisingly wide range of domains. Amongst many cool applications, it can write stories, blog posts and poems, summarise articles, answer questions and provide plausible medical diagnoses given a list of symptoms. It can even write functioning code and produce answers to simple logic puzzles. However convincing, even reasoned, the model outputs are, there is no reasoning occurring. GPT-3 merely predicts the next word in the sequence given the previously observed words. It doesn’t reason mathematically about whether “2+2=4”; it merely recognises that, in the data it was trained on, “4” typically comes after “2+2=”. If there is any suggestion of reasoning in GPT-3’s output, it’s because some collection of words resembling a pattern of reasoning was statistically relevant in the training data for similar prompts or contexts. GPT-3 (or newer language models) might even be able to convincingly pass the Turing test and give an impression of consciousness, but it is just a Chinese room that lacks understanding and intentionality and is thus not actually doing any ‘thinking’.
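To make “predicting the next word” concrete, here is a deliberately tiny sketch in Python. It uses bigram counts rather than anything resembling GPT-3’s actual transformer architecture, and the one-line corpus is invented for illustration, but the principle is the same: emit whichever token most often followed the current context in the training data.

```python
from collections import Counter, defaultdict

# A made-up toy corpus. Real language models train on billions of
# documents, but the principle is the same: count what follows what.
corpus = "2 + 2 = 4 . 2 + 2 = 4 . 2 + 2 = 5 . 1 + 1 = 2 .".split()

# For each token, count how often every other token follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token.

    There is no arithmetic here: "4" wins after "=" only because it
    appeared more often than the alternatives in the training data.
    """
    return following[token].most_common(1)[0][0]

print(predict_next("="))  # -> '4', by frequency, not by reasoning
```

Feed it a corpus where “5” follows “2+2=” more often and it will happily “compute” that 2+2=5; the prediction tracks the statistics, not the mathematics.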
DALL·E is another impressive model that can generate realistic and convincing images given a text description. Consider the example below:
(Image: DALL·E-generated armchairs in the shape of an avocado. Source: OpenAI.)
On the surface, it might appear that DALL·E is being creative and using its understanding of what an avocado is, what an armchair is, or even what a shape is. But again, the model output is based purely on the statistical relationships observed in the large corpus of text-image pairs that the neural network was trained on (with a very clever selection of neural net architectures and loss functions).
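As a caricature of that idea (and it is only a caricature: DALL·E actually models text and image tokens jointly with a large transformer, and nothing below is its real method), here is a sketch in which “generation” is nothing more than applying a map fitted to observed caption-image pairs. All arrays and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned embeddings of captions and of their paired
# images. In reality these would come from a huge corpus of text-image
# pairs; here they are synthetic.
captions = rng.normal(size=(1000, 16))   # 1000 captions, 16-dim each
true_map = rng.normal(size=(16, 32))
images = captions @ true_map + 0.1 * rng.normal(size=(1000, 32))

# "Training" is curve fitting: find the linear map that best explains
# the observed caption -> image pairs (ordinary least squares).
W, *_ = np.linalg.lstsq(captions, images, rcond=None)

# "Generation" for an unseen caption is just applying the fitted map.
# The output is whatever the statistics of the training pairs dictate;
# nowhere is there a concept of an avocado or an armchair.
new_caption = rng.normal(size=(1, 16))
generated_image_embedding = new_caption @ W
```

A real text-to-image model replaces the linear map with an enormously expressive network and the least-squares fit with gradient descent on a cleverer loss, but the relationship between training data and output is of the same statistical kind.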
MuZero, AlphaStar and OpenAI Five are examples of AIs that have demonstrated that reinforcement learning techniques can be used to build agents capable of superhuman performance in a range of extremely complex games¹. These AIs exhibit traits of intelligence: they appear to plan and have long-term goals, and they can update their strategies and determine new, effective courses of action given new information. But still, these learned behaviours are the result of statistical relationships in observed state, action and reward combinations. These agents do not have general models of the world and cannot climb to the top of Pearl’s ladder of causation (from association, to intervention, to counterfactuals), which would be essential for any intelligent agent to properly imagine and reason about scenarios it hasn’t seen².
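For a concrete, if toy, illustration, here is tabular Q-learning on a five-state corridor in Python. It bears no resemblance to MuZero’s learned world model and tree search; the environment, constants and reward are all invented. The point is only that the agent’s apparent “strategy” is an estimate accumulated from observed (state, action, reward) triples.

```python
import random

# A tiny deterministic "game": five states in a row, with a reward only
# for reaching the right end. Nothing like Go or StarCraft II, but the
# learning signal is the same kind: (state, action, reward) statistics.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Q[s][a] is a running statistical estimate of future reward for taking
# action a in state s. This table *is* the agent's entire "strategy".
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.5, 0.9  # learning rate and discount factor

random.seed(0)
for _ in range(200):  # 200 episodes of experience
    s = 0
    while s != GOAL:
        a = random.randrange(2)  # act randomly; Q-learning is off-policy
        s2, r = step(s, ACTIONS[a])
        # The entire "learning" step: nudge the estimate toward the
        # observed reward plus the discounted estimate of future reward.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned "plan" (always step right) falls out of the statistics.
print([["left", "right"][Q[s].index(max(Q[s]))] for s in range(N_STATES)])
```

What looks like a plan to head for the goal is a table of reward statistics; scale the table up to a deep network and the environment up to StarCraft II and the flavour of the criticism is unchanged.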
Furthermore, all of these AIs are examples of Weak or Narrow AI. They are only useful or effective in the domains they are specifically trained for.
Is there any reason to pretend it’s not Pseudo-Intelligence?
In the future, we may produce a conscious intelligence that can reason, imagine, plan, have goals and desires, continually learn in any domain, and have whatever other properties are deemed necessary for a true artificial intelligence. Until then, we’re just making computers apply sophisticated functions to some data or set of inputs and pretending the program is ‘intelligent’.
More
- Great video explanations of GPT-3, DALL·E, MuZero and AlphaStar by Yannic Kilcher.
- Artificial Intelligence is stupid and causal reasoning won’t fix it, John Mark Bishop.
- The Book Of Why, Judea Pearl and Dana Mackenzie.
1. Before the deep learning revolution, it was thought that this level of performance was at least a decade of algorithm and hardware improvements away.
2. I believe some claim that reinforcement-learning-based agents can climb to the second rung.