Essay: What does it mean to be human in the age of AI?

Have artificial intelligence algorithms already exceeded what our human brains can achieve?

The tech world is abuzz with the extraordinary power of what is now called generative AI (GAI): programs that can create new content, including text, audio and images, on the basis of patterns learned from vast amounts of training data. Examples include text generators such as ChatGPT and Google’s Bard, image creators such as DALL-E and Stable Diffusion, music generators like MuseNet, and computer code generators such as Codex.

ChatGPT was made available to the general public in November 2022 and is said to have been the fastest-growing internet service ever, reaching 100 million users within two months of launch. OpenAI, the company behind the program, struck a reported $10 billion deal with Microsoft, and the technology is now built into Office software and the Bing search engine. Google responded by fast-tracking the rollout of its own chatbot, Bard, and is currently working on more advanced AI models. Meta has released its own powerful large language model (LLM), called LLaMA, making it freely available to coders around the world. Other tech companies are preparing to release generative programs which are said to be even more sophisticated and human-like.

At the time of writing, ChatGPT alone has over 100 million regular users worldwide and the website is currently receiving around 1.5 billion visits per month. It is said that more than a quarter of the UK population have now used a generative AI program, and that one in three US college students regularly use AI to help with their homework.

As engaging with generative AI chatbots and other AI programs becomes part of normal everyday life, it’s important to ask how this is changing our self-understanding, our perception of what it means to be human. We are all being confronted with digital machines that seem to be straight out of science fiction: computers with whom we can have apparently intelligent conversations; mysterious virtual beings who seem to understand us and who, while apparently having instantaneous access to the sum total of human knowledge, can also respond in real time in unpredictable, original and often surprisingly creative ways.

We know, in theory, that what the program spits out is merely the result of billions of mathematical calculations and probabilistic statistics, and yet we find ourselves being drawn into the sense, the illusion, of meeting another mind. So if it is possible for a computer constructed out of microchips and wires to ‘think’, to ‘understand’, to be ‘creative’, what does this tell us about ourselves?
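
To make the first half of that sentence concrete, here is a minimal sketch in Python of the basic loop a text generator runs: score the candidate next words, pick one at random in proportion to its probability, and repeat. The probability table below is invented purely for illustration; a real model computes these probabilities with billions of learned parameters.

```python
import random

# A toy stand-in for a language model: given the text so far, return a
# probability for each candidate next word. Real models compute these
# numbers with billions of learned parameters; this table is invented.
def toy_next_word_probs(text):
    return {"mind": 0.4, "machine": 0.35, "mystery": 0.25}

def generate(prompt, steps=3):
    text = prompt
    for _ in range(steps):
        probs = toy_next_word_probs(text)
        # Sample the next word in proportion to its probability;
        # this is why the same prompt can yield different replies.
        next_word = random.choices(list(probs), weights=list(probs.values()))[0]
        text += " " + next_word
    return text

print(generate("The brain is a"))
```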

For many scientists and technologists this is merely the confirmation of what they have believed all along: that the human brain is at root a computational mechanism. “The brain happens to be a meat machine,” stated Marvin Minsky, the artificial intelligence pioneer.1 To philosopher Daniel Dennett, “we’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.”2 After all, the engineers point out, the neural networks employed in large language models are based on a highly simplified characterisation of a network of human brain cells.
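
The scale of that simplification is easy to show. Below is a minimal Python sketch of the single artificial ‘neuron’ that such networks stack in their billions (all numbers invented for illustration): a biological brain cell, with its thousands of synapses and complex electrochemical dynamics, is reduced to a weighted sum followed by a simple threshold.

```python
# One artificial 'neuron': multiply each input by a learned weight,
# add the results together, and 'fire' only if the total is positive.
def artificial_neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, activation)  # the ReLU threshold used in many networks

# Invented inputs and weights, purely for illustration.
print(artificial_neuron([0.5, 0.9, 0.1], [0.8, -0.4, 0.3], bias=0.1))
```

Everything a large language model ‘knows’ is encoded in the weights of vast numbers of such units, adjusted gradually during training.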

So the idea that humans are ‘machines that think’ seems to be reinforced by the evident success of generative AI. Physicist Sean Carroll wrote, “When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.”3

Back in the 17th century it was Thomas Hobbes, in his masterwork Leviathan, who was probably the first to draw a direct comparison between thinking and computation. He wrote: “For ‘reason’… is nothing but ‘reckoning,’ that is adding and subtracting, of the consequences of general names agreed upon for the ‘marking’ and ‘signifying’ of our thoughts.”4

It took several centuries for this comparison between reasoning and computation to come to prominence, but psychologist Steven Pinker points to the significance of Hobbes’ initial idea.

“Thomas Hobbes’s pithy equation of reasoning as ‘nothing but reckoning’ is one of the great ideas in human history. The cognitive feats of the brain can be explained in physical terms: To put it crudely (and critics notwithstanding), we can say that beliefs are a kind of information, thinking a kind of computation, and motivation a kind of feedback and control. This is a great idea [says Pinker] because it completes a naturalistic understanding of the universe, exorcising occult souls, spirits, and ghosts in the machine. Just as Darwin made it possible for a thoughtful observer of the natural world to do without creationism, Turing and others made it possible for a thoughtful observer of the cognitive world to do without spiritualism.”5

Pinker points to the attraction of the computer metaphor for moderns who are wedded to a purely materialistic understanding of the universe. We don’t need to worry that there might be something non-material or spiritual, some kind of transcendent purpose or meaning, hidden behind the miracle of our humanity, our minds and our conscious awareness. We can relax and enjoy ourselves. We are just information-processing machines.

What we are all having to learn is that there is a certain truth behind the idea that the brain can be viewed as a machine. There are aspects of our human functioning that can usefully be seen as machine-like; in other words, the machine is a useful metaphor for certain aspects of our humanity. The machine metaphor has been extremely successful in fields such as human physiology, molecular biology, genetics and cognitive neuropsychology, and artificial intelligence is increasingly giving us new insights into human neuroscience. But there is a critical difference between a helpful metaphor and a definition, a description of core reality. Yes, it may be helpful to say that there are ways in which a human being is like a computer; but to say that a human being is a computer is incoherent, philosophical nonsense.

A machine, by definition, is an artefact created, designed and built by human beings in order to achieve a human purpose. Yet when physicalist scientists say that the brain is a machine, they are referring to biological processes designed by no one and directed towards no ultimate purpose. You may call this biological reality an outstanding and remarkable coincidence, but you cannot call it a machine.

But this is much more profound and subtle than just a confused way of thinking; it becomes a way of perceiving. Metaphors have profound and pervasive effects on the way we look at the world, and artificial intelligence seems to be becoming a dominant paradigm by which we understand our own humanity. We talk of human beings as ‘hard-wired’, ‘suffering from information overload’, ‘programmed for failure’, ‘needing a reboot’. It is commonplace to take the computer concepts of software and hardware and apply them to our own humanity. The hardware (often dismissively referred to as wetware) is the physical stuff of our brains – nerve cells, connections, neurotransmitter chemicals. The software is the information that somehow resides in our brains – memories, perceptions, emotions, thoughts, meaning.

And we have the growing suspicion that artificial intelligence may work rather better than the grey stuff we have between our ears. A recent example of this comes from Geoffrey Hinton, a brilliant computer scientist frequently described as ‘the godfather of AI’.

For 40 years, Hinton saw artificial neural networks as a poor attempt to mimic brilliant biological ones. His thinking has now changed: in trying to mimic the way that biological brains function, he believes, computer scientists have come up with something better.

“Our brains have 100 trillion connections, whereas large language models have up to a trillion connections at the most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Hinton says that he used to believe that people had some kind of magic when it came to learning new information. But the latest large language models can learn new tasks extremely quickly and, he now argues, far more efficiently than those outmoded biological brains. This learning efficiency is the first of two reasons for his change of mind.

The second reason Hinton argues that large language models are superior to humans is their ability to network with other computers. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy. But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
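
Hinton’s point can be made concrete: everything a neural network has learned is stored as a list of numbers (its weights), so ‘sharing experience’ between copies amounts to copying or averaging those numbers. Here is a toy Python sketch, with invented three-weight ‘models’ standing in for networks that would really hold billions of weights.

```python
# Two copies of the same network after training on different experiences.
# Real models hold billions of weights; three are shown for illustration.
model_a = [0.12, -0.53, 0.88]
model_b = [0.10, -0.49, 0.91]

# Averaging the weights pools what both copies learned, instantly and
# losslessly; nothing comparable is possible between two human brains.
shared = [(a + b) / 2 for a, b in zip(model_a, model_b)]
print(shared)  # approximately [0.11, -0.51, 0.895]
```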

Hinton argues that there are now two distinct types of intelligence in the world: animal intelligence and machine intelligence. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.” He has suddenly switched his view on whether neural networks will ever be more intelligent than us: he now concludes that they will be, and in the near future. This concern led him to resign from his senior position at Google and to focus on AI regulation and safety.

Hinton is not alone in his re-evaluation of human intelligence compared with the new generative AI programs. Douglas Hofstadter, another well-known figure in cognitive science and computing, has written previously about the wonders and mysteries of human consciousness. His influential books Gödel, Escher, Bach and I Am a Strange Loop celebrated the profound and enigmatic nature of recursive loops in the cognitive structures of the human mind. He never believed that a neural network lacking these complex recursive features could develop deep cognitive abilities. But the abilities of large language models, with their simple neural architecture, have now challenged his beliefs.

“My whole intellectual edifice, my system of beliefs… It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed,” Hofstadter said in a recent podcast.7 “It felt as if not only are my belief systems collapsing, but it feels as if the entire human race is going to be eclipsed and left in the dust soon…. People ask me, ‘What do you mean by “soon”?’ And I don’t know what I really mean. I don’t have any way of knowing. But some part of me says five years, some part of me says 20 years, some part of me says, ‘I don’t know, I have no idea.’ But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard…”

“It’s not clear whether that will mean the end of humanity in the sense of the systems we’ve created destroying us. It’s not clear if that’s the case, but it’s certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

Interviewer: “That’s an interesting thought.” [nervous laughter]

Hofstadter: “Well, I don’t think it’s interesting. I think it’s terrifying. I hate it. I think about it practically all the time, every single day. And it overwhelms me and depresses me in a way that I haven’t been depressed for a very long time.”

In future articles I will return to the question of comparisons between human and machine intelligence, and whether large language models can be regarded as genuinely intelligent or whether they are merely, in the colourful language of Emily Bender and her colleagues, “stochastic parrots”.

But it is important to note the new sense of pessimism, inferiority and even self-loathing amongst some leading thinkers in the areas of computing and cognitive science.

——

1 There is some debate about the origins of the phrase, but the quote is ascribed to Minsky and was regularly used by him.

2 In an interview with John Thornhill, “Philosopher Daniel Dennett on AI, Robots and Religion,” Financial Times (London, 3 March 2017).

3 Sean Carroll, ‘What Do You Think about Machines That Think?’, Edge.org. In 2015 the website posed this question to a number of contributors and published their responses.

4 The English Works of Thomas Hobbes, (ed. William Molesworth; 11 vols.; London: John Bohn, 1839-45), 3:30.

5 Steven Pinker, ‘What Do You Think about Machines That Think?’, Edge.org.

6 Will Douglas Heaven, ‘Geoffrey Hinton tells us why he’s now scared of the tech he helped build’, MIT Technology Review, 2 May 2023.

7 ‘Douglas Hofstadter changes his mind on Deep Learning and AI’, LessWrong, 3 July 2023. https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai
