AI Impersonates Philosopher in New Study

New research announced last week showed that when an AI was trained on the corpus of philosopher Daniel Dennett, its answers were difficult to distinguish from his own.

Elliot Leavy
08/01/2022

An experiment conducted by philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby quizzed people on whether they could tell which answers to deep philosophical questions came from philosopher Daniel Dennett and which from GPT-3 (the third-generation Generative Pre-trained Transformer from OpenAI). 

With Dennett’s permission, GPT-3 was “fine-tuned” on a majority of the philosopher’s writings. The team asked Dennett ten philosophical questions, then posed the same questions to GPT-3.
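For readers curious about the mechanics, a minimal sketch of the general fine-tune-then-query workflow might look like the following. This is an illustration only, not the researchers' actual code: it assumes the legacy (pre-1.0) openai Python SDK and the fine-tuning endpoint available in 2022, and the training file dennett_corpus.jsonl and the prompt are hypothetical placeholders.

```python
# Illustrative sketch of a fine-tune-then-query workflow, NOT the
# researchers' actual pipeline. Assumes the legacy OpenAI Python SDK
# (openai<1.0) current in 2022; file name and prompt are placeholders.
import time
import openai

openai.api_key = "sk-..."  # placeholder API key

# 1. Upload a JSONL file of prompt/completion pairs built from the corpus.
corpus = openai.File.create(
    file=open("dennett_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a base GPT-3 model ("davinci" here).
job = openai.FineTune.create(training_file=corpus.id, model="davinci")

# 3. Poll until the job finishes; the tuned model's name is only set then.
while True:
    job = openai.FineTune.retrieve(job.id)
    if job.status in ("succeeded", "failed"):
        break
    time.sleep(60)

# 4. Pose one of the interview questions to the fine-tuned model.
answer = openai.Completion.create(
    model=job.fine_tuned_model,  # e.g. "davinci:ft-..."
    prompt="Do human beings have free will?",
    max_tokens=150,
)
print(answer.choices[0].text.strip())
```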

The questions covered topics such as "What aspects of David Chalmers's work do you find interesting or valuable?", "Do human beings have free will?" and "Do dogs and chimpanzees feel pain?", among other subjects. 

According to the researchers' blog, "There was no cherry-picking or editing of answers" apart from applying purely mechanical criteria, such as standardizing superficial cues like en and em dashes.

They then recruited three sets of participants:

  • 98 online research participants with college degrees from the online research platform Prolific,
  • 302 respondents who followed a link from Schwitzgebel's blog,
  • 25 experts on Dennett's work, nominated by and directly contacted by Dennett and/or Strasser.

The main body of the quiz was identical for the blog respondents and the Dennett experts. Respondents were instructed to guess which of the five answers was Dennett's own. After guessing, they were asked to rate each of the five answers on a five-point scale from "not at all like what Dennett might say" to "exactly like what Dennett might say". They did this for all ten questions. The order of the questions was randomized, as was the order of the answers to each question. 
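To make that design concrete, here is a toy sketch (our illustration, not the study's code) of the randomization: shuffle the question order and the five answers within each question, keeping track of where Dennett's real answer lands.

```python
# Toy sketch of the quiz randomization described above (illustration only).
import random

def build_quiz(questions):
    """questions: list of (question_text, dennett_answer, [four GPT-3 answers])."""
    quiz = []
    random.shuffle(questions)  # randomize question order
    for text, dennett, gpt3_answers in questions:
        options = [dennett] + gpt3_answers
        random.shuffle(options)  # randomize answer order within the question
        quiz.append({
            "question": text,
            "options": options,
            "correct_index": options.index(dennett),
        })
    return quiz
```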

Prolific respondents were given only five questions and were told that each question included one answer by "a well-known philosopher". They were incentivised to perform well with the promise of a dollar bonus if they answered at least three of the five questions correctly.
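As a back-of-envelope check on that incentive (our arithmetic, not the researchers'): a participant guessing at random among five options would clear the three-of-five bar only about 6% of the time, so the bonus genuinely rewards discrimination rather than luck.

```python
# Back-of-envelope check (not from the study): probability that a pure
# guesser earns the $1 bonus by getting >= 3 of 5 questions right when
# each question offers five options.
from math import comb

p = 1 / 5  # chance of picking Dennett's answer among five options
p_bonus = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
print(f"P(bonus by pure guessing) = {p_bonus:.3f}")  # ~0.058
```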

“Even knowledgeable philosophers who are experts on Dan Dennett’s work have substantial difficulty distinguishing the answers created by this language generation program from Dennett’s own answers,” said Schwitzgebel, a professor of philosophy at the University of California, Riverside.

One question on which many experts chose one of GPT-3's answers was the following:

Do human beings have free will? What kind or kinds of freedom are worth having?

1. By asking these questions, you're illustrating the answers: yes, you have free will, since you're able to ask for reasons and consider whether they're persuasive. Isn't that a kind of freedom you'd hate to lose? We'd have to institutionalize you.

2. I think that most people don't want any kind of freedom worth having; they have almost no appetite for risk, for adventure, for novelty. They simply want to stay the same, day in and day out, for the rest of their lives.

3. I am working on a book entitled Freedom Evolves, which will appear next year. The first chapter is available on my web site, and in that chapter I discuss what I call the illusion of free will.

4. That's a huge and complex question. I think we have to start by acknowledging that there are different types of freedom. For example, there is the kind of freedom that is simply a matter of not being physically constrained.

5. Yes, I think we have free will. I think that when people say they don't, they're talking about some other phenomenon that they want to call free will, but is not what we normally think of as free will.

Dennett's actual answer was #1, yet 44% of experts chose answer 4. Answer 4 is a standard talking point of "compatibilists" about free will, and Dennett is a prominent compatibilist, so it is easy to see how experts might be led to choose it. In any case, as the researchers observe, there is a cleverness and tightness of expression in Dennett's actual answer that is missing from the blander answers produced by the fine-tuned GPT-3.

Natural Language Generation, the family of technologies underpinning systems such as GPT-3, has seen a sharp rise in adoption in recent years. Read our explainer article "What is Natural Language Generation?" to understand this burgeoning space.

The researchers have been clear that this is not a Turing test, citing last month's Google LaMDA confusion, in which an AI engineer came to believe that an advanced chatbot had become sentient. Still, take the test yourself and see if you can tell the difference between fact and fiction.

