This week on the Mixtape with Scott, I have a very special guest: Adam Smith, the so-called founder of economics, and the author of two best-selling books, The Theory of Moral Sentiments, published in 1759, and An Inquiry into the Nature and Causes of the Wealth of Nations (buy it now for $2800 here at eBay!), published in 1776.
I know what you’re thinking. “But Scott, that would make Adam Smith very old, even probably dead, wouldn’t it?” And you’re right on both counts! Adam Smith was a moral philosopher born in 1723 in Scotland, which literally makes him 300 years old, and yes, very dead. But I decided to push through that anyway, and a few months ago I asked ChatGPT-4 to essentially pretend to be Adam Smith for my podcast, without breaking character to show any awareness or surprise. This podcast is somewhere between a seance and a play. It is, literally, the ghost in the machine. I did a one-hour interview with ChatGPT-4, who played the part of Adam Smith, using the same style of interviewing I do with all the economists on the show: personal stories. This was all done in the ChatGPT-4 browser, and it was then recorded using Amazon Polly text-to-speech with a British male voice named “Arthur.”
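For the curious, a workflow like this can be scripted end to end. Here is a minimal sketch of the text-to-speech step using the AWS SDK for Python (boto3) and Polly's "Arthur" voice. The helper function, sample text, and output file name are illustrative, not the actual script used for the episode, and running the final block assumes you have AWS credentials configured.

```python
# Sketch: render an interview transcript to audio with Amazon Polly.
# "Arthur" is a British English male voice available on Polly's neural engine.

def build_polly_request(text: str) -> dict:
    """Assemble the parameters for Polly's synthesize_speech call."""
    return {
        "Text": text,
        "OutputFormat": "mp3",
        "VoiceId": "Arthur",   # British English male voice
        "Engine": "neural",    # Arthur requires the neural engine
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials to be configured

    polly = boto3.client("polly")
    params = build_polly_request("Commerce, sir, is a great engine of improvement.")
    response = polly.synthesize_speech(**params)
    # The audio comes back as a streaming body; write it out as an MP3.
    with open("adam_smith.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
```

In practice you would paste each ChatGPT response into the `text` argument (or read the whole transcript from a file) and concatenate the resulting clips.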
This is part of a class assignment I have been doing this semester at Baylor University in my History of Economic Thought class. I got the idea earlier this summer when I saw that the economist Tyler Cowen had interviewed Jonathan Swift using ChatGPT-4, so I decided to build a similar assignment into my classes. My students had to interview four economists from the 18th to early 20th centuries, with the final project being a recorded interview much like mine, and to show them it could be done, I interviewed Adam Smith. And boy was it fun. It was fun because of how novel it was, but it was also fun because of how thought provoking it was for me to learn about Smith’s first book, The Theory of Moral Sentiments, and to listen to ChatGPT-4 speculate about the book’s connections to other ideas. I was mesmerized by the entire experience and really didn’t know what to make of it. After all, language models hallucinate; I already knew this. But then it dawned on me: this entire interview is a hallucination. What does it mean for a large language model to “be” Adam Smith when in fact Adam Smith never said any of these words? It means for ChatGPT-4 to hallucinate. The question, though, is this: is it a good hallucination or a bad one, how do we judge that, and should we even care? I wonder if hallucinating is a feature, not a bug, of ChatGPT-4.
Is this any good? Is it something useful? I think so. Students seemed to have gotten a lot out of it. It requires the suspension of disbelief, but then so does watching fantasy or reading science fiction. Your mileage may vary on how much you enjoy it, and maybe the things we discuss aren’t so profound, but I didn’t know a lot about Smith before doing this. So it was just nice to listen and learn more about the man, though a Smith scholar will need to tell me what’s accurate and what isn’t (as I said, technically it’s inaccurate from start to finish by definition).
My PhD student, Jared Black, is in my history of economic thought class and has enjoyed being able to interrogate these old economists and their ideas. He decided to create his own GPT chatbot using OpenAI’s builder environment and said I could share it.
Ask to talk to Bentham, Nassau Senior, Say, or Marx. Just remember to be polite. A recent RCT found that if you’re nice to ChatGPT-4, it tends to perform tasks better. I swear I saw that study, but now I can’t find it; it seems true, though, so I’m going to cite it anyway.
Thanks again for tolerating me on this podcast. Even though this may seem gimmicky, in a way it is fully consistent with the show’s premise. The show is about the personal stories of economists and the hope that by simply listening to economists’ stories, we can better understand our own story. The hope, too, is that in the long run, we hear a story of the profession itself. After all, we use stories to navigate our lives, and though stories, like models, are in some sense “wrong,” sometimes they are useful. This story is wrong, too, but maybe it’ll be useful. Peace!