This semester I’m prepping from scratch a new class, listed in the course catalog as “Economics of AI”. Economics at my university (Baylor) sits in the business school, so the class will draw mainly economics majors, but I’ve made the prerequisites minimal enough to include anyone who has taken a non-major version of principles of economics called “Issues in Economics”. I’m also allowing master’s students to take it for graduate credit, and I’ve let in just about anyone who wanted in, even without the econ prerequisite. To me it doesn’t matter, because I’m going to have to reteach the economics anyway, and I’m largely going to focus on just a few topics in economics.
I’m going to share here what I’ve opted to do in it so that others can see. It’s a “beta” version of this class, so I’m hoping it’s successfully executed on the important dimensions: exciting for the students, practical, and intellectually deepening along several dimensions, both the economics dimensions and the AI dimensions too. And I ran the coin flip exercise again using Python (15 coin flips) to determine whether or not to paywall this post. I got 10 tails and 5 heads, and so this one is free!
import numpy as np
import matplotlib.pyplot as plt

# Simulate 15 trials of flipping a coin (Bernoulli distribution)
trials_15 = np.random.binomial(1, 0.5, 15)  # 1 for heads, 0 for tails

# Count heads and tails
heads_count_15 = np.sum(trials_15)
tails_count_15 = len(trials_15) - heads_count_15

# Data for the figure
labels = ['Heads', 'Tails']
counts_15 = [heads_count_15, tails_count_15]

# Create the bar plot
plt.figure(figsize=(8, 6))
plt.bar(labels, counts_15, color=['skyblue', 'salmon'], alpha=0.7)
plt.title('15 Coin Flips to Determine to Paywall the Post', fontsize=16)
plt.ylabel('Count', fontsize=12)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.grid(axis='y', alpha=0.3)

# Display the plot
plt.tight_layout()
plt.show()
Structure of the class
The class has three parts: the economics, the AI and practical skills training. So let me explain each of them separately.
The Economics.
What is economics? I will be using a definition you don’t often see anymore which is based on Adam Smith and Paul Samuelson. I will define economics, as a science, as the asking and answering of three questions:
What will we produce? (“production”)
How will we produce it? (“technology, labor, capital and land”)
How will we share what we produced? (“income distribution and allocation”)
You could define economics in other ways — my least favorite being “economics is whatever economists study” — but I chose to go this way because it’s historic and classical, and I think it will help me unite the class around the fields of macroeconomics and labor economics, as well as production theory. Those are, in a way, the core topics I’ll draw from.
The readings for this will cover a few areas, and let me list them now:
Acemoglu’s task-based model. There are several core contributions to the economics of AI that Daron Acemoglu has participated in. His current model of AI, “Simple Macroeconomics of AI”, is based on several articles on a task-based model of the economy written with Restrepo. We’ll be learning that model, and it will probably take me two full days, maybe three, to teach.
Industrial Revolution. I’m also going to teach them about scientific discovery as it relates to economic activity. Acemoglu also has a 2024 article with Simon Johnson on AI that gets into historical material such as the productivity of textile workers, and it is very deep and rich. Since it’s about AI, I’ll probably use that as my skeleton for going into the Industrial Revolution, but I will also go into the hypothesis championed by Robert Allen on the role of energy and capital prices relative to wages (though I know it has been the subject of pointed criticisms).
Principles Level. These will likely use chapters from principles textbooks, as the goal is fairly rudimentary. When there are models in the class, they will almost always be graphical if possible.
Economic Growth. I’ll be teaching them how GDP is measured, as well as the role of labor productivity in driving it, and noting the correlation between GDP and living standards. This is how I’ll help students understand the broader context in which labor productivity, measured crudely as Y/L, is thought to cause overall increases in living standards, with important caveats laid out by David Ricardo in the third edition of his book as well as in Acemoglu’s own writings on the topic. I won’t get into technology and economic growth at the level of a PhD class, nor will I explicitly teach the Solow or Romer growth models, but I will try to convey this to students in a way that can be put on a test. The hope and prayer is that they can learn some graphical models. I really need to find a copy of Frank and Bernanke, but I don’t have my copy anymore.
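As a back-of-the-envelope illustration of the accounting I have in mind (the numbers below are made up purely for illustration, not real data), GDP growth splits into labor-productivity growth and employment growth:

```python
# Since Y = (Y/L) * L, growth rates satisfy g_Y ≈ g_{Y/L} + g_L
# (exact up to a small cross term). Hypothetical numbers:
Y0, L0 = 20_000.0, 150.0   # GDP and employment, year 0
Y1, L1 = 21_000.0, 153.0   # GDP and employment, year 1

g_Y = Y1 / Y0 - 1                    # GDP growth: 5%
g_L = L1 / L0 - 1                    # employment growth: 2%
g_prod = (Y1 / L1) / (Y0 / L0) - 1   # labor productivity (Y/L) growth

print(f"GDP growth {g_Y:.2%} ≈ productivity {g_prod:.2%} + labor {g_L:.2%}")
```

The point of the crude decomposition is that holding employment growth fixed, rising living standards have to come through the productivity term.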
Production Functions. I’m not yet sure precisely what I want to say, except that I think I want to help them understand that inputs turn into outputs according to recipes, and that technology shapes those recipes. The inputs are capital and labor (and land and energy, but I’ll probably touch on those later). What I will probably do is use production functions to draw isoquants to illustrate the substitutability of capital for labor. I’m not 100% sure. Ultimately I need students to better understand the way in which technologies both increase demand for certain kinds of labor and cause substitutions away from other kinds, and I’m thinking isoquants could be the way to do it. But they’re not the only way, and frankly the investment it takes just to draw isoquant curves may not be worth it. I know that utility and indifference curves usually have not paid off at all except when I taught grad micro, and I worry the same could happen here. But I’m still working on that lecture.
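For a sense of what an isoquant exercise might look like numerically, here is a sketch assuming a Cobb-Douglas production function with illustrative exponents (nothing in the course readings prescribes this functional form; it is just a convenient example):

```python
import numpy as np

# Hypothetical Cobb-Douglas production function: Y = K^0.3 * L^0.7
# (the exponents are illustrative, not drawn from any reading)
def output(K, L, alpha=0.3):
    return K**alpha * L**(1 - alpha)

# Trace the isoquant for Y = 10: for each level of labor, solve
# K^0.3 * L^0.7 = 10 for the capital that holds output fixed.
Y_target = 10.0
L_grid = np.linspace(5, 50, 10)
K_needed = (Y_target / L_grid**0.7)**(1 / 0.3)

# As labor rises along the isoquant, required capital falls:
# that trade-off is the substitutability of capital for labor.
for L, K in zip(L_grid, K_needed):
    print(f"L = {L:5.1f}  ->  K = {K:8.2f}")
```

Printing the pairs (or plotting them) shows required capital falling as labor rises along the curve, which is exactly the substitution story I want the graphs to tell.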
Short-run versus Long-run. Largely I just need to help students understand basic economics, and those two concepts are important.
Income inequality and skill-biased technological change. This will cover Goldin and Katz first of all. I’ll be handing out some chapters, and we’ll be focused on really awesome figures and tables (but I’ll mainly focus on the figures and means wherever possible). I may also teach from Katz and Murphy, Katz and Autor, as well as Juhn and Murphy. Plus some of Autor’s other work and Acemoglu’s work too. These are hard papers, so I’ll explain later how I’ll tackle them.
Some miscellaneous material on AI. This will cover energy costs and empirical papers on labor. There’s not a lot of credible empirical work yet, and what exists is mostly short-run, so I’ll focus on some things I have in a collection of papers.
The goal is around 8 papers to study closely, though. And the focus, as you can see, is on what you might think of as macro and labor, both integrated and separately. And it’s very narrow. This is not a survey course of either; it’s only about AI.
I also will probably de-emphasize product markets to some degree and focus more on input markets. The focus will be on the structure of wages, as wages are how we “share” the output, though obviously the lines between input markets and output markets are fairly blurry and it’s hard to talk about one without the other. It’s just that product markets I will cover more experientially, when we work with the actual AI products and when we learn about the history of AI use cases in firms and science.
Understanding the categories of AI, and their history.
For this I have two books we are reading, which I can show you. They’re good at being balanced, skeptical, and critical, as well as at discussing the history of AI in commercial and scientific work predating ChatGPT. They’re each books you might think of as “books written by thought leaders aimed at intelligent readers in a general audience”. In other words, the books are written by computer scientists who specialize in AI for a broad population of non-computer scientists who don’t. The books are:
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell, published in 2020 through Picador. Mitchell is prominent and authoritative, with a career of work in this area, and the book is critical and skeptical, covers history and use cases, and is, I think, balanced. And as I said, she’s an expert. It’s also not a “pop AI book”, which I don’t want on the syllabus.
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor, both academic computer scientists who specialize in AI, though I think Kapoor has also been in the tech industry. This book is brand new, published in September 2024 through Princeton University Press. It covers a lot more about generative AI, is also skeptical and critical, is explanation-based, covers historical use cases, and again is written by computer scientists with authoritative opinions.
That’s going to be a bit of a challenge to develop lectures from, and the books seem to overlap a bit; I’m not exactly sure how to handle that. But I am doing it for two reasons. First, I need something with use cases, categorization of AI (e.g., neural nets, gen AI, vision, etc.), and history. And I need something balanced, aware of the hype and able to draw a line through it, even if that line is ultimately an opinionated one.
But the second reason is that studies find college students no longer read books, and I suspect that with gen AI products like NotebookLM and whatever else is coming, that’s only going to worsen. Therefore they’re going to read two books written at a level a college student should be able to read. I once heard a mother say she got her kids to eat veggies by putting broccoli on pizza. This is kind of like that: AI is the pizza, and the books are the broccoli, so I’m bundling them. I think reading is important, in other words, regardless of what you’re reading.
Ultimately, though, these two books will be covering AI, but not the sort of AI you’d get in a computer science major. They’ll help students understand things like TikTok’s algorithms and machine learning more generally, autonomous vehicles, generative AI, natural language processing more generally, AlphaGo, and recent work recognized by the 2024 Nobel Prizes. Again, narrow but as deep as we can go, with distinct classifications, history, and use cases. Enough to be literate, but still shallow enough that AI seems like magic. Any technology that you do not understand is sufficiently like magic, anyway.
Practical skills using AI.
Finally, the course is skills training. They will be learning to use AI tools like ChatGPT, Claude, NotebookLM and other things through repetition and the completion of tasks. But even more than that, they will be using these tools to teach themselves as opposed to using these tools merely to automate the completion of learning tasks.
The skills will extend, though, to programming. I am going to see if I can get students to teach themselves Python this semester using generative AI, like Claude, ChatGPT, and Gemini. They will learn Python, and they will learn to do various increasingly difficult forms of coding and data analytics as the semester goes on, using what I am going to call “prompt coding”. That is, working with Claude/Gemini/ChatGPT to undertake complex tasks, as opposed to simply automating them with AI.
Remember, these are undergrads mostly, and the prerequisite is principles of economics. Some will also be non-majors and many will be business school students. They will mostly not have taken econometrics and so have little knowledge of data analytics. So my goal is not to teach them python; my goal is for them to teach themselves python using ChatGPT/Claude/Gemini. And I think with the correct assignments that increase in difficulty as the semester progresses, it will work.
This part of the class will also have a book. It’s an illustrated book, too, but it is deceptively deep and pedagogically phenomenal. It is:
The StatQuest Illustrated Guide to Neural Networks and AI: With hands-on examples in PyTorch!!! by Josh Starmer. It just came out last week and is self-published.
Starmer is basically what you’d get if you could somehow combine Khan Academy, Pete Seeger, and Mister Rogers and had that merged person teach machine learning and statistics on YouTube. He’s a major influencer on YouTube who has been explaining statistics and machine learning for a very long time, but now he has a new book about neural nets and artificial intelligence. I had his old book on machine learning and loved it. And I really love his whole energy of trying to take complicated technical topics and make them as accessible as possible. He’s a real role model and source of inspiration for me in how I approach teaching causal inference, in fact.
I have been corresponding with Josh for several months, and he sent me an advance copy of the book in PDF form. My physical copy gets here any day now. This will be the book we use to learn about neural nets and AI at a slightly more technical level. Josh has suggested specific chapters that he thinks fit the larger class and my assignment goals, which I’ll explain below. By sticking to the chapters that support my assignments, I can ignore some material and focus on just a few things. Plus, because it includes material on Python (e.g., PyTorch), I think I can use it as my de facto Python book more or less, especially when bundled with some of his YouTube videos.
Assignments
There are four different ways that students will be assessed for grades. I designed this with full awareness, eyes wide open, that students will use AI to complete assignments. Because I believe students will go forward using gen AI to “complete learning tasks” (e.g., homework), in other words to automate the completion of learning tasks and thus allocate zero minutes to learning otherwise, and because I have my own theory of human capital in which knowledge is a function of both the quantity and quality of time spent learning, I want to guard against that. So how will I do so?
The main thing I’m doing in this class is only having graded outside-of-class assignments that are actually based on students learning to use AI. That is, I will grade them on using AI in outside-of-class assignments. But I will not evaluate them outside of class on the economics material, nor any of the actual “knowing about AI and neural nets” material, as that’s a fool’s errand given they will simply use gen AI to automate the completion of the learning task, and thus substitute away from learning altogether. I’ll explain what those assignments are in a moment.
The way I will evaluate their understanding of the books, readings, and lectures is with in-class quizzes. The point is that I actually will be encouraging them to use AI outside of class to learn the material. I am not, note, encouraging them to use AI to “complete learning tasks” (e.g., homework). Rather, I am encouraging them to use AI to do everything they can outside of class to actually teach themselves the material.
In other words, spend the same amount of time, if not more, studying, but with the assistance of AI, and then be evaluated in class, where AI cannot be used. This is my pedagogical experiment this semester, so we will see how it goes. But the point is that I am moving away from homework entirely; in a world rich with AI, I think homework is probably headed for the fossil record. Going forward, students will be evaluated in class and will use AI to teach themselves material. Learn how to study. Use NotebookLM to break down hard papers. Create study guides. Learn how to have a conversational study partner in ChatGPT. Create GPTs to help answer your own questions. I want them to use AI in a robust way that goes far, far beyond the silly ways of just “Q&A” and “rewrite my emails”. So in-class evaluations, as opposed to homework, are how I will do this, and I’ll use many quizzes and drop one so that the learning curve can be figured out sooner rather than later.
I basically am expecting them to use AI to augment their own production functions for creating the knowledge endowed inside their own minds. I am warning them about the dangers of thinking you can “automate knowledge creation”, as humans still must, and always will have to, exert labor in the form of time use to learn things. It’s not a question of whether AI can do this; it’s a question of whether they will figure it out. And I think this can be part of the value of the course: skills at becoming more self-taught, so that we can do higher-level educating in the class. And I’ll design it so that there’s as little risk of substituting away from learning as I can possibly manage.
Assessments
So then, how will I precisely assess them so that they have a grade in the class? I will be using four types of assessment tools. They are:
20% of final grade. Weekly assignments outside of class where they are instructed to use Claude/ChatGPT/Gemini to do certain things and NotebookLM to break down papers, as well as instructions to install Python, undertake basic programming tasks, work in Google Colab, and move through simple handling of data and increasingly advanced calculations, concluding with a final assignment wherein we fit our own neural net using a reasonably large and somewhat complicated dataset.
I have not done this before. I am not entirely sure I can pull it off, but I think I can. They will be learning to use gen AI on a regular, intensive basis, particularly to help them learn Python. It will therefore require a subscription. They can pick between ChatGPT and Claude, but they have to choose one of them for the semester. The point is that this is the one place there are homework assignments, and the homework will be to show evidence they used AI to do things that are otherwise not straightforward (like coding in Python and showing me the output).
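To give a sense of what that final assignment builds toward, here is a toy sketch of what “fitting a neural net” means mechanically, in plain numpy. The actual assignment will use PyTorch, following Starmer’s book; the XOR dataset and every detail below are mine, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, the classic pattern a linear model cannot fit
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units, one sigmoid output unit
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
lr = 0.5
for step in range(5000):
    # Forward pass: inputs -> hidden layer -> prediction
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: gradients of mean squared error
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The “fitting” is just the loop: predict, measure error, nudge the weights downhill, repeat. PyTorch automates the gradient bookkeeping, which is why the course can scale this up to a real dataset.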
25% of final grade. Quizzes based on outside readings and their work.
The in-class quizzes will be fairly regular, perhaps weekly, and will cover the readings and the previous lectures. The focus will be on them having successfully used AI outside of class to help them master material that is otherwise quite difficult to master. Learning Acemoglu’s task model is going to be a challenge, for instance, without the use of ChatGPT, Claude, and NotebookLM, but even if they use those tools to write summaries, that does not mean they learn even that simplified level of knowledge. So I’m putting a lot on them to learn the tools to teach themselves. I’ll be trying to provide as much guidance as I can on that, but that’s the goal.
45% of final grade. Three exams, each worth 15%.
I’ll be giving three in-class exams. Again, AI cannot help them on this.
10% of final grade. Class participation.
I’ll be taking roll. They get three absences for free, and each one after that will reduce their class participation points at an increasing rate (i.e., rising marginal cost). So the fourth absence will reduce points by 1, the fifth by 3, and the sixth by 6. At that point, they lose all their participation credit.
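Written out as a tiny function (assuming participation is worth 10 points, consistent with its 10% weight), the rule looks like this:

```python
# Sketch of the attendance rule: three free absences, then penalties
# of 1, 3, and 6 points at rising marginal cost, which exhausts the
# participation credit by the sixth absence. The 10-point total is
# an assumption matching the 10% grade weight.
def participation_points(absences, total=10):
    penalties = [1, 3, 6]  # cost of the 4th, 5th, and 6th absences
    excess = max(0, absences - 3)
    lost = sum(penalties[:excess]) if excess <= 3 else total
    return max(0, total - lost)

for a in range(8):
    print(f"{a} absences -> {participation_points(a)} points")
```

The convex penalty schedule is the point: each additional absence hurts more than the last, which is exactly the incentive structure I want against chronic absenteeism.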
I think this is important, frankly, because last semester I had a few students who really got caught in an equilibrium of chronic absenteeism. Plus, if I take daily roll, it helps me learn their names. It’s also a way for me to help pad the final grades, which, given that this is such an experimental class, I think is going to be crucial.
No Computers or Tablets in Class
The last thing I will add is that despite this being a fairly technologically intensive class, I will not be allowing them to use their computers or tablets in class. They can use them outside of class, but not in class. I’ll go over why with them early on, and allow them to opt out of the class, which isn’t required anyway; there are plenty of alternative classes for them to take. Instead, they’ll have to take notes using pencil and paper, like our ancestors did.
Conclusion
And that’s it. It’s about learning economics as it relates to AI; it’s about learning about AI itself (without reference to economics) enough to become a literate citizen going into the workforce; and it’s about gaining practical skills in machine learning and data analytics using AI, as well as teaching yourself using AI tools.
I’m excited about teaching it. I think it’s going to be a really valuable course for our students, I’m going to learn a ton teaching it, which excites me, and I think it can even help with student research, which Baylor really wants us to push for more. Even if it’s just deepening their knowledge of Python, programming, and handling data, that can help with research, but I think the AI tools more generally will help level up our students on that dimension too.
So wish me luck! I’m super fired up about this. Please share and follow and subscribe if you like this content (or even if not)!