You may or may not have seen this article. A PhD student in health economics at Minnesota was dismissed from the program after a university review. Faculty suspected he had used a large language model such as ChatGPT on his preliminary exam. The article is long (maybe even longer than it first appears, due to the placement of ads and pictures), and I encourage you to read it, but that is not the point of this substack. The point of this substack is to share my syllabus on the "Economics of AI," which I think I'm finally finished with, to offer up some reflections, and to share how I decided to incorporate some material on the ethics of work that this article made me think of.
Some of what the professors claimed they uncovered with this student was pretty similar to things that happened to me last semester, which I documented in this old substack. The difference was that faculty reported numerous alleged violations across a few classes and with a few different faculty, whereas mine was in a single class. Still, one of the questions that comes up in the article about the PhD student is how professors can determine the use of LLMs when it directly violates the prohibited use cases in their classes and programs. Is it even possible, and what then should be done?
I take away three things from this article that I want to discuss:
Create an explicit AI policy that you’re comfortable with
Consider how these technologies may have changed the ability of traditional learning tasks to create knowledge
Encourage students in ways that help them better understand the value of work, so as to contextualize AI
Let me share my thoughts on each of these now.
Create an Explicit AI Policy and Put It in The Syllabus
The first, most obvious thing, which is echoed in the article above, is that faculty should have an explicit AI policy in the syllabus. We have all heard it said, or said it ourselves, that the syllabus is a contract between the students and the professor. If the explicit AI policy is not something a student is comfortable with, then they should have the freedom to sort into a different class.
I encourage you to frame the AI policy to the students in a simple way. First, have a section stating precisely what the reasons for your AI policy are. I think the reasons should be grounded in your own values as a professor and in what you see as the goal of classroom education, at least as it relates to your own class. So something related to your own philosophy about education and pedagogy is probably a good idea before listing the allowed use cases and the prohibited use cases.
Second, I would encourage you to use the language of “allowed AI use” and “prohibited AI use”. It’s very clear language that students will understand, plus it allows you to state things positively and negatively.
Third, you should have the courage to trust your judgment as you do this. It is not your job to do things that make the students like you. It is your job to do things that are consistent with your own objectives in that class. You are the professor. You have gone to university and graduate school, too. You got a job. You are competent. You should therefore trust your own judgment. If you are comfortable with use cases ABC but want to prohibit use cases XYZ, then that's your call. This is your class. Lay down the rules of the game in clear terms, and let yourself feel that freedom, as you were presumably hired because of your competency.
It’s my strong opinion that if a professor feels strongly today that under no circumstances will they allow AI in their class, then they should do that. I don’t allow electronics to be used during class, but other professors allow it. Who’s right and who’s wrong? Each person is right. It’s their class, and if they feel that they need a setup to be a certain way in order to achieve their goals, then they absolutely should do that, whatever it may be.
Education vs. Learning Tasks, and How AI Highlights the Differences
I am not an expert on education or pedagogy, even though I think about it a lot. I am not sure, for instance, if the way that I teach is the maximally effective way to teach, but each semester, I do try to push myself to improve. Still, what I am about to say may be wrong, but it’s what I think. Until I find a better name, I will just call this my “human capital theory of learning based on time use under pressured work”.
First, I am an economist, and economists think in terms of models. These models are stylized descriptions of the real world, and to quote George Box, they are not true. They either are or are not useful. And usefulness, when it comes to theories, depends on the aim of the actions that will be taken based on the theory's predictions. So what, then, is my action? To design my classes in such a way that students are empowered to learn as much as they can of the content we are covering. But how is that accomplished? That is where my economic model comes in.
Assume that people learn by spending time on learning tasks that are difficult. The learning tasks cannot be too difficult, nor can they be too easy; they must be appropriate for the students that are in your class, and no one else's. Various technologies assist the student, too. For instance, in a math class, pencils and paper help students learn mathematics, for without both, it is very hard to imagine the typical student progressing. That is because mathematics requires doing problems and learning rules repeatedly. Very few of us can simply listen to a lecture and immediately comprehend it without practice and taking notes.
So, if we mix our time with these tools, then we can learn, but that time must be directed at designed learning tasks, like homework, in order to do so. If we call the "learning task" (e.g., homework) something the student produces, then we can think of this entire process using a piece of jargon from economics called the "production function". So then what is a production function? I'll show you an example:
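q = f(k, l)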
That’s a production function. A student produces the output from a learning task, such as a completed homework assignment, using tools, labeled as k (“capital”), and time l (“labor”). If we say that you need to have at least some capital and some labor, otherwise you cannot make anything, then it means exerting zero time on learning tasks will not produce a completed learning task, except in the cases of cheating. Furthermore, if we assume for simplicity that you also need some tools, like pencils, then we are saying that the production function requires at least some tools.
I won’t write down a particular equation for this production function, but note there are many different equations that describe many different production functions from economics. Ideally they describe the literal way in which capital and labor combine to make some thing. Sometimes that is easy to describe, even if it does require an engineer, but usually it is not. Certain production functions are thought to approximate the output of a country, but others may be more useful at understanding micro level behavior by firms or households.
The q in this case represents the number of learning tasks. The more time and the more capital you spend on learning tasks, at least in this case, the more learning tasks you make. The shape of that growth as you vary l and k is determined by the precise mathematical form, but for now I will just say that the hope is that this is true:
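E = E(q), with dE/dq > 0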
where E is a student’s own amount of intangible education and learning from spending time in costly work on learning tasks, q.
Well, assume that to complete 1 learning task — that is, q=1 — a student has to spend 5 hours on it at minimum (i.e., l=5). Furthermore, assume that this is a great learning task (i.e., an excellent homework assignment) and that time is a productive input in producing learning in the student. In other words, not all learning tasks are the same. Some can, with costly time, produce knowledge in the student, and some cannot.
But what then is time? In this case, think of it as "time under pressure". That is, think of it like time in the gym lifting weights. If you do not engage in exercises that are difficult, then you cannot build muscle. If they are too difficult, you cannot complete the reps. So learning tasks must be designed to be appropriate, and the point here is that the professor is herself designing the learning task to be completed, q, in such a way that time under pressure is required.
Large language models, and specifically ChatGPT and Claude, allow you to complete certain learning tasks even while setting l=0. That is, you can complete certain learning tasks, even at the college level and higher, using zero of your own time. How so? Consider the isoquant, which is a slice of a three-dimensional production function, along which one produces, not education, mind you, but the output of a learning task. For the sake of illustration, I'll say that at fixed k (capital), and with the aim of producing a single finalized homework assignment (q=1), contemporary chatbots are perfect substitutes for a human's time. When that is the case, the slice of the production function at q=1, called the "isoquant", is linear and looks like this:
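l/5 + C/β = 1, a straight line in (l, C) space from (l = 5, C = 0) to (l = 0, C = β), where C is the chatbot's time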
This straight line means that you can produce the learning task by spending any combination of human time and chatbot time on that line, including no human time at all, because a linear isoquant means that machine time and human time are perfect substitutes.
When a student chooses l=0 and still produces q=1 completed learning task, it means they automated the learning task entirely by using a chatbot to do the whole thing. I put the amount of time the chatbot spends on it at β. How large is β? As of January 20th, 2025, it's basically almost no time at all.
So how much does economics say the student will spend on the learning task? Well, if they are utterly without any personal reservations about completely automating the production of the task, they will choose the point on that isoquant that has the lowest cost. The cost of their own time, for instance, is the foregone value measured by the next best alternative, which may be studying for a different class, leisure, or work-related tasks. So let's say that with complete specialization in their own time (l=5, C=0), the cost to the student is $100. Maybe for simplicity just assume they have a part-time job that earns them $20/hour, and thus had they not been focused on homework, they would've worked and earned that much. But that's just meant to produce a number — in reality, it's the sacrifices, how they think of those sacrifices, and how they weigh them in the calculus of their own minds, that determine the cost of 5 hours of their own time and zero hours of a chatbot's time on producing that single assignment.
But what if they spend zero hours on the task and set C=β? What happens then? First, according to this isoquant, setting l=0 and C=β will produce 1 single learning task (i.e., q=1). But at what cost? Well, here's the trick — they do not rent these chatbots at a piece rate. Rather, they will at most be paying $20/month for a very large number of queries. Which means that the marginal rental rate on chatbots is $0, and thus the cost of producing a single task is $0. In other words, the $20/month is a fixed cost, not a marginal cost. And so the production of a single completed learning task costs them $0, freeing up their own time entirely (i.e., l=0) by using a robot that they do not have to rent.
If you can produce the same learning task at either $0 or $100, economics predicts that rational students will pick the cheaper combination of machine time and person time, which in this case is to have a chatbot entirely automate the production of the learning task. I call these automated learning tasks. The only reason a rational student would not do this is if they are personally constrained in some way, such as through a work ethic based on their own private beliefs, or perhaps an honor code that prohibits that form of specialization. And detection is near impossible except in the marginal, low-effort cases where there are clear signs of use (leaving your prompt in the learning task can be pretty damning evidence, as can hallucinated cites, which is what happened in my class last semester), so those personal constraints are carrying most of the weight.
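Here is that cost comparison as a short Python sketch, using the numbers from the running example (the variable names and the zero marginal price are my illustrative assumptions, not a calibrated model):

```python
# Cost of completing one learning task (q = 1) at the two corners
# of the linear isoquant. Numbers are from the running example.

wage = 20.0               # opportunity cost of the student's time, $/hour
hours_own = 5.0           # l = 5: do the task entirely yourself
bot_marginal_price = 0.0  # the $20/month subscription is a fixed cost,
                          # so one more query has a marginal cost of ~$0

cost_self = wage * hours_own         # $100 of foregone earnings
cost_automated = bot_marginal_price  # $0 at the margin

choice = "automate" if cost_automated < cost_self else "do it yourself"
print(cost_self, cost_automated, choice)  # 100.0 0.0 automate
```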
But then why does this matter at all? What is wrong with automation? Well, first of all, if you look through my references below, you’ll find 50 articles and books for this “Economics of AI” class, many of which are about technologies and automation having an ambiguous effect on the overall economy and the structure of wages and employment. So right off the bat, from a macro perspective, automation does not necessarily increase GDP. And one of the things thought to increase GDP is human capital and education.
Well, what if E ("education and learning") is not merely based on learning tasks q, but is based on human time use specifically? That is, what if the production of learning tasks is not itself productive, but rather learning tasks are merely the endogenously designed sequence of tasks that, at a given point in history, with a particular technological environment, will produce E? What if E[k, l=0, C=β] = 0, but E[k, l=5, C=0] > 0? Then complete automation of learning tasks will produce completed learning tasks and no human capital.
But then, what about this possibility? Since chatbots are free at the margin, but learning is based on human time use, what if you were to set l=5 hours and yet still set C=β? What then? Well, it's possible that the time spent studying may allow you to complete the learning task and learn more too, such that this might (or might not) be true:
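E[k, l=5, C=β] > E[k, l=5, C=0]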
In other words, it’s at least hypothetically possible that chatbots make one’s own time use more productive such that even with completing the learning tasks, they learn more, despite spending the same amount of their own time on the learning task.
I think this is what professors and students both should be reflecting on. The goal is not and never has been to produce homework. Homework is not socially valuable. No one cares about completed homework assignments. There is no market for homework except to allow a student to have someone else do that homework for them. Black markets for homework would not exist, in other words, in a world where professors did not require them. Homework is not the point. Education is the point.
Ethics of Work
The last point I want to make is from Adlerian psychology. Adlerian psychology states that we all have goals, and that we even produce feelings in order to achieve those goals. But here's the thing — we all subjectively create those goals, and they may not be at all what a person thinks they are. It's entirely possible that a person's goal is to not have relationships with people, for instance, and that they thus produce anger so they can alienate people with insults and inappropriate behavior that in fact does alienate people.
Economics also believes, not so much that people have goals, but rather that people have preferences. These preferences are entirely the domain of the person and are, as far as economists are concerned, a complete black box. Thus if a person prefers to be alone, and needs anger in order to make that happen, then that's just the reality.
The one thing that economists really don't have, that psychology does have, is a theory of change as it relates to those private preferences. Preferences are in fact, in Stigler and Becker's highly influential article "De Gustibus Non Est Disputandum," explicitly said to be exogenous, meaning never changing. This was designed to be a methodology that would promote the Beckerian imperialism wherein economics could become a fully formed behavioral science. But put that aside for now. The point is, economics has its own way of framing things, and not all of it is consistent with psychology, even when methodologically they do seem a bit similar.
Adlerian psychology breaks down things, interestingly, into tasks, just like Acemoglu and Restrepo. These tasks come in three forms and are called “life tasks”:
Work
Love
Community / Friendship
And in each of these, one of the things that comes up is that they do not overlap at all. So that is interesting — these are distinct spheres of life, and we have tasks we are responsible for in each one.
Second, putting aside for now the idea that we may produce our own subjective states of mind, including emotions, in order to achieve subjective goals, as that's in and of itself quite provocative, Adlerian psychology is extremely focused on freedom and courage. The freedom that Adler described, though, was the freedom to be disliked, hence the name of the popular book that came out in 2018 and was updated in 2024. This freedom requires courage, the courage to be disliked, because the view holds that it is not our responsibility to ensure that others like us, and in fact all attempts to make that your goal are doomed to a kind of imprisonment. What people think of us and how they treat us are their tasks, not ours.
If you listen to the audiobook, if nothing else, it's pretty fascinating, because not only is it pretty powerful as a way of thinking, it also routinely contradicts economic theories of behavior. Specifically, economics is very focused on incentives and competition, and Adlerian psychology warns about both. Economics has a black box called "utility," which quite literally has no meaning other than to be a number on a number line ranking one's preferences over bundles of goods and services. But Adlerian psychology states that all of our problems are interpersonal problems.
Where am I going with this? Where I'm going is that I decided that in addition to covering public-policy topics related to "the ethics of AI," I will also be talking to students about different viewpoints on work itself. Ultimately, the freedom I think I do agree with Adler on is that students should feel the freedom to do what they want without concern for what others think. That is not a get-out-of-jail-free card to engage in malfeasance, but since so much in the Adlerian view of human nature is that our goals are already about trying to gain favor and popularity within some given reference category, the point is to find a way to address that.
Now I don’t know if bringing this into my class will help, but I keep thinking to myself about the case of the PhD student. Ultimately, if one has no scruples about violating an AI policy, because after all it is really an honor code and future low effort students will probably learn from failed cases (putting aside this student whose situation I know nothing about) to cover their tracks. Our ability to “catch” people violating the AI policy is probably due to cohort effects and novices. In time, that won’t be true because not only will the technology improve, but cohorts will come to college with significant more human capital in using AI.
Which brings me back to the decision to bring in alternative perspectives on work and critical material about human nature. First, I think it'll help to have something else that's credible to contrast with economic theories of work and consumer theory. That alone, I hope, can help elevate those discussions. But second, I think it's more than likely appropriate to bring in a discussion of AI ethics that is more than just public policy. The one thing people can absolutely change is themselves. Most people cannot change tax policy or antitrust laws, though. Which is all the more reason for students to begin thinking about work in ways that can inform their own use of AI, because if I'm right that there are tremendous upside and downside uses of AI, including the temptation to violate AI policies and set l=0 and C=β for the sole purpose of acquiring credentials, not human capital, then why not encourage reflection and critical discussion about that too?
Conclusion
Here is my syllabus for the "Economics of AI" class I'm teaching this semester (starting tomorrow!). You'll see I have 50 articles and books, but if you read the fine print, I will only be having them read around 10 or so. The class focuses on economics, AI, and skills development in self-learning and Python programming with the assistance of AI. There is an explicit subsection, 6.4, called "AI Policy" if you want to see that. The only part that I have fully figured out so far is the topics, so ignore the course outline itself, as I'm still working on it. But here is everything up to the references.
And then here are the references. It's 50 articles and books, like I said. This is mainly for me; I will be framing the long-run class around these readings for the most part. This semester it will be more constrained since it's a new prep. But you can see that my class will be heavy on macroeconomics and labor economics, linking both to technology, covering the economic history of the Industrial Revolution, automation and the computerization of work, skill-biased technological change, and the task model I mentioned. It also will cover some empirical papers, though I am more interested in descriptive papers showing trends and differences than I am in RCTs, given that we are really in the short run at the moment, and it's the long-run effects of these technologies that I think are the more relevant question, not the short-run ones. After all, the effect of technology is very different when some inputs are fixed than when all inputs are variable. Still, I will discuss some RCTs too.
And of course I will be relying heavily on Mitchell's 2019 book about AI, as well as some surveys on LLMs to update it, to help students learn about the history of AI, its different applications, the different fields within it, and the way that she thinks critically about it. And I will also be requiring some self-teaching, both on these hard articles using tools like NotebookLM and on how to program and work with data using Python. How far I go on the latter is to be determined.
But that’s it! I hope this was helpful. I wrote this entirely stream of consciousness in one sitting without any edits! Though technically it is also a second draft from a substack I wrote yesterday that I ultimately scrapped. Good luck everyone! I wish you all the best on your academic journeys this semester!