Effect of AI on learning versus effect of AI on completed learning tasks
I’m all in on large language models (LLMs). I think they’re obviously very important, and the tens of billions in social resources being poured into them this year alone suggests they are likely to become a major part of our lives. They are becoming inextricably connected with production processes, and their impact on labor is already being felt, even after so short a time. But elasticities grow in the long run as fixed inputs become variable, and there is no telling, other than with a crystal ball (aka an economic model), just where this is going.
That said, I worry about it too. I am excited thinking that maybe it will be a tool, like a wand: if I just learn the spells, I can do things with it I never would or could have done otherwise. But I also worry, because this tool allows me to achieve tasks with fewer time inputs, and that includes “learning tasks” like reading an article or working through a proof. If reducing the time inputs needed leads me to greater understanding, that’s awesome, because maybe, like Neo in The Matrix, I will rapidly accumulate knowledge that would otherwise have taken me much longer to learn. But what if the time inputs are precisely what you need in order to learn at all? What if LLMs allow me to accomplish a learning task without learning the material?
In my model, learning tasks (e.g., homework) and learning were so closely intertwined that you couldn’t do one without doing the other. Historically, the time inputs needed for learning tasks created human capital, because human capital was, and I suspect remains, a function of time inputs. Do enough homework problems and eventually you learn the material.
The “learning task” as an objective and “learning” as a process would have been impossible to separate a year and a half ago. But now they are separated. And the questions are:
1. Does AI reduce the time inputs necessary for accomplishing learning objectives (e.g., completing homework)?
2. Does that reduced cost of creating a finalized learning product itself mean more, less, or the same learning (i.e., human capital) was accomplished?
3. And when should we care, and when should we not?
Learning tasks and learning are not the same
My behavioral model is a version of Gary Becker’s many writings about human capital and time use. I believe that learning requires time inputs, both in quantity and in quality, and that it builds on prior time inputs — call that human capital. These time inputs have alternative uses (i.e., an opportunity cost of time). And for reasons left unspecified here, I’ll also assume there is a point at which each additional (marginal) input of time begins to exhibit rising marginal costs of learning.
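A minimal way to write that setup down (my own notation, a sketch rather than Becker’s exact formulation): let t be time devoted to learning, H the human capital it produces, and w the opportunity cost of an hour of time.

```latex
H = f(t), \qquad f'(t) > 0, \qquad f''(t) < 0
```

Diminishing marginal product, f''(t) < 0, is the rising-marginal-cost assumption restated: each extra hour buys less learning than the one before, so a learner facing opportunity cost w rationally stops adding hours once the marginal value of f'(t) falls to w.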
As I said, when I think of human capital, I think of that malleable and invisible construct of knowledge in a person’s mind. And human capital increases with directed time inputs on the path to achieving learning goals. Maybe that learning goal is getting a high score on an exam. That requires studying. The task — a high score — requires using time inputs intensively, well beyond, I suspect, what any rational person would otherwise choose. And so to create the output (a high score), you used time in directed ways that coincidentally also created human capital.
This is not obviously still the case with LLMs, because LLMs reduce the cost of learning tasks but may not thereby create human capital. You can do the homework using fewer time inputs, but if the exam is in person, will the person exhibit signs of human capital accumulation? Maybe, maybe not.
The only prediction I feel truly comfortable making is that the cognitive load of using time to perform learning tasks has fallen with generative AI. But what I’m not as sure about is whether that means LLMs necessarily increase a person’s stock of human capital. If human capital is a function of intensive time inputs used for learning tasks, and you now use fewer time inputs for those tasks, then you may complete more learning tasks while accumulating less human capital.
If it helps, you might use the phrase “free riding off the LLM.” Think of the task as getting to the top of a mountain, and suppose there are two different sets of tools we could provide a person with. I could give a person durable ropes, carabiners, clamps, and chalk. Those forms of capital don’t “do” anything. But they can be used to create skill — skill at mountain climbing.
Now consider an elevator that someone builds into the side of a sheer mountain wall. This elevator rises and lowers on a durable, unbreakable thread. It too can get someone to the top of the mountain. It too is technology. It too is capital. But it won’t create the same form of human capital; in fact, it may not create any human capital at all. It is a substitute for human capital, whereas chalk and carabiners are complements. The climbing gear raises the marginal product of time for creating skill; the mountain elevator doesn’t.
Well, is the elevator technology good or bad? It’s neither. It helps people get to the top of the mountain using fewer time inputs and less energy, leaving them more of both for other uses. If the goal is “get to the top of the mountain with the least time,” it might be the more efficient technology. But if the goal is “teach a person to become a skilled mountain climber,” it fails.
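The complement/substitute distinction can be made concrete with a toy production function. This is purely illustrative — the square-root form and all numbers are my invented assumptions, not anything estimated — but it captures the mechanism: gear multiplies the marginal product of climbing time, while the elevator completes the task with zero climbing time and so produces zero skill.

```python
def skill_from_climbing(hours: float, tech_multiplier: float = 1.0) -> float:
    """Human capital produced by hours of climbing practice.

    The square-root form is an arbitrary illustrative choice that gives
    diminishing returns to time; tech_multiplier models a complementary
    technology that raises the marginal product of each hour.
    """
    return tech_multiplier * hours ** 0.5

# Ropes, carabiners, and chalk are complements: same 100 hours, more skill.
bare_hands = skill_from_climbing(100)        # -> 10.0
with_gear = skill_from_climbing(100, 1.5)    # -> 15.0

# The elevator is a substitute: the task ("reach the summit") is completed
# with roughly zero climbing time, so no climbing skill is produced.
with_elevator = skill_from_climbing(0)       # -> 0.0 (summit reached, no skill)
```

Note that the elevator does not enter the skill function at all — that is the point. It changes the cost of the *task*, not the productivity of the *time*.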
The point is — historically, the learning task and the human capital were more or less highly correlated, because you couldn’t achieve the task itself without the human capital. You could not get to the top of the mountain except by climbing it, so tools that made you a better climber also increased human capital in climbing.
But with AI, that’s not the case. The human capital isn’t necessary for certain learning tasks anymore, and if we aren’t aware of that, and if we aren’t completely clear about what we as educators want for our students (or ourselves), then we will set up objectives that ironically reduce human capital even while making successful completion of learning tasks more likely, and maybe of higher quality too.
It’s likely too early to say what my learning tasks are — but the distinction is helpful. I think I have always been driven by the sheer joy of learning. And I want that for my students. I want them to experience learning. Some of the most valuable, transformative intellectual experiences of my life came from painstaking time spent on a problem, slamming my head against failure repeatedly, sometimes for months, until I had a breakthrough. It wasn’t merely that I found some solution; it was the personal growth, the change. It seemed to change me. I will never be the same, for instance, because of something that happened during my micro prelim, where I solved a problem I’d never seen before and the solution utterly astonished me. But that could only have happened because I’d spent six weeks, six days a week, eight hours a day studying for that prelim, doing nothing but old problem sets, old exams, and, most importantly, tons of questions in the back of our book where I had no access to the answers.
And that is what concerns me. In economics, my field, what matters is the personal growth, the awe and the wonder, which comes not from learning facts but from solving so many problems after a long time of trying and failing.