Scott's Mixtape Substack

A Simple Explainer of Acemoglu’s Simple Macroeconomics of AI

scott cunningham
Apr 16, 2025 ∙ Paid
I recently taught a new paper by Daron Acemoglu titled “The Simple Macroeconomics of AI” in my “Economics of AI” class, and I think it’s of enough general interest to warrant an explainer for my substack. If you want the slide deck, you can find it here, though Dropbox is acting weird, so it may only let you view it rather than download it. My approach today follows my class’s emphasis on the historic skill-biased technological change literature in labor and macroeconomics, as well as my own idiosyncratic interest in people’s biographies within economics.

The goal is to highlight its core ideas and implications, leaving the reader with a decent handle on the paper’s key contributions without getting overwhelmed by details. To do that, I’m going to focus on a few things. Acemoglu’s paper provides a realistic method for forecasting AI’s impact on economic growth, grounded firmly in three foundational pillars:

  1. His adaptation of the task-based framework introduced by Autor, Levy, and Murnane (2003), with notable contributions from his own work, particularly with Pascual Restrepo.

  2. Hulten’s theorem, a simple yet powerful device for mapping micro-level productivity improvements directly to total factor productivity, and thus to macro-level economic growth.

  3. Current empirical productivity estimates that can then be plugged into Hulten’s theorem to make predictions about AI’s effect on GDP.

Thus the paper really has two components: Acemoglu’s own task model, based on Autor et al. (2003) and his own work with Restrepo, and Hulten’s theorem, which he uses to make predictions. If you understand those two things, you understand the paper pretty well. But the first, at least, is more easily understood against the broader context of skill-biased technological change, which is why I start there.
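To make the Hulten’s theorem pillar concrete, here is a minimal sketch of the back-of-envelope logic: to a first order, the aggregate TFP gain equals the GDP share of impacted tasks times the average cost savings on those tasks. The specific input values below are illustrative placeholders of my own, not the paper’s actual estimates.

```python
# A Hulten-style back-of-envelope for AI's aggregate TFP impact.
# First-order logic: TFP gain = (GDP share of impacted tasks) x (avg cost savings).
# All numeric inputs below are hypothetical, chosen only for illustration.

def tfp_gain(task_share_exposed, fraction_profitable, avg_cost_savings):
    """First-order TFP gain from micro-level cost savings on AI-impacted tasks."""
    impacted_share = task_share_exposed * fraction_profitable
    return impacted_share * avg_cost_savings

gain = tfp_gain(
    task_share_exposed=0.20,   # share of tasks exposed to AI (illustrative)
    fraction_profitable=0.25,  # exposed tasks profitably automated (illustrative)
    avg_cost_savings=0.25,     # average cost saving per impacted task (illustrative)
)
print(f"Implied TFP gain: {gain:.2%}")  # 0.20 * 0.25 * 0.25 = 1.25%
```

The point of the exercise is how small the product of plausible shares turns out to be; swapping in the paper’s own empirical inputs is what generates Acemoglu’s modest headline forecasts.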

To keep with my Substack experiment tradition, I asked Cosmos to flip a virtual coin to decide if this post would be paywalled—heads for paywall, tails for free. After two heads and one tail (best two out of three), the cruelty of chance dictated that this post would indeed be paywalled.

But if you’re not already a paying subscriber, now’s a great time to become one! I genuinely appreciate the support. To my current subscribers: thank you deeply; your support means everything. To my non-paying subscribers: thank you as well. I appreciate all support and the chance to contribute to the community. And to show my appreciation, here is a video of Alice In Chains’ “Don’t Follow” from their 1994 album Jar of Flies, which for many of you is either the best album you haven’t thought about in decades or the best album you’re about to listen to for the first time.


Intellectual Origins of Acemoglu’s Simple Model

The paper’s author, Daron Acemoglu, hardly needs an introduction—but just in case: he was recently awarded the 2024 Nobel Prize in Economics, along with Simon Johnson and James Robinson, for their groundbreaking work on how historical political institutions shape the long-run economic growth of nations. That prize came as no surprise to anyone who has followed Acemoglu’s career. He has distinguished himself with field-defining contributions not just in institutional economics, but also in growth theory, labor markets, and the economics of technology and automation.

Much of that expansive vision is captured in Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, a book he co-authored with Simon Johnson and published in 2023. It’s an accessible synthesis of a core argument he’s developed over decades: that technological progress doesn’t automatically translate into shared prosperity—it depends on the institutions and power structures through which it’s deployed. Acemoglu’s ability to connect micro-level insights (like tasks, technologies, or labor flows) with macro-level consequences (growth, inequality, institutional change) has made him one of the most influential economists alive.

I’m not in the fringe group who thinks he might become the first person to win a second Nobel Prize in economics—I’m squarely in the mainstream of that group.

So where does this new AI paper fit in Acemoglu’s intellectual trajectory? It lands squarely at the intersection of two rich traditions in labor economics: the study of skill-biased technological change and the task-based modeling of labor and automation. To appreciate the depth of his contribution, it’s worth briefly revisiting how economists have come to understand technology’s impact on labor markets over the past few decades.

Starting in the late 20th century, economists Claudia Goldin, Larry Katz, Alan Krueger and other labor economists spearheaded research on skill-biased technological change (SBTC), a powerful explanation for rising income inequality observed post-1975. SBTC captured how technological advancements disproportionately favored skilled labor, amplifying inequality as educational attainment lagged.

David Autor, Frank Levy, and Richard Murnane’s (2003) “task-based model” evolved from this insight, shifting our understanding from broad skill categories to the specific tasks workers perform. They showed that computers automated routine tasks, leading to the well-documented “hollowing out” of middle-class jobs. To see part of their evidence for this hollowing out caused by the computerization of routine work, I’ve reproduced a figure from the paper below.

As an aside, Autor et al.’s task framework remains one of my favorite papers. It brilliantly illuminates how production is best thought of as tasks rather than simple labor aggregates. David Autor’s recent 2024 paper, in fact, builds on that earlier 2003 work and informs his optimism that generative AI might reverse the impact computers had on labor market inequality in the late 20th century. He believes that generative AI, specifically, can “democratize elite expertise” used in mass production by re-empowering the middle class to engage in non-routine cognitive work that is otherwise inaccessible because of obstacles like education and various labor market frictions. But that’s a post for another day.

So to summarize, I think the way to read Acemoglu’s “Simple Macroeconomics of AI” is to place him squarely in the wake of that much longer literature on skill-biased technological change and the task model that Autor et al. (2003) developed, as well as his own work with Pascual Restrepo on automation and technology (which I haven’t yet done a deep dive on, but will soon).

Keep reading with a 7-day free trial

Subscribe to Scott's Mixtape Substack to keep reading this post and get 7 days of free access to the full post archives.

© 2025 scott cunningham