16 Comments
Ksander:

Many interesting ideas, starting with the gap between perceived and actual value: this highlights that the people who will get the most out of AI are those with a strong sense of imagination and visualization, a skill that universities have not been particularly great at teaching (writing papers makes you a specialist, not a guaranteed creative thinker). And while decks are indeed a great way to get the ball rolling, reading your (excellent and not manic at all) essay, I am convinced faculties will have to start by reinventing and reimagining themselves from head to toe to drive effective AI adoption. Or else ... it will come from the students anyway: they won't wait for their teachers to tell them how great Claude Code is, because they will have been using it since they were 14, if they have parents working as engineers or, at the very least, with a strong sense of the tools' value. And if the university doesn't adapt ... could it simply die and become obsolete? I do think there should be a strong sense of urgency among deans all over the world to understand what a future driven by agents looks like for the institution. If not, they may be the biggest losers of this next round of innovation. "Grandpa! You used to go to the university??? And listen to a professor for hours?? OMG ..."

scott cunningham:

Yes, totally. I think decks draw someone in because they almost can't get worse, and they let people paint ideas with their words. So it taps a lot of things. But it's a high-value, high-time-use, high-cost task with large returns, which faculty do not do well and probably won't be ashamed to admit they want to do better. So I think it's win-win, plus in cowork mode you can limit the damage. But I agree it's just the gateway drug.

Dr Sam Illingworth:

Yes Scott! Thank you so much for writing. Why do you think AI triggers repugnance? I have two theories: (1) it challenges the very nature of universities (why pay $50,000 for this piece of paper?), and (2) it levels hierarchies in a way that upsets how, sadly, many academics like to teach (a one-way deficit model rather than Vygotsky's argument that the more knowledgeable other is transient). The more time I spend with Claude Code in particular, the more I realise that it has (for me at least) changed research forever. And I say that as a Full Professor, without any hyperbole at all. 🙏

scott cunningham:

I don't know. I don't know if the uncanny valley is real or just a funny joke on 30 Rock, but I suspect there's something disgusting about relationships with inanimate objects that are almost human. We do tolerate pets, but maybe that's because pets are clearly not human.

Richard Devine:

You touch on it in this post. Submissions will obviously increase, potentially by quite a bit (5-10x or more?). I've been talking with colleagues about what the new status quo will be for research. Is archival research dead? If attention, prestige, and traditional peer review are the bottlenecks that cannot be appropriated by AI, then what? An even greater premium on expensive experiments, special-access proprietary data, and elite connections? I'm not sure that these AI agents will result in leveled hierarchies.

scott cunningham:

Yeah, the diminishing-returns concepts seem useful here. What is it that can and cannot be replicated at scale through sheer repetition? Avoid the things that can. Why? Because there are only marginal gains to be made, so the marginal value is low even if the marginal cost is too. I doubt ten thousand minimum wage studies get us anywhere anyway, tbh. So where are the projects way up the demand curve, where the marginal benefits are super high? At minimum they'll be things that cannot be reproduced at AI scale, though the fact that AI cannot do them doesn't make them valuable either. It may put a huge premium on admin datasets, which already carry a huge premium. The traditional bottlenecks are likely not resolved by Claude Code, because, as best I can tell, it mainly scrapes the web for datasets. But I think you're assuming efficient markets there: for anything not already written within six months when the data is lying around, I suspect the market has already spoken.

All of this feels like gaming and overthinking things too, but the thing is, you make decisions to invest in topics that take years of your life, with real uncertainty on the back end. Plus, tbh, the noise is going to be impenetrable if there's a 5-10x increase in submissions, even if it's mostly left tail. That's still a busy editor rejecting at the desk at a rate they've never had to deal with. I think Katz reads those papers before he rejects at the QJE, but that's at historic volumes. Can he scrutinize at the same perceptive level if he has to reject at 5-10x the rate? There aren't enough hours in the day; at some point he has to hire someone else just to desk reject! So what does that do to the process? It can only make it noisier, I think. I don't see how it makes it better.
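To put rough numbers on the hours problem, here's a toy back-of-envelope in Python. Every figure in it is invented for illustration (the baseline submission volume, the minutes per desk decision), not an actual QJE statistic:

```python
# Toy back-of-envelope: all numbers below are invented, not real journal data.
SUBMISSIONS_PER_YEAR = 1_500   # hypothetical baseline submission volume
MINUTES_PER_DESK_READ = 20     # hypothetical editor time per desk decision
WORK_WEEKS_PER_YEAR = 48

for multiplier in (1, 5, 10):
    hours_per_year = SUBMISSIONS_PER_YEAR * multiplier * MINUTES_PER_DESK_READ / 60
    hours_per_week = hours_per_year / WORK_WEEKS_PER_YEAR
    print(f"{multiplier:>2}x volume: {hours_per_week:5.1f} editor-hours/week on desk decisions alone")
```

Under these made-up parameters, a 10x submission wave pushes desk decisions alone past 100 hours a week, which is the "not enough hours in the day" point in concrete terms.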

Richard Devine:

Thanks for that, Scott. That said, it's clear that AI helps productivity and can mitigate many of the more tedious tasks we've found ourselves doing (e.g., prepping decks and curating datasets), which is great on one hand. But what makes you optimistic (if you even are) about it being a net good in the research game for quantitative social scientists?

scott cunningham:

It's the equilibrium. What happens when everyone increases their marginal productivity by 5-10x? What does that even mean? Well, some of it is finishing the same work faster. What does that person do with the extra time? Do they write more papers? Sure, assume that. But some do automation work. They produce more papers, most likely drawn from the lower end of the distribution, at least until automation improves and shifts the automation discoveries right. But then, if it moves right, what role does the PhD play? At some point it's unclear they play a role at all. Some of what I'm saying mixes quantities, though. One claim is that in equilibrium we end up with more lower-quality papers, depending on time use per paper and how much skill per minute even matters. So what does the distribution become? From the perspective of science, this gets pretty deep. How well does the current production function handle it? It seems plausible that papers are both better than they'd be without AI and yet the mean falls and the variance rises, due to new entrants in the left tail from pure AI automation. And I bet we see a huge factory model of papers. How could we not, given we already see p-hacking, outright fabrication, and low-quality work anyway? Some people will become mills, which isn't bad per se, but it's not clear it helps humanity either.
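As a toy illustration of that "mean falls, variance rises" claim, here's a small simulation. Everything in it is made up (the quality distributions, the 5x entrant volume); it just shows the mechanical effect of a left-tail influx on the pooled distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# All parameters are invented for illustration only.
incumbents = rng.normal(0.0, 1.0, size=10_000)   # pre-AI paper "quality"
entrants = rng.normal(-1.5, 1.2, size=50_000)    # a 5x wave of left-tail automated papers
pooled = np.concatenate([incumbents, entrants])

for label, sample in [("before", incumbents), ("after", pooled)]:
    print(f"{label}: mean = {sample.mean():+.2f}, sd = {sample.std():.2f}")
```

Under these invented parameters the pooled mean drops and the spread widens, even though no individual incumbent paper got worse, which is exactly the mixing-of-quantities point.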

I mean, I need to think all this out a bit more, but it seems like so much of this comes down to the production functions of skill and these new technologies: whether skill can be maintained alongside them, whether people will invest in the same skills if their time use really does drop to negligible levels, and what the quality of work looks like on average and in its spread. Nothing is clear, except that it does seem probable publishing will get harder and noisier. And that seems like it could amplify biases.

Richard Devine:

Nice points. The volume of low-quality work will no doubt increase (it has become nearly costless, right?), but I believe that's already happened to some degree. I imagine the volume of high-quality work will increase quite a bit too, which definitely increases the noise. The most productive will become even more productive.

Since attention is a limiting factor, one possible outcome is that academia becomes more insular. Top-tier work usually has to be vetted in some way by established scholars. If AI makes data work and beautiful output cheaper, it stands to reason that those established scholars will face more demands on their time, perhaps causing them to become more insular and work only with their closest collaborators. I hate to sound pessimistic, because AI is clearly a net good in a lot of ways. But I do worry about it reinforcing existing inequalities in academia.

scott cunningham:

Yeah, totally. I also don't know why you'd need massive labs of predocs with this, tbh. It's not like those are cheap, and resources are scarcer than ever. I think even in partial equilibrium this is super disruptive, before we even get to some weird new place, wherever this is going and however it brings us all along in its wake. I absolutely worry about inequalities too. It's so strange that one product -- Claude Code and similar products like Codex -- could turn out to be this disruptive when it seemed like ChatGPT was going to be the disruptive one. But I really think this is the real deal, and ChatGPT was just a chatbot you talked to. Which is great -- but I think this is almost entirely disconnected from that thing's usefulness.

Jason Gantenberg:

This is an interesting take and rings mostly true. I admit to being a skeptic of AI, not so much as a tool with obvious uses but as a seductive means to offload the responsibility we have as researchers to think. It's too tempting to offload intellectual struggle (for those who view the struggle as an obstacle rather than the very point of the thing), and so, like you said, there are security risks in the intellectual sense. (For my part, I love to code, so I view my becoming a fossil in that department with a lot of regret and resentment. So it goes.)

I think you were hinting at these other risks but didn't mention them explicitly. Researchers are generally pretty bad at computing security, and I suspect we're already swimming in an ocean of FERPA, HIPAA, and other violations attributable to people uploading that kind of data to web-based AI tools. This is a big practical obstacle for people in the life and social sciences who deal with actual participant data.

scott cunningham:

I think this is a nightmare until it's fixed. It's going to get fixed, 100%, because the gains are just too large for the market and agencies not to fix it, but I bet it's going to absolutely suck until we get to that point.

Tyler Ransom:

Thanks, Scott! I have been using CC for about 1 month now, and I'm still on the $20/mo plan and it's working out well so far. We'll see how long I last...

Raymond Guiteras:

Hey Scott, can you recommend a source - either one of your own posts or something from someone else - on the first N things a total novice should do to get started?

And I mean *total* novice, as in "first, install this program in that directory".

Raymond Guiteras:

Hang on, this looks like it might be useful. Is it a reliable source, though? ;)

Commit 7fbc4eb

Rewrite skills README for uninformed laypeople

Explains Claude Code from scratch, adds prerequisites section, explains the two-directory structure in its own section, removes YAML jargon from the main flow. Written for someone who has never used Claude Code but is eager to try.

scott cunningham:

I think the first thing to do might be to have Cowork in Claude Code organize something for you, just to be safe. This isn't exactly what you're asking for, but it can explain Cowork vs. Code:

https://cornwl.github.io/files/claude-academic-guide.html

I think Cowork might be a good way to get your feet wet. So scan that guide, find a simple set of tasks, go into the app, connect Cowork to a local directory, and ask for them. You mainly just need to try safe things a little and get a feel for it.