How Claude Code Has Changed My Work (Part 4-ish): More about Claude Code, its Creator, and Latent Knowledge
Sorry I’m so late in updating my Claude Code series. If you’ve been following the news, though, you’ve probably seen a ton of articles over the last couple of weeks about Claude Code and what a revolutionary piece of software it is for programmers.
The thing worth noting is that those pieces are written mostly by software developers, not by empirical social scientists or economists. In fact, very little of what I have seen was written by the kind of worker I think of as the target audience and regular reader of my substack. And I think that’s because, so far, if you read between the lines of all the alleged AI productivity gains for programmers, those gains have in general accrued to the computer science tribe.
Which is not to say that empirical social scientists aren’t using AI, because they certainly are. I just mean that between the sort of use you see presented at large and the type of work we actually do in empirical social science, there is enough of a gap that it warrants separate explanation, if only to translate what the use cases (beyond trivial ones) actually are. So I’m going to try to do that here.
This will be a rambling post. I keep trying to think of a way to organize it, but it’s too much work. I’m therefore just going to write little sections.
Boris Cherny, an Economics Major, Invented Claude Code in Late 2024
Before I dig into the actual workflow stuff, let me tell you what I have learned about the creator of Claude Code. Yes, it was created at Anthropic, but it was also created by accident. The person who built it is named Boris Cherny. Here’s what I’ve learned about him.
Boris wasn’t an AI researcher.
He studied economics at UC San Diego, graduating in 2011.
He taught himself to program, started working at startups when he was 18, and eventually wrote a well-regarded book on TypeScript for O’Reilly.
He spent eight years at Meta, rising to Principal Engineer—a senior individual contributor role.
He led engineering for Facebook Groups.
He joined Anthropic in September 2024. And it was not to build Claude Code. Rather, he joined to work on the Claude chatbot more generally.
If he wasn’t hired to make Claude Code, and he made Claude Code, then what happened? Well, that is an interesting story in and of itself. From what I have been able to gather, what happened next came from a habit Boris has talked about in interviews: he builds side projects. He’s said that most of his career growth came from tinkering on things outside his main job. When he hires people, he looks for the same pattern—people with hobbies, side quests, passion projects. “It shows curiosity and drive,” he’s said.
First, let me just say that that was super encouraging to hear, because I also build side projects. Mixtape Sessions is a side project. My podcast is a side project. This substack is a side project. I have way too many side projects to list. When people ask me what my hobbies are, I sheepishly say something like “I am trying to build an academic genealogy of Orley Ashenfelter, a labor economist at Princeton’s Industrial Relations Section …” Many of these I just have to work on or I will die. So it’s good to know that some people think it’s actually a good thing.
Anyway, when Boris got to Anthropic, he immediately started tinkering with Claude. He wanted to learn the Claude API, so he built a little terminal tool that connected to Claude. The first version of what would become Claude Code could tell him what song was playing on his computer.
Then he had a conversation with a PM at Anthropic named Cat Wu, who was researching AI agents. And that conversation sparked an idea. What if he gave Claude access to more than just the music player? What if he gave it access to the filesystem? To bash?
So he tried it. I’ll paraphrase and dramatize what happened next.
“The result was astonishing. … Claude began exploring my codebase on its own. I would ask a question, and Claude would autonomously open a file, notice it imported other modules, then open those files too. It went on, until it found a good answer. … Claude exploring the filesystem was mind-blowing to me because I’d never used any tool like this before.”
Look at that closely. Claude surprised him. Why? Because he did not teach Claude how to navigate his codebase. He didn’t program anything algorithmic at all. He didn’t write “when you see this import statement, open that file.” Rather, he just gave Claude access to the filesystem, which gave Claude the ability to read files, and Claude immediately knew what to do with it.
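To make the mechanism concrete, here is a toy sketch (my own, not Anthropic’s actual code) of the pattern Boris is describing. The host program only exposes primitive tools, “list this directory” and “read this file.” Notice that nothing in it says which files to open or in what order; that judgment is left entirely to the model.

```python
import os

def list_dir(path="."):
    """Tool: let the model see what files exist."""
    return sorted(os.listdir(path))

def read_file(path):
    """Tool: let the model read any file it asks for."""
    with open(path) as f:
        return f.read()

TOOLS = {"list_dir": list_dir, "read_file": read_file}

def agent_loop(model, question, max_steps=10):
    """Give the model the question; execute whatever tool calls it
    makes, append each result to its context, and repeat until it
    answers or runs out of steps."""
    context = [("user", question)]
    for _ in range(max_steps):
        action = model(context)          # the model decides the next step
        if action["type"] == "answer":
            return action["text"]
        tool = TOOLS[action["tool"]]     # otherwise, run the tool it chose
        result = tool(*action.get("args", []))
        context.append(("tool", str(result)))
    return None
```

In real Claude Code, `model` is Claude itself deciding, turn by turn, which file to open next; here it is any function mapping the conversation so far to a next action, which is exactly why the exploration behavior Boris saw was never programmed in.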
So, how does Claude know how to read the files in the filesystem if Claude was not designed to do that, and no one had ever programmed him to do that? That’s the million dollar question. And the answer appears to be hidden in plain sight.
Claude was trained on billions of lines of code. But it is not just the code as syntax. This is the key, and it’s connected to something David Autor has written about regarding the computerization of work: computers outperform humans when the work can be written down as a series of steps, and yet AI (or LLMs, rather) cannot do that sort of algorithmic work well at all.
But LLMs can do well the sort of work that cannot be written down, work based on a type of knowledge that is latent but not easily communicated between humans. Autor calls this the Polanyi Paradox: we know more than we know how to explain.
Well, here’s the deal: LLMs can’t follow algorithms well at all. Which is why, when people ask one to do tasks that are more or less algorithmic in nature, it does badly. Ask it to find the citations for a claim, and it comes back with hallucinated texts. But ask it to uncover the meaning in something, and it can. Why?
Because embedded in human speech are several things: there’s the syntax, but there’s also the inchoate meaning behind the words. Humans pick that up, and apparently so does Claude, and so does ChatGPT. Many of us saw that with the chatbots; it was what made them seem so human-like. But because Claude was trained on billions of lines of code, something like that is apparently going on with regard to projects as well.
Code is more than just syntax. It’s not merely documentation for Stata and R. Rather, code comes embedded in context: tutorials, documentation, Stack Overflow posts, Stata listserv posts, GitHub repositories with their full history. Claude has seen it all: countless examples of how programmers actually work, including things about the work that even the programmers themselves may not recognize as the work. Claude has seen them open files, notice imports, follow those imports, trace their various dependencies, then go back. Back and forth a hundred times. Claude saw it all.
He saw not just the syntax of the code. He saw the project. Code is never the goal in anything. The project is the goal. And Claude has reviewed code, but more important than that, Claude has reviewed the projects.
This is the knowledge that Autor has emphasized AI, and LLMs in particular, can access: the latent knowledge contained in human speech. And if you have the latent knowledge, and you also have the syntax, whatever the medium, then you may have a very large share of what is required to complete a project.
Conclusion
I’m going to stop there. I think these posts need to be digestible, and this one is an easy history piece as well as a conceptual piece about Claude Code, so I want to stop now so that the next posts can focus more on my own particular workflow. I want to keep emphasizing to readers, though, that Claude Code is not merely the chatbot Claude, even though the chatbot Claude and Claude Code are both based on 4.5, which is a very powerful LLM.
I also want to emphasize that Claude Code is not just another version of GitHub Copilot or Cursor, both of which some of you have probably heard of but haven’t wanted to invest time in yourselves. So you’ve been doing the copy-paste method, using ChatGPT and Claude to “do stuff”. If an AI agent is not rummaging around the files on your computer “doing stuff”, like reading things, writing things, and even running regressions, then you have not experienced this yet.
Claude Code is an experience good. Until you experience it, you will not appreciate how revolutionary it is. But once you do experience it (and trust me, you will, most likely very soon), you will realize, as I did, that there is no turning back. All the complaining about how AI is destroying the world will become something you are mildly curious about and mostly resigned to. You will switch. You have to experience it first to know that I’m right, though. And if all you have as a mental model of what Claude Code is and can do is a chatbot, and you’ve been particularly skeptical of chatbots’ ability to do creative work, first of all I will just say that I think you are confusing user error with chatbot error in most cases. I have rarely heard someone say they could not get a chatbot to do something that I have found I could get it to do a hundred times over. Usually it is just complaining for the sake of complaining.
But put that aside. It doesn’t matter. Until you see Claude Code fire up a directory of one of your projects and run around in it, you won’t know. The real killer app, though, is the decks Claude Code will make for you. I am positive that for many people, the moment will come when they see it make a Beamer deck for them, having only described the deck they want in words like,
“I want you to make the most original, beautiful deck, with beautiful figures, and beautiful tables, following an unknown latent concept of the rhetoric of decks themselves, which I know you know since you have literally read every single deck written in the history of humanity, about my paper and my code and my tables and my figures. I want this to be a deck that anyone, an intelligent layperson, would want to pay attention to. You can use whatever theme you want, but I want the final product to be so original and unique to this project that no one can even detect what that original theme even was.”
When you see the deck that comes out of that, you will say, “Anthropic, take all my money.”
I’ll talk more about this later, and show some decks I feel comfortable sharing, but trust me: 2026 is going to be the year of Claude Code for you.