Scott's Mixtape Substack
The Mixtape with Scott
[Rerun] Tymon Słoczyński, Econometrician, Brandeis University

Greetings from San Sebastián, Spain, where I am on holiday with my daughter for another couple of weeks. I still have not recorded any new podcasts, as I realized only after I left that I did not pack my microphone. I didn’t want to buy a new one, and I wasn’t 100% sure that my Apple AirPods would sound good enough. All of that is to say — excuses.

So, this week we are going back down memory lane to an interview I did 1-2 years ago with one of my favorite young up-and-coming econometricians, Tymon Słoczyński from Brandeis University. Tymon is the author of a wonderful 2022 article on OLS models with, I’ll call it, “additive and separable” covariates under unconfoundedness. Autocorrect wanted that to be “addictive” instead of “additive,” which would’ve been a really clever Freudian slip.

Tymon’s interview was one of my favorites. I know I say that about every interview, and they all do feel like that, but this one really, really feels that way. And I think you’ll feel the same way.

One of the things I love about Tymon’s articles is how excellent the writing is. His paragraphs oftentimes feel like the kind you can tell he wrote, and rewrote, and rewrote, and rewrote, like a hundred times. It amazes me that English is not his first language and he writes this well. I don’t just mean it’s clear — I mean it’s beautiful writing. Here’s a paragraph I think is outstanding, for instance:

“To aid intuition for this surprising result, recall that an important motivation for using the model in equation (1) and OLS is that the linear projection of y on d and X provides the best linear predictor of y given d and X (Angrist & Pischke, 2009). However, if our goal is to conduct causal inference, then this is not, in fact, a good reason to use this method. Ordinary least squares is “best” in predicting actual outcomes, but causal inference is about predicting missing outcomes, defined as y_m = y(1) × (1 − d) + y(0) × d. In other words, the OLS weights are optimal for predicting “what is.” Instead, we are interested in predicting “what would be” if treatment were assigned differently.”
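If you want to see the point in the quote for yourself, here is a toy NumPy simulation of my own making — the setup and numbers are hypothetical illustrations, not from Tymon’s paper. With a heterogeneous treatment effect and a propensity score that varies with the covariate, the OLS coefficient on the treatment in an additive, separable regression is a variance-of-treatment-weighted average of the group-specific effects, and it lands away from the ATE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# One binary covariate; treatment probability depends on x,
# so unconfoundedness holds conditional on x.
x = rng.binomial(1, 0.5, n)
p = np.where(x == 1, 0.9, 0.5)       # propensity score differs by x
d = rng.binomial(1, p)

# Potential outcomes with a heterogeneous effect:
# the effect is 1 when x == 0 and 3 when x == 1, so ATE = 2.
y0 = x + rng.normal(0, 1, n)
y1 = y0 + np.where(x == 1, 3.0, 1.0)
y = d * y1 + (1 - d) * y0

# The "missing" outcome from the quote: y_m = y(1)*(1 - d) + y(0)*d —
# the counterfactual that causal inference is actually about predicting.
ym = y1 * (1 - d) + y0 * d

# ATE computed from the (normally unobservable) potential outcomes:
ate = (y1 - y0).mean()               # about 2.0

# OLS of y on d and x — the additive, separable specification:
X = np.column_stack([np.ones(n), d, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
ols_d = beta[1]                      # about 1.53, well below the ATE

print(f"ATE: {ate:.3f}, OLS coefficient on d: {ols_d:.3f}")
```

OLS overweights the x = 0 group here because treatment is more variable there (p = 0.5 gives Var(d|x) = 0.25, versus 0.09 when p = 0.9), so its smaller effect dominates the coefficient. That is exactly the sense in which the OLS weights are optimal for predicting “what is” rather than “what would be.”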

A lot of his sentences are so precise, so insightful, that I wish I had written them myself. It’s superb, he’s superb, and if you haven’t listened to this one, I hope you do; and if you already have, I hope you listen to it again.

Thanks again for all your support. Wish me luck as I wrap up my summer in Europe, start making my plans to move to Boston, teach new students, meet new colleagues, and make new friends. And get some new clothes to replace the ones the gentleman who stole my luggage on the train in Switzerland is now in possession of.

