It got below freezing last night, and I stayed up late binging Slow Horses on Apple TV, so I slept in as late as I could. That means I'm a little late posting the weekend list of links, so apologies for that. (That image is what DALL·E 3 made for me when I pasted this entire post into it and asked it to make an image based on it.) Part of this is behind the paywall, but the first few are free. And if you have any questions for Monday's Mixtape Mailbag, shoot me an email and I'll put them in the queue!
“Decomposing Triple Differences Regression Under Staggered Adoption” by Anton Strezhnev
Maybe one day soon, I'll redo my "first and second waves of diff-in-diff" series to check just when triple differences got adopted. It was introduced in 1994 by Jon Gruber in an issue of the AER, as I discussed here, but that's not the same as saying it got swept up in the waves of diff-in-diff usage. My hunch is that there have not been very many papers using it. I once discussed a paper claiming to use triple differences that very clearly did not have the correct regression specification. Bottom line, there is a lot about diff-in-diff that we now know in great detail that has yet to spill over to triple differences, and I suspect that's because it is just not a very common design, most likely viewed as even less credible once you learn, as I documented here, that it uses as its identification assumption "parallel bias", or what Anton calls "identification under a constant violation of conditional parallel trends".

In fact, until I ran one the other day in a simulation, I had never even seen someone do a triple differences event study to check whether the identifying assumption holds, an assumption that until recently (Olden and Møen 2022) had never even been clearly spelled out formally in potential outcomes notation. My hunch is that we are probably seeing a little spurt of activity trying to clarify precisely what triple differences is, when to use it, and what exhibits to examine to sniff out whether its assumptions hold. And Anton Strezhnev's working paper is the newest one I've seen outside of my substacks to do that. This is like the triple differences analog of Andrew Goodman-Bacon's now classic 2021 article in the Journal of Econometrics: a decomposition of how a regression-specified triple differences model performs under staggered adoption. I am going to leave this one open for now, but I think it's likely one to review whenever you are thinking of using triple differences.
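Since that "parallel bias" assumption comes up so often here, it may help to see a rough sketch of it in potential outcomes notation. This is my own shorthand, not a verbatim reproduction of Olden and Møen (2022) or Strezhnev, and it only covers the simple two-period, non-staggered case: index states by s ∈ {T, C} (treated and comparison), groups within a state by g ∈ {E, N} (eligible and ineligible), and time by t ∈ {0, 1}. Triple differences does not require parallel trends within either state; it only requires that the eligible-versus-ineligible gap would have evolved the same way in both states absent treatment:

```latex
% "Parallel bias": the violation of parallel trends within a state is allowed;
% it just has to be the same in the treated and comparison states.
\begin{aligned}
  &\underbrace{\mathbb{E}\big[Y_{T,E,1}(0) - Y_{T,E,0}(0)\big]
    - \mathbb{E}\big[Y_{T,N,1}(0) - Y_{T,N,0}(0)\big]}_{\text{bias in the treated state's DiD}} \\
  &\qquad =
  \underbrace{\mathbb{E}\big[Y_{C,E,1}(0) - Y_{C,E,0}(0)\big]
    - \mathbb{E}\big[Y_{C,N,1}(0) - Y_{C,N,0}(0)\big]}_{\text{bias in the comparison state's DiD}}
\end{aligned}
```

Each side is exactly the amount by which ordinary parallel trends fails within a state, so taking the third difference nets it out only if that violation is the same on both sides, which is why "parallel bias" or "a constant violation of parallel trends" is the honest way to describe the assumption.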
Culturally speaking, it does seem like Americans, starting with my generation, have had crushes on tech CEOs, and sometimes those crushes border on infatuation. There's Bill Gates, Steve Jobs, Mark Zuckerberg, Elon Musk, and then this last year, Sam Altman, OpenAI's charismatic CEO, who was ousted by the board almost two months ago for reasons that are still not clear. The original claims that he was ignoring signs that ChatGPT was evolving into Skynet from Terminator were probably, if I had to guess, Altman's own PR team trying to make the board look like idiots. My hunch is that the part of their statement to home in on is the claim that he had not been truthful in his communications with the board. I believe there is some kind of investigation ongoing, so maybe we'll learn soon, or maybe we'll just have to wait until the inevitable Michael Lewis/Walter Isaacson biographies and Aaron Sorkin biopic come out. Until then, we must indulge our tabloid tastes and rely on bits here and there.

The most recent spate of news has been about the vast network of stratospheric friendships Altman has collected along the way and how they have, time and time again, gotten him out of some hairy jams, including this most recent one. These articles were interesting to read (thus breaking the rule that the Saturday articles have to be ones I haven't read yet), because you learn more about Altman's career, as well as his days at Stanford. His time as Y Combinator's President, though, is where I suspect his Rolodex grew very large and he accumulated an even bigger following. In this particular jam with the board he got help from Microsoft, but he also got help from Brian Chesky, the billionaire CEO of Airbnb, a firm that had gone through Y Combinator. None of that actually comes across as scandalous to me, despite the innuendo that kind of layers on top of the two articles. It just reminds you that eigenvector centrality is probably a real phenomenon, that the most powerful people are probably the people closest to other powerful people, and that whatever Altman's is, it's probably the highest of anyone in Silicon Valley.
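For the curious, eigenvector centrality is the network measure that formalizes exactly that idea: your score is high when your neighbors' scores are high. Here is a toy sketch in Python using networkx; the names and ties are entirely made up for illustration and have nothing to do with anyone's actual network:

```python
# Toy illustration of eigenvector centrality: a node is central
# when it is connected to other central nodes.
import networkx as nx

# Hypothetical mini-network (pure illustration, not real data).
edges = [
    ("altman", "microsoft"), ("altman", "chesky"), ("altman", "y_combinator"),
    ("chesky", "airbnb"), ("airbnb", "y_combinator"),
    ("microsoft", "openai"), ("altman", "openai"),
    ("y_combinator", "startup_a"), ("y_combinator", "startup_b"),
]
G = nx.Graph(edges)

# Scores rise when your neighbors themselves are well connected.
centrality = nx.eigenvector_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:14s} {score:.3f}")
```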