Greetings from Cordova, Tennessee, where I am today and for a few days visiting my parents and picking up my brand new 2002 Ford Explorer that my mom gave me so that my son and I can drive across the country and I don’t have to use my precious 1991 Volvo station wagon. It’s actually pretty cool to have this thing, and I’m looking forward to taking off on the 26th with him (giddy even), but until then, I have to get ready by closing all the tabs I’ve left open in my browser this week.
Willie Nelson eats the same breakfast every day. You may not know this, but Willie attended Baylor University (though he didn’t finish). The other day I took my daughter to get her driver’s license and saw this awesome mural. It was in another town, not Waco.
New paper on diff-in-diff and covariates that details an explicitly stated assumption the authors call “the common causal covariate assumption”. It looks interesting and matches well with the way I think and the way I wrote the second edition of the mixtape. It even has DAGs, like I use, to explain “the nature of covariates” (their phrase). But theirs includes an estimator, so I’m curious what issue with doubly robust estimation it’s addressing.
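Since doubly robust estimation came up, here is a minimal sketch, on made-up data, of what a two-period doubly robust DiD looks like, in the spirit of the Sant’Anna and Zhao DR-DiD. This is not the new paper’s estimator; the data-generating process, variable names, and numbers below are all hypothetical.

```python
# A hypothetical sketch of a two-period doubly robust DiD on simulated data.
# Not the estimator from the paper linked above; just an illustration of
# combining an outcome regression with a propensity score.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Simulated covariate, treatment, and two-period outcomes with a true ATT of 2.0
x = rng.normal(size=n)
d = rng.binomial(1, 1 / (1 + np.exp(-x)))       # treatment more likely when x is high
y_pre = x + rng.normal(size=n)                  # pre-period outcome
trend = 1.0 + 0.5 * x                           # x-dependent trend (why covariates matter)
y_post = y_pre + trend + 2.0 * d + rng.normal(size=n)
dy = y_post - y_pre

# Nuisance 1: outcome regression of the change on x, fit on control units only
mu0 = LinearRegression().fit(x[d == 0].reshape(-1, 1), dy[d == 0]).predict(x.reshape(-1, 1))

# Nuisance 2: propensity score for treatment given x
ps = LogisticRegression().fit(x.reshape(-1, 1), d).predict_proba(x.reshape(-1, 1))[:, 1]

# Doubly robust ATT: reweight controls by ps/(1-ps) and subtract the outcome regression
w_treat = d / d.mean()
w_ctrl = ps * (1 - d) / (1 - ps)
w_ctrl = w_ctrl / w_ctrl.mean()
att_dr = np.mean((w_treat - w_ctrl) * (dy - mu0))
print(f"DR-DiD estimate of the ATT: {att_dr:.2f} (truth is 2.00)")
```

The appeal of the doubly robust version is that the estimate stays consistent if either the outcome regression or the propensity score model is right, which I assume is the comparison the new paper’s estimator is inviting.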
Long article about what’s in these large language models’ training data and how much of it there is. It casually mentions, too, that the case against OpenAI brought by several authors (e.g., Sarah Silverman) has been partially dismissed, which I didn’t know.
About 8 years ago, Spotify was accused of making up fake artists but they denied it.
I’m so deep into superhero fatigue, so absolutely far down the demand curve, that I thought I was done for. Then James Gunn releases the new Superman trailer and it’s like I felt alive again. The stone was stony again. Can’t wait. Krypto, Guy Gardner, one of the Hawks (Hawkgirl?), and Mr. Terrific all show up. But Krypto!
OpenAI wrapped up its “12 in 12” days-of-Christmas event on Friday with the announcement of a new chain-of-reasoning LLM called o3. It’s said to be impressive for math, physics, and programming. I continue to suspect I’m not the intended audience for these COR models.
The Atlantic says legalizing sports betting was a bad idea.
Studies of ChatGPT have been showing it can boost academic performance, motivation, and higher-order thinking skills while reducing mental effort. However, most of the evidence comes from university-level experiments with notable methodological gaps, like insufficient sample sizes and short-term assessments. This article says that to move forward, researchers should focus on long-term impacts, rigorous evaluation methods, and more complex, skill-based assessments to truly understand ChatGPT’s potential in learning. I remain convinced our entire set of assignments must change, many classic ones dumped entirely and forever, and that we probably even need to change what our learning objectives are. I bet we will eventually, though.
This article explains how a 1980s group managed to get more radio airtime. Honestly I’m not even sure that’s what this article was about.
I’ve known for a while that the Coen Brothers’ father, Edward Coen, was a professor of economics at Minnesota, but I’ve had trouble finding much about his scholarship. ChatGPT search, though, amazingly found him in the acknowledgements of this 1969 article. How in the world did Cosmos do that? I have no clue.
Making new friends costs money. This one I couldn’t read because I don’t have the Bloomberg subscription, but the opening paragraph reminded me of this skit.
More about the hack that may have compromised communications between iPhones and Androids. I told my friends with green bubbles we cannot be friends anymore, so now I’ll probably have to pay for the most expensive tier in that previous article if I want any new ones.
My son is a massive Tupac fan. He seems to know every conceivable detail about him and Biggie, too. He explained a lot of their history, not just the stuff I knew, but also how they met and their prior friendship. This article was part of my late-night searching afterwards. It shared how they met and how much they liked each other initially. This all happened because we watched half of the Notorious B.I.G. biopic. Can’t believe that his son played him. Kind of wild, and he perfectly nailed the part too, tbh.
Top 50 albums of the year. Number 2 is especially supposed to be great.
Nate Bargatze has a new Xmas show out, which I may watch tonight. Here’s a little about him and his super clean style (no joke). I didn’t quite realize what a huge boost he may have gotten from the Washington skit on SNL, but I’m not surprised. I really like his bit, though, where he explains that digging holes is a lot harder than it looks.
This 10-degree bag at Dick’s Sporting Goods has a good review on YouTube (but not worth posting), but they’re out of stock in Waco, so don’t even try. I ordered this one on Amazon instead (0 degree), which is supposed to be here on Christmas Eve.
OpenAI’s new “deliberative alignment” training teaches models to reason through safety policies explicitly before responding, leading to safer and more precise outputs. Their o1 model, trained with this method, significantly outperforms GPT-4 and other leading models on safety benchmarks, showcasing how enhanced reasoning can improve both safety and performance. There are interesting examples on the page (it’s the OpenAI link to the white paper).
Interesting article at Reason magazine. It’s not that Reason, or any magazine, surprises me by disagreeing with an academic study. That happens all the time. Rather, this Reason journalist didn’t just disagree with a study; they disagreed by using the original study’s own data and a diff-in-diff to show the authors were wrong! That’s hard core. That’s a level of diff-in-diff literacy among journalists I’d never seen before. Maybe nature is healing.
A man lost around 150 pounds, give or take, by eating only red meat and maybe eggs for a year. I think it was more than that, but I already closed the tab, so I’m not sure. I asked Cosmos about it, and he said it doesn’t sound super healthy. I didn’t push it.
A friend shared this with me. This report sheds light on Minnesota’s implementation of the 340B Drug Pricing Program, a federal initiative designed to support safety-net healthcare providers by offering discounted medications. It highlights the program’s lack of transparency, revealing that while hospitals and clinics collectively generated $630 million in net revenue, questions remain about how these funds are used and their broader impact.
Apparently Warren Buffett once said he would’ve bought hundreds of thousands of US houses if he could have. Notice he said “hundreds of thousands”. I take that to mean two things: first, held as a large portfolio, the returns would have been large; and second, there must not be a real estate mechanism for doing this at scale. I’ve never thought about it before; I may email my colleague and ask her to help me understand what keeps even someone like him from doing it.
Google’s AI video generator is probably better than OpenAI’s Sora. Sora probably isn’t going to be as interesting to me as I thought. I played around with it, but my videos are pretty lame and I don’t care enough to learn how to do it better. Anyway, here’s a video I made. That girl turning into a dog was, can you believe it, not in my instructions.
A guy who ran an early social media platform (one I was unfamiliar with) says Bluesky won’t make it. He says none of these platforms do what they were created to do, which is create friendships, and besides, friendship creation isn’t a viable business model anyway.
A lot of people a year ago were hoping “causal AI” meant you could automate causal inference using generative AI somehow. Usually it was some version of the AI coming up with a DAG and selecting covariates to satisfy unconfoundedness. But how could you know if it had? You’d need to know the ground truth in the first place to say whether the method “worked”.
Nick Huntington-Klein and Eleanor Murray have a new paper that evaluates whether LLMs can automate causal inference by identifying confounders, using the Coronary Drug Project as a test case. Much like LaLonde’s 1986 AER paper revealed the limitations of non-experimental methods for estimating causal effects, this study highlights the shortcomings of LLMs in reliably identifying confounders, even when the necessary information is present in their training data. The inconsistent and mediocre performance across prompts and models underscores that LLMs are far from replacing expert judgment in causal inference.
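To make that ground-truth point concrete, here is a tiny, purely hypothetical simulation (the variable names and numbers are made up). Because we wrote the data-generating process ourselves, we know z is the one true confounder and what the true effect is, so we can grade any covariate selection. With real data there is no such answer key, which is exactly the problem with certifying an LLM’s covariate picks.

```python
# A hypothetical toy simulation illustrating why evaluating "automated confounder
# selection" requires knowing the ground truth: here we *built* the DGP, so we
# know z is the only confounder and that the true treatment effect is 1.5.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z = rng.normal(size=n)                              # the (known-to-us) confounder
d = (z + rng.normal(size=n) > 0).astype(float)      # treatment depends on z
y = 1.5 * d + 2.0 * z + rng.normal(size=n)          # outcome: true effect of d is 1.5

def ols_coef(y, X):
    """Return least-squares coefficients from a regression with an intercept."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols_coef(y, [d])[1]        # no adjustment: biased upward by z
adjusted = ols_coef(y, [d, z])[1]  # adjusting for z: recovers roughly 1.5

print(f"Unadjusted estimate: {naive:.2f}")
print(f"Adjusted for z:      {adjusted:.2f}   (truth is 1.50)")
```

That’s part of why benchmarks like LaLonde’s experiment and the Coronary Drug Project get used this way: they come with something close to a known answer.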
More character.ai stuff in the news. Previously it had been young people falling in love with their chatbots. One young man with autism fell in love with his and committed suicide, and his family filed a lawsuit. This new story is probably not great for the company: there are a lot of school-shooter chatbots on there, and people are spending a lot of time talking to them.
Jonathan Meer of Texas A&M explains charitable giving on a local television channel. Look at that guy’s gorgeous hair! Deepfake, obviously.
To quote “I Think You Should Leave with Tim Robinson,” the Metta meditation is pretty much in my “Q zone”. It won’t surprise me if I come back from my two-week road trip with my son to the Redwood forest with it tattooed on me. All my tattoos, I realized, are words and sentences and not art, by which I mean not pictures.
Friendship after 50 has value for living. <insert here comment about homophily>. Can’t take friends with you, and we all die alone.
Sam Altman throws his ante in the pot. Little does he know Elon deals from the bottom of the deck.
Interesting, though, that OpenAI chose to produce this timeline of Elon wanting OpenAI to be for-profit. I keep thinking they are feeling desperate, like something behind the scenes is not boding well for them and the new administration could go either way for them.
Here’s Altman talking more about Elon. I don’t see what Sam gains by talking about Elon this much, tbh.
Rather than make a New Year’s resolution to read a large number of books, this guy says to make your goal reading 100 pages a day. Interesting idea. It reminds me of when a marriage therapist told us, “Just make sure the room is slightly cleaner when you leave it than when you got there.” I said, “Oh right. In long-run equilibrium your house is always clean.” Probably the same here: 100 pages a day is a book goal without being framed as one. I’m thinking maybe I’ll choose the standard length of an article, so that at worst it’s a paper and at best it’s a book. 35 pages a day.
Now this is super cool. I actually did read a little of this one. The poet Emily Dickinson never published much while she was alive; I think maybe 1 or 2 poems? Most of her work was published posthumously. Well, check this out: someone found a new poem of hers that had been lost. But they found it two degrees removed from her papers. Imagine Dickinson wrote a friend and included the poem. That friend then wrote someone else, copied the poem in, and said that Dickinson’s handwriting was so bad she couldn’t make out one of the words, but otherwise shared it. They found the poem in that third person’s papers, which is why it took a while. Here it is:
A practical guide to shift-share instruments for those interested, by three leading econometricians in the area.
Al Green’s cover of REM’s “Everybody Hurts”. Man, this one hits me right in the feels.
Vanity Fair interviews Billie Eilish every year and has ever since she was 15. I saw this and the previous Green-REM link on Kottke. This is the 8th year. This is a great idea to do with one’s kids, and when I get back I may.
And that’s it with links. Now for a little personal stuff. I picked up the SUV yesterday after getting to Memphis to see my parents. I asked my dad to rank the top five baseball players of all time to test the toll Parkinson’s has taken on his reasoning. He said (in no particular order):
Babe Ruth
Mickey Mantle
Ty Cobb
Ted Williams
Willie Mays
I asked him what about Hank Aaron, and he told me he had too many outfielders on the list already and didn’t feel good about adding another one. Seems like he was trying to have some kind of balanced list.
The SUV has 130,000 miles on it and is probably reliable enough for the trip. I’ll run by the tire store, though, and double-check that the spare is good and that I’ve got everything I need to change it. I also have a portable jump starter I can use to jump the battery if necessary.
But I opted not to get a new stereo for the trip. They couldn’t get it done before the day after Xmas, which is when I’m leaving from Waco. And I just decided: why put a good stereo in a truck I won’t be driving often? So I’m going to have a plan B.
I do need to book the first night, though, at El Cosmico in Marfa, Texas. I’m going to get a teepee for my son and me. If you ever have the chance to visit Marfa, it’s a funky little town. Wish us luck! The countdown to the 26th has begun. Then we pull up the anchor and ship out.