ChatGPT-4 speculates about what happened to OpenAI this weekend
I copied text describing the implosion of OpenAI this weekend into ChatGPT-4 and asked it to write an essay that answered several of my questions.
Breaking news: Sam Altman, who had been fired by the board of OpenAI on Friday, is back. The board's composition has changed in the meantime. Existing board member Adam D'Angelo, whom ChatGPT discusses later, will remain, joined by Larry Summers, the economist and former president of Harvard among other things, and Bret Taylor, who will also serve. This just broke at 3am and I didn't see it before this substack posted, but FYI! And OpenAI is probably going to survive, so the article I already wrote is outdated!
I thought this might be a good time to summarize OpenAI's train wreck of a weekend. I would have posted it sooner, but I already posted on Monday, and Tuesdays are reserved for new podcast episodes. Plus, yesterday's guest was Avi Goldfarb, an economist at the University of Toronto who specializes in artificial intelligence, and I didn't want to push that episode off. So I waited until today, Wednesday, though by now it already seems ancient given how rapidly things have changed at OpenAI, the world's most impactful technology company, which may very well not survive the damage done.
I'll try to briefly summarize at a high level what happened over the weekend through Monday night, when I wrote this. Then I'll provide an excellent essay written by the substack "Understanding AI", which is the best thing I had read yet. And third, below the paywall, I'll post an essay I got ChatGPT-4 to write after feeding it numerous articles I'd collected over the last week or so and giving it a lengthy prompt to guide its essay construction. So here goes!
Brief history of OpenAI’s implosion (by me)
Last Friday afternoon, OpenAI, creator of ChatGPT and one of the fastest-growing companies of the modern era, projected to imminently be worth over $80 billion, announced that it had just terminated its celebrity CEO, Sam Altman. Altman was arguably the most famous CEO since Steve Jobs, and it was remarkable they fired him given that, even by their own admission, there was no malfeasance and no scandal. Plus, Altman had taken the company from $25 million in revenue in 2022 to over $1 billion in 2023. It was hard for even OpenAI's closest partners to comprehend. The fault fell squarely on the shoulders of OpenAI's four-person board. In addition to firing Altman, the board removed Greg Brockman, its president, from his board seat, and Brockman responded by immediately quitting the company altogether.
And as if that were not enough, the drama that transpired afterwards was surreal for anyone fascinated and enthralled by "Big AI", the collection of firms responsible for the advances in artificial intelligence that broke into public consciousness over the last year. News immediately spread that three senior researchers, one of whom had been a principal designer of GPT-4, resigned. Rumor had it that Microsoft, which had invested $10 billion in OpenAI through a mix of in-kind transfers (e.g., Azure credits) and investments, had been given only about a minute's heads-up before the decision. Microsoft's CEO, Satya Nadella, was reportedly livid about the board's actions; Altman had been instrumental to the partnership with OpenAI and was seen as indispensable, so barring a major scandal, you would expect Microsoft to be kept in the loop. If $10 billion doesn't buy more than a phone call 60 seconds before the CEO is fired, then what does?
After word got out that he'd been sacked, praise for Altman poured in from seemingly everywhere. Former Google CEO Eric Schmidt called him his "hero" for what he had accomplished at OpenAI, and that sentiment was echoed widely.
But then it was learned that Microsoft, along with others, was pushing to get him rehired not even 24 hours after his termination, and it appeared to be working. Another OpenAI executive said they were in fact close to securing the rehiring, and it looked like it would happen. Then, fairly early Monday morning, news came that it did not happen and would not happen, as the board was firm in its refusal to bring Altman back. And that, seemingly, was that.
Then two things happened. First, Microsoft's CEO announced that Microsoft had hired both Altman and Brockman to start a new group within the Microsoft ecosystem, noting they would be given latitude to form their own corporate identity, much as LinkedIn and GitHub had successfully done. Second, over 700 of OpenAI's roughly 800 employees signed a letter demanding that the board resign and that Altman and Brockman be rehired immediately; if not, the 700-plus signatories would quit and go to Microsoft, where they had been guaranteed jobs. Weirdly enough, the signers included founder and board member Ilya Sutskever, who had allegedly instigated the board's termination of Altman in the first place and who wrote on Twitter that he regretted what he'd done. The board had apparently told someone that even if its actions destroyed the company, that would be consistent with the firm's mission. It was tense.
While the true story behind this is complex, it doesn't seem controversial to suggest that there are probably a dozen ways the board could have fired its CEO better than it did, and almost no way it could have done so worse. It has left OpenAI in a position where this juggernaut may literally not exist by the end of the year. One would be hard-pressed to find an example as bizarre and consequential as this one.
But as it turns out, the story may be more innocent than it sounds, and there is a backstory to it. The article that explains it all is below. It gets into the weird nonprofit/for-profit structure of the firm's governance, but it also touches on the two ideologies that shape this corporate culture, and maybe AI as a whole: effective altruism and a kind of doomsday, dystopian worry about dangerous AI, which, interestingly enough, was the very reason OpenAI was founded in the first place. Both are built into the very DNA of the company. For those of us far removed from either of these, it may have seemed like a curiosity when we heard effective altruism was at the center of this. I only learned of effective altruism through Sam Bankman-Fried's destruction of his own multi-billion-dollar firm. Yet it shows up again here, coincidentally or not, combined with another ideology to create a very complex world within the firm.
More thorough essay on what happened (from the substack, “Understanding AI”)
So with that said, I want to give you that history here in an article written by the substack "Understanding AI". I have been glued to this story for 72 hours as of this writing on Monday night, and this is the most comprehensive thing I've read yet. If you read one thing, I'd recommend this.
My prompt for ChatGPT-4 to explain and speculate on what happened
But if you read two things, then I'd also recommend what is below. After copying and pasting a ton of articles about OpenAI, the events, and the people involved into ChatGPT-4, I asked it the following prompt. It went through several iterations, so this is the last one.
These paragraphs are too short. Each section should be no fewer than three paragraphs. Please rewrite it. But it needs to be THOROUGH. Exhaustive. Remember my text (repeated below) in what I'm asking for. I want you to imagine something written by George Orwell, Michael Lewis, and Truman Capote.
Okay here is the assignment. I want you to write a very interesting article, in the artful form of a NYT piece or maybe even New Yorker (Truman Capote, Michael Lewis, George Orwell, or some combination) if you think that's more appropriate. Your essay should tell the "complete story of OpenAI's rise and fall". It needs to be about the rise and fall of OpenAI, the rise and fall (or has he fallen?) of Sam Altman, and the rise and fall of all of artificial intelligence (is it too soon?). Please offer analysis of the events described in the above articles I just posted. Your analysis should do the following.
Broadly speaking, find a narrative and a voice that draws the reader in. I want you to both tell a compelling story, a factual story grounded in what you've analyzed as well as what's in your training data, and a story that speculates based on reasonable conjectures.
As you consider the "rise and fall" theme, select which agents have "risen", which agents have "fallen", and which agents are on their "hero's journey" about which we don't yet know enough to say as the story is not yet finished for them. Where are we, in other words, in this story?
First, confirm which parts of the posted articles are fact and summarize them as you understand them. I need you to write from a foundation of sound analysis, and then if you need to venture into more speculation, integrate it into the essay in a way that is clear to the reader. I would enjoy hearing your speculation when it is grounded in reason, evidence, and anything you think is reliable.
Second, confirm what role effective altruism plausibly plays. And using your training data, explain why effective altruism would be involved at all in this, since nothing about effective altruism obviously connects with AI. Please draw the sociological and historical connections, including anything not mentioned in the article, between EA and AI firms. When does it start, why does it start, who are the influencers, and why are these ideas influential at all?
Third, please elaborate on the role of dystopian views of AI and whether those have any empirical basis in what is known about AI. In other words, is that dystopian viewpoint driven by ideology or science? Are there alternative views of AI risk present within the culture of these AI communities, what are they, and how did this biographically and sociologically penetrate OpenAI?
Fourth, why did Elon Musk leave in the first place? And how does his departure connect with the subsequent evolution of OpenAI as a firm? What is his relationship with the board, with Ilya, with the company, and with any machinations that may be realistic yet unproven?
Fifth, are there personality issues and culture issues that you think are important that are not discussed above? Personal politics? Who are the main players? Who are the secondary players? What is Microsoft's role in this?
Sixth, the governance structure -- nonprofit vs for profit -- comes up a lot. What's it doing? Last, please provide a plausible conjecture that explains what appears to have happened using the text I posted in the first place that answers these questions.
What will come next? Where are we? What impact should this have on other parts of the AI ecosystem? It's okay to speculate. Make it accurate, include conjectures that you think are likely, and fit this into a broader narrative of the modern tech ecosystem and economy, but also make it enjoyable to read. I don't know how long the essay should be, only that it should be thorough and provide a coherent and compelling narrative that answers my questions well. It needs to be something a person WANTS to read. Remember, no fewer than three paragraphs per section but no wasted words.
ChatGPT-4's answer to my prompt is below.1