Sam Altman might have a *little* lying problem.
...and by *little*, I mean humongous. We really don't need another Elon.
I’m back, y’all! You might’ve heard that I had an excruciating and invasive surgery a couple of months ago. While my body has almost completely recovered, my brain still struggles to get back into *work* mode. Thankfully, OpenAI CEO Sam Altman’s slow unraveling inspired me to jolt this newsletter back to life. Let’s talk about the scathing details a former OpenAI board member divulged about him this week.
Former OpenAI Boardmember Reveals Why Sam Altman Was Fired · The TED AI Show
To Sum It Up:
Helen Toner was one of the OpenAI board members who voted to remove Sam Altman as CEO in November 2023. In an interview with the TED AI Show, she alleges Altman engaged in a pattern of lies and misrepresentations about his financial interest in the company, the organization's safety processes, product launches, and Toner's academic work on ChatGPT's flaws. Altman was fired but returned to the company within days, and Toner resigned from the board.
🔥 Quotes:
Toner on the Board's Decision to Fire Altman:
...We had been working at the board level as best we could to set up better structures, processes, [and] all that kind of thing to try and improve these issues that we have been having at the board level. But then, in October of last year, we had this series of conversations with these executives, where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before. Telling us how they couldn't trust him [and] about the toxic atmosphere he was creating. They use[d] the phrase "psychological abuse." Telling us they didn't think he was the right person to lead the company to AGI. Telling us they had no belief that he could or would change. [There was] no point in giving him feedback. No point in trying to work through these issues.
They've since tried to minimize what they told us, but these are not casual conversations. They were really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations. So this was a huge deal. This was a lot, and we talked it all over very intensively over the course of several weeks, and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO.
Toner on Why Employees Wanted Altman Back:
...This is obviously the elephant in the room. And unfortunately, I think there's been a lot of misreporting on this. I think there were three big things going on that helped make sense of what happened here. The first is that really pretty early on, the way the situation was being portrayed to people inside the company was [that] you have two options: either Sam comes back immediately, with no accountability [and a] totally new board of his choosing, or the company will be destroyed.
Those weren't actually the only two options. And the outcome that we eventually landed on was neither of those two options. But I get [how] not wanting the company to be destroyed got a lot of people to fall in line, whether it [was] because they were, in some cases, about to make a lot of money from this upcoming tender offer, or just because they love[d] their team. They didn't want to lose their jobs. They cared about the work they were doing. And, of course, a lot of people didn't want the company to fall apart, us included.
Toner on Altman's Sketchy, Damning Work History:
The second thing I think it's really important to know, that has really gone underreported, is how scared people are to go against Sam. They experienced him retaliate against people for past instances of being critical. They were really afraid of what might happen to them.
So when some employees started to say, 'Wait, I don't want the company to fall apart. Let's bring back Sam,' it was very hard for those people who had had terrible experiences to actually say that for fear that if Sam did stay in power, as he ultimately did, that would make their lives miserable.
I guess the last thing I would say about this is that this actually isn't a new problem for Sam. If you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time. And at his job before that, which was his only other job in Silicon Valley, [the] startup Loopt, the management team apparently went to the board twice and asked the board to fire him for what they called 'deceptive and chaotic behavior.'
If you actually look at his track record, he doesn't exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the [OpenAI] board as much as he would love to portray it that way.
My Take:
This is a safe space, so I'll admit that years ago, when I first read Elon Musk describe his commitment to transforming the world for good with Tesla and SpaceX, I fell for it. But a war criminal once said, "if you fool me once, you can't get fooled again."
I’m not prepared to hand out any more “genius passes” to overly ambitious billionaires of dubious morality because:
1. There’s no excuse for Altman to have retained ownership of OpenAI’s startup fund, which promised tens of millions of dollars to startups, while telling the public he had no financial interest in the company.
2. He absolutely lied about copying Scarlett Johansson’s voice for the latest version of ChatGPT, and anyone who thinks the average person is that stupid cannot be trusted.
3. I don’t believe Altman gives a shit about safety. A pledge to give away his future billions is not even close to being enough to make him appear trustworthy.
📖 Must Read
Does A.I. Lead Police to Ignore Contradictory Evidence? by Eyal Press · The New Yorker
While I hope and believe these technologies will one day cure all cancers, right now they’re being used by lazy law enforcement officials to legitimize shoddy police work.
In January of 2020, a resident of Michigan named Robert Williams was arrested in front of his wife and children for allegedly stealing watches from a store in Detroit, after a facial-recognition search, based on a photograph extracted from surveillance footage, identified him as a person of interest. Williams did not commit the crime, as the police realized when he held up a picture of the shoplifter next to his face: among other things, Williams was larger than the thief and had a broader face. His ordeal, later recounted in the Times, was the first publicly known example of a case in which facial-recognition technology played a prominent role in a false arrest. Five similar cases have since been documented. In all of them, the suspect mistakenly brought into custody has been Black, raising concerns that the algorithms less accurately distinguish the faces of people with darker skin, reinforcing racial disparities in the criminal-justice system. In 2019, the National Institute of Standards and Technology, a federal agency, published a study revealing that many facial-recognition systems falsely identified Black and Asian faces between ten and a hundred times more frequently than Caucasian ones. Errors were also more common for women and elderly people.
5️⃣ Things I’ve Used AI For Recently
1. Transcribed audio from voice memos and clips (including some from Helen’s interview) with Otter.ai
2. Used Claude to turn the voice memo transcriptions into a single document of organized bullet points I could use to record a podcast
3. Brainstormed content features for this newsletter with Gemini Advanced, ChatGPT, and Claude
4. Summarized scholarly articles with ChatGPT and Claude
5. Brainstormed networking tips for a specific industry with Claude
If you’re still unsure what to use AI chatbots for, this is a good primer for Google’s Gemini.
That’s it, y’all.
Update: 05/30/24
I missed another *alleged* lie from Altman. This month, former OpenAI employees claimed the company tried to coerce them into silence by threatening to claw back their vested equity, or prevent them from selling it, if they refused to sign ultra-restrictive exit documents. Altman claimed he didn't know about these provisions in the separation agreements, but a Vox report uncovered his signature, multiple times, on incorporation documents that grant OpenAI the right to do exactly that. Apparently, these terms are highly unusual, so Altman either knew very little about how the company he’s led since 2019 manages equity agreements, or he’s a liar. You decide.