OpenAI CEO Sam Altman gave
an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg).
And when asked about this summer's launch of the
next version of ChatGPT, Altman said they hoped to "be thoughtful about how we do it, like we may release it in a different way than we've released previous models..."
Altman: One of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super-important part of our mission, and this idea that we build AI tools and make them super-widely available — free or, you know, not-that-expensive, whatever that is — so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future, and showering it down upon us. That seems like a much better path. It seems like a more inspiring path.
I also think it's where things are actually heading. So it makes me sad that we have not figured out how to make GPT-4-level technology available to free users. It's something we really want to do...
Q: It's just very expensive, I take it?
Altman: It's very expensive.
But Altman said later he's confident they'll be able to reduce cost.
Altman: I don't know, like, when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for, you know, a pretty high level of intelligence. It's important to us, it's clearly important to users, and it'll unlock a lot of stuff.
Altman also thinks there's "great roles for both" open-source and closed-source models, saying "We've open-sourced some stuff, we'll open-source more stuff in the future.
"But really, our mission is to build toward AGI, and to figure out how to broadly distribute its benefits... " Altman even said later that "A huge part of what we try to do is put the technology in the hands of people..."
Altman: The fact that we have so many people using a free version of ChatGPT that we don't — you know, we don't run ads on, we don't try to make money on it, we just put it out there because we want people to have these tools — I think has done a lot to provide a lot of value... But also to get the world really thoughtful about what's happening here. It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I am sure, like any other industry, I would expect there to be multiple approaches and different people like different ones.
Later Altman said he was "super-excited" about the possibility of an AI tutor that could reinvent how people learn, and "doing faster and better scientific discovery... that will be a triumph."
But at some point the discussion led him to where the power of AI intersects with the concept of a universal basic income:
Altman: Giving people money is not going to go solve all the problems. It is certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves.
Now that we see some of the ways that AI is developing, I wonder if there's better things to do than the traditional conceptualization of UBI. Like, I wonder — I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income, and everybody gets like a slice of GPT-7's compute, and they can use it, they can re-sell it, they can donate it to somebody to use for cancer research. But what you get is not dollars but this like slice — you own part of the productivity.
Altman was also asked about the "ouster" period where he was briefly fired from OpenAI — to which he gave a careful response:
Altman: I think there's always been culture clashes at — look, obviously not all of those board members are my favorite people in the world. But I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI...
I think a lot of the world is, understandably, very afraid of AGI, or very afraid of even current AI, and very excited about it — and even more afraid, and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. And, like a lot of stuff is going to change. And change is pretty uncomfortable for people. So there's a lot of pieces that we've got to get right...
I really care about AGI and think this is like the most interesting work in the world.