Altman discussed the hype surrounding the unannounced GPT-4 in a recent interview, but refused to confirm whether the model will even be released this year.
OpenAI CEO Sam Altman has responded to rumours about GPT-4, the company’s unreleased language model and the latest in the GPT series that serves as the foundation for the AI chatbot ChatGPT, saying that “people are begging to be disappointed, and they will be.”
Altman was asked during an interview with StrictlyVC whether GPT-4 will be released in the first quarter or half of the year, as many anticipate. He declined to give a timetable. “It will come out when we are confident that we can do it safely and responsibly,” he said.
GPT-3 was released in 2020, and ChatGPT was built using an improved version, GPT-3.5. The release of GPT-4 is highly anticipated, with some members of the AI community and the Silicon Valley world already declaring it to be a significant step forward. Making wild guesses about GPT-4’s capabilities has become something of a meme in these circles, especially when it comes to estimating the model’s number of parameters (a metric that roughly tracks an AI system’s complexity and capability, though not in a linear fashion).
Altman called one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion) “complete bullshit.”
“The GPT-4 rumour mill is an absurd thing. I’m not sure where it all comes from,” said the CEO of OpenAI. “People beg to be disappointed, and they will be. We don’t have an actual AGI, and that’s sort of what’s expected of us.”
(The term “artificial general intelligence” (AGI) refers to an AI system that has at least human-equivalent capabilities across many domains.)
According to Altman, video-generating AI models are on the way.
Altman discussed a variety of topics in the interview, including when OpenAI will build an AI model capable of generating video. (Research in this area has already been demonstrated by Meta and Google.) “It will happen. I wouldn’t want to say when with certainty,” Altman said of generative video AI. “We’ll try to do it, and others will try to do it… It’s an official research project. It could be soon or it could take a while.”
The full interview is available in two parts, here and here (with the second focusing more on OpenAI the company and AI in general), but we’ve highlighted some of Altman’s most notable statements below:
- “Not much. We’re running late,” he says of OpenAI’s current revenue.
- On the need for AI systems that accommodate different perspectives: “The world can say, ‘Okay, here are the rules, here are the very broad absolute rules of a system.’ However, within that framework, people should be free to programme their AI to do whatever they want. You should get the super never-offend, safe-for-work model, and you should get an edgier one that is creative and exploratory but says some things you might not be comfortable with, or some people might not be comfortable with. And I believe that many systems around the world will have different settings for the values they enforce. And really, what I believe — but this will take more time — is that you, as a user, should be able to write a few pages of ‘here’s what I want; here are my values; here’s how I want the AI to behave,’ and it should read it, think about it, and act exactly how you want because it should be your AI.”
(This point is notable given ongoing conversations about AI and bias. Many social biases, such as sexism and racism, are internalised by systems like ChatGPT based on their training data. Companies such as OpenAI attempt to mitigate biases by preventing systems from repeating these ideas. Some conservative writers, however, have accused ChatGPT of being “woke” because of its responses to specific political and cultural questions.)
“Generated text is something we all need to adapt to, and that’s fine.”
- On AI changing education and the threat of AI plagiarism: “We’re going to try and do some things in the short term. There may be ways to make teachers more likely to detect output from a GPT-like system, but a determined person will find a way around them, and I don’t think society can or should rely on it in the long run. We’ve just entered a new world. We must all adapt to generated text, which is perfectly fine. I imagine we adapted to calculators and changed what we tested in math class. Without a doubt, this is a more extreme version of that. However, the benefits are more extreme as well.”
- “I would much rather have ChatGPT teach me something than go read a textbook,” he says of his own use of ChatGPT.
- “The closer we get, the harder it is for me to answer, because I believe it will be a much more hazy and gradual transition than people anticipate,” he says of when AGI might arrive.
- “I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong. I think people forget they have the opportunity to make a countermove here, and they’re pretty smart and competent. I believe there will be a change in search at some point, but not as dramatically as people believe in the short term,” he says of predictions that ChatGPT will kill Google.