OpenAI's o1: The AI That Thinks | EP 35
Description
In this episode of UNinvested, host Sahil dives into the groundbreaking release of OpenAI's new model, o1, which marks a significant leap in artificial intelligence by introducing true reasoning capabilities. Sahil breaks down how o1 differs from previous AI models and explores the profound implications this advancement holds for various industries.
What we explore:
The limitations of older AI models and their pattern-matching approach
How OpenAI's o1 model introduces "thinking" through chain-of-thought processing
The "strawberry" example demonstrating o1's step-by-step reasoning
The impact of o1's human-like reasoning on real-world applications
The future of AI in thoughtful problem-solving and collaboration with humans
Where to find UNinvested:
Website: https://www.uninvested.org/
In this episode, we cover:
[00:00] Sahil introduces the "strawberry" experiment to illustrate AI limitations
[00:25] Discussion on OpenAI's o1 model release and its ability to "think"
[01:25] Explanation of how o1 uses chain-of-thought processing to think
[01:46] How o1 correctly counts the 'R's in "strawberry" by reasoning step by step
[02:08] o1's reasoning process feels more human-like
[02:31] Why o1 is a game changer in AI development
[03:03] Real-world applications of o1 in various industries
[04:00] The shift in how AI assists us, focusing on better answers
[04:24] Limitations of o1 and the future direction of AI
[04:46] Sahil concludes the episode and thanks listeners
Go into ChatGPT and ask how many R's are in the word 'strawberry.' You'll most likely get the answer '2,' which is wrong, by the way; it should be 3. But if you toggle the little dropdown in the top left corner and change the model to o1, you'll get the correct answer. And that small jump, from counting 2 R's to counting 3, represents the single biggest innovation in AI to date. Let's get into it.
[Intro Music]
On September 12, 2024, OpenAI released their new model, o1, which now has the ability to "think." And how exactly does an AI think? Well, to understand that, we first need to talk about how AI models worked before.
Older AI models are like really fast guessers: all they do is take your input and look for patterns in what the model has previously learned. Before it makes that guess, it takes your input and "tokenizes" it, which, in simpler terms, means the model splits the text into words or parts of words and maps each one to a numerical ID. So, the AI model never sees the word "strawberry" when you type it into ChatGPT; it only sees those numbers, and based on them it makes its guess. Crazy, right?
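To make that concrete, here is a minimal sketch in Python using tiktoken, OpenAI's open-source tokenizer library. The exact token IDs depend on the encoding you pick; the point is just that the model receives numbers, not letters.

# Minimal tokenization sketch using OpenAI's open-source tiktoken library.
# The exact IDs depend on the encoding; what matters is that the model
# receives a short list of integers, not the letters s-t-r-a-w-b-e-r-r-y.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
print(token_ids)

# Each ID maps back to a chunk of text, not to a single character.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))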
You can try this with just about any other AI model out there. If it isn't OpenAI's new o1 model, it will most likely get it wrong.
So what makes o1 different? In technical terms, it uses chain-of-thought processing. In plain language that you and I understand, it "thinks."
First, the AI model breaks down the problem into smaller steps and then reasons through each step. It’s like going down a path, solving one piece of the puzzle at a time, and then putting everything together for the final answer.
So, back to our “strawberry” example. Instead of just guessing how many R’s there are, o1 actually takes the time to check each letter, step by step. It counts one R, then another, and finally the third one. It’s like when you slowly read through a word to make sure you don’t miss any details. And guess what? You get the correct answer—3.
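If you wanted to mimic that careful, letter-by-letter check in code instead of guessing, it would look something like this short Python sketch. It is purely an illustration of the step-by-step idea, not how o1 is implemented internally.

# Illustration only: counting the R's in "strawberry" one letter at a time,
# mirroring the step-by-step checking described above. This is not how o1
# works internally; it just shows what a careful, stepwise count looks like.
word = "strawberry"
count = 0

for position, letter in enumerate(word, start=1):
    if letter.lower() == "r":
        count += 1
        print(f"Step {position}: '{letter}' is an R (running total: {count})")
    else:
        print(f"Step {position}: '{letter}' is not an R")

print(f"Final answer: {count} R's")  # prints: Final answer: 3 R's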
To me, this process feels more human-like. Every innovation in AI brings it closer to thinking and reasoning like a person. If you try o1 in ChatGPT, you can even see the model's thought process. After it gives you an answer, you can go back and review the steps it followed to get there.
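If you would rather try it programmatically, here is a minimal sketch using the official OpenAI Python SDK. I'm assuming an API key is configured and that a reasoning model named "o1-preview" is available to your account; the API gives you the final answer, while the step-by-step summary is what you browse in the ChatGPT interface.

# Minimal sketch: asking an o1-style reasoning model the strawberry question
# through the official OpenAI Python SDK. Assumes the OPENAI_API_KEY environment
# variable is set and that the "o1-preview" model is available to your account.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "How many R's are in the word 'strawberry'?"}
    ],
)

# The response contains the final answer; the reasoning summary is shown in
# the ChatGPT interface rather than returned here.
print(response.choices[0].message.content)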
And that’s exactly what makes o1 such a game changer.
We’ve always wanted AI to feel more intuitive, to be able to follow along as we reason through things step by step. Think about it this way—when you’re trying to solve a tough problem, like putting together a piece of furniture or figuring out a complicated math equation, you don’t just take a wild guess and hope for the best. You think about each step carefully, working through it piece by piece until you get the right answer.
That’s what o1 is doing. It’s no longer just matching patterns—it’s actually processing the problem in a way that feels more deliberate, almost human-like.
Now, why does this matter in the real world? Well, this kind of reasoning is useful in so many areas. For example, in healthcare, doctors often need to work through complex diagnoses. An older AI model might just spit out the most common diagnosis based on patterns it has seen before. With o1, the AI can take a more thoughtful approach: it can look at the symptoms one by one, consider the possibilities, and give a better, more accurate diagnosis.
The same goes for other fields like law, finance, or even education. In finance, an AI model could help you figure out investment strategies by reasoning through market conditions and trends step by step. In law, it could assist lawyers by analyzing complex legal documents and breaking down each part to find the most relevant information. And in education, o1 could become a tutor that helps students work through problems more thoroughly, giving them better guidance along the way.
So, what we’re looking at here is a shift in how AI helps us. It’s not just about answering questions faster; it’s about answering them better.
But, of course, like any new technology, o1 still has its limitations. It's great at breaking down problems and reasoning through them, but it’s still not perfect. There will be times when the AI gets things wrong, or when the reasoning process might be a little off. After all, it’s still learning, and while it’s closer to human reasoning, it’s not quite there yet.
The exciting part is that we’re now entering a new era of AI—one where it’s not just about big data and fast answers, but about thoughtful problem-solving. This brings us closer to creating AI systems that can work alongside us in a more meaningful way, helping us tackle complex challenges in a way that feels natural and, most importantly, useful.
This is a personal podcast. Any views or opinions represented in this episode are personal and do not represent those of people, institutions, or organizations we may or may not be associated with in a professional or personal capacity.
The views expressed are for entertainment purposes only and not to be misinterpreted as actionable investment advice.