It’s always fun to watch a future case study unfold in real-time, isn’t it? Well, maybe not fun, but you know what I mean.
Meta’s Chief AI Scientist, Yann LeCun, has said that “In terms of underlying techniques, ChatGPT is not particularly innovative.” He adds that it is perceived as revolutionary because “it’s nicely done”.
He’s not entirely wrong. I have recently been speaking with people smarter than me about OpenAI, and they share LeCun’s assessment.
But the real case study lies in why ChatGPT has pierced the public consciousness.
If every big tech company has access to the same open-source research, why have they all sat back?
If ChatGPT is merely “nicely done”, why is OpenAI the only one to have done it?
“Google and Meta both have a lot to lose by putting out systems that make stuff up”
Today, we turn our focus to Google.
The company is getting the old band back together, so you know something big is on the way: it has called in co-founders Sergey Brin and Larry Page as part of a “Code Red” response to OpenAI’s rapid success.
AI scientist Jeff Dean wrote a lengthy post (>7,000 words) on Google’s AI blog that explained a lot of the company’s latest research.
I have two aims in my post today:
- Simplify the Google post into five main focus areas
- Predict what Google will do next in each of those areas
What is Google focused on in AI?
Dean writes that Google has a number of tasks in mind when it considers AI:
- Complex, information-seeking tasks.
- Creative tasks, like creating music, drawing new pictures, or creating videos.
- Analysis and synthesis tasks, like crafting new documents or emails from a few sentences of guidance.
- Partnering with people to jointly write software together.
- Complex mathematical or scientific problems.
- Translating the world’s information into any language.
- Diagnosing complex diseases, or understanding the physical world.