In contrast to Elon Musk, who said in April 2024, with respect to the end of 2025, “My guess is that we’ll have AI that is smarter than any one human probably around the end of next year [i.e., end of 2025],” here are my own predictions for where we will be at the end of this year:
High confidence
- We will not see artificial general intelligence this year, despite claims by Elon Musk to the contrary. (People will also continue to play games to weaken the definition or even try to define it in financial rather than scientific terms.)
- No single system will solve more than 4 of the AI 2027 Marcus-Brundage tasks by the end of 2025. I wouldn’t be shocked if none were reliably solved by the end of the year.
- Profits from AI models will continue to be modest or nonexistent. (Chip-making companies will continue to do well, though, supplying hardware to the companies that build the models; shovels will continue to sell well throughout the gold rush.)
- The US will continue to have very little regulation protecting its consumers from the risks of generative AI. When it comes to regulation, much of the world will increasingly look to Europe for guidance.
- AI Safety Institutes will also offer guidance, but have little legal authority to stop truly dangerous models should they arise.
- The lack of reliability will continue to haunt generative AI.
- Hallucinations (which should really be called confabulations) will continue to haunt generative AI.
- Reasoning flubs will continue to haunt generative AI.
- AI “Agents” will be endlessly hyped throughout 2025 but far from reliable, except possibly in very narrow use cases.
- Humanoid robotics will see a lot of hype, but nobody will release anything remotely as capable as Rosie the Robot. Motor control may be impressive, but situational awareness and cognitive flexibility will remain poor. (Rodney Brooks continues to make the same prediction.)
- OpenAI will continue to preview products months and perhaps years before they are solid and widely available at an accessible price. (For example, Sora was previewed in February and only rolled out in December, with restrictions on usage; the AI tutor demoed by Sal Khan in May 2024 is still not generally available; o3 has been previewed but not released, and will likely be quite expensive.)
- Few if any radiologists will be replaced by AI (contra Hinton’s infamous 2016 prediction).
Medium confidence
- Technical moats will continue to be elusive. Instead, there will be more convergence on broadly similar models, across both the US and China; some systems in Europe will catch up to roughly the same place.
- Few companies (and even fewer consumers) will adopt o3 at wide scale because of concerns about price and robustness relative to that price.
- Companies will continue to experiment with AI, but adoption of production-grade systems deployed at scale in the real world will continue to be tentative.
- 2025 could well be the year in which valuations for major AI companies start to fall. (Though, famously, “the market can remain irrational longer than you can remain solvent.”)
- Sora will continue to have trouble with physics. (Google’s Veo 2 seems to be better, but I have not been able to experiment with it, and I suspect that changes of state and the persistence of objects will still cause problems; a separate, not-yet-fully-released hybrid system called Genesis, which works on different principles, looks potentially interesting.)
- Neurosymbolic AI will become much more prominent.
Low confidence, but worth discussing
- We may well see a large-scale cyberattack in which Generative AI plays an important causal role, perhaps in one of the four ways discussed in a short essay of mine that will appear shortly in Politico.
- There could continue to be no “GPT-5 level” model (meaning a huge, across-the-board quantum leap forward, as judged by community consensus) throughout 2025. Instead we may see models like o1 that are quite good at many tasks for which high-quality synthetic data can be created, but only incrementally better than GPT-4 in other domains.
This article is republished from Marcus on AI under a Creative Commons license. Read the original article.