Apple just released its first Apple Intelligence features and launched new AI-optimized Macs. But despite all the hype, the technology's intelligence has clear limits, and one of them was exposed by a recent experiment from Apple's own AI researchers.
Testing AI Capabilities
Last month, a team of Apple researchers published a new paper about a key limitation of artificial intelligence.
Michael Hiltzik writes for The Los Angeles Times:
Try this math problem:
On Friday, Oliver picks 44 kiwis. Then on Saturday, he picks 58 kiwis. On Sunday, he picks twice as many kiwis as he did on Friday, but five of them were slightly smaller than average. How many kiwis does Oliver have?
If you answered 190, congratulations: you did as well as the average elementary school student. (44 kiwis on Friday, plus 58 on Saturday, plus 88 on Sunday, twice Friday's 44, equals 190.)
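The arithmetic above can be spelled out in a few lines; the key point is that the detail about five smaller kiwis has no bearing on the total:

```python
# The kiwi problem from Apple's test. The note that five of Sunday's
# kiwis were slightly smaller is irrelevant to the count.
friday = 44
saturday = 58
sunday = 2 * friday  # "twice as many kiwis as he did on Friday"
total = friday + saturday + sunday
print(total)  # 190
```

The models Apple tested tended to act on the irrelevant detail, for example subtracting the five smaller kiwis, and so arrived at the wrong total.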
You also outperformed more than 20 state-of-the-art AI models tested by Apple's AI research team. They found that the AI bots were consistently wrong.
The research paper explains that the best and brightest LLM models showed a “catastrophic drop in performance” when trying to answer simple math problems that were written in this way.
This occurred primarily when the problems included irrelevant details, the kind even schoolchildren quickly learn to ignore, which calls into question the current reasoning capabilities of AI.
Apple's AI Research Finds 'Intelligence' Is Not What It Seems
Based on the full battery of tests in Apple's research, the paper concludes that current AI models are “incapable of genuine logical reasoning.”
This may be something we're generally aware of, but it's an important caveat as more and more trust is placed in the “intelligence” of AI.
Top comment:
Grady Booch, the father of UML, has been saying this for years. LLMs are not intelligent and never will be, although they can become large and complex enough to simulate it. The problem isn't really the amount of data you feed them, it's the underlying architecture. LLMs are based on probability, not logic and understanding.
AI optimists might suggest that the problem is easily solved, but the Apple team disagrees. “Can scaling data, models, or computation fundamentally solve this problem? We don't think so!”
Ultimately, Apple's paper isn't meant to dampen enthusiasm for the possibilities of AI, but rather to provide a measure of common sense.
AI can perform some tasks as if it were extremely intelligent, but in many ways that “intelligence” is not what it seems.
What do you think of Apple's AI research findings? Let us know in the comments.