On AI and The Soul-Stirring Char Siu Rice
Before the internet took over our lives, every Lunar New Year meant the same thing: TV2 would air Stephen Chow's God of Cookery, and I would watch it again whether I wanted to or not. The climax is a cooking duel. The villain unveils Buddha Jumps Over the Wall—an extravagant dish, premium ingredients, technique on full display. Chow responds with something almost insultingly simple: char siu rice. He calls it "Soul-Stirring Char Siu Rice." No flash, no showmanship, just the thing itself, done right. The Soul-Stirring Char Siu Rice wins. The judges weep.
Let me use this char siu rice to explain how AI actually learns.
The Limits of Cooking by the Book
Traditional programming works like following a recipe. Computers execute exactly what humans tell them. A supermarket cash register scans Thai jasmine rice—charge X ringgit. Scans Bin Bin crackers—charge Y ringgit. Explicit, mechanical, utterly predictable.
This is learning by rote, like memorizing a textbook word for word. And it has obvious limits. First, it falls apart when the world gets complicated. Buddha Jumps Over the Wall has dozens of ingredients and a hundred steps, but the dish is still fixed—the recipe can't adapt to new ingredients or improvise when something's missing. Second, most real knowledge doesn't come pre-labeled. A YouTube video contains information, but it's woven into the fabric of the content, impossible to extract with explicit rules.
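To make that concrete, here is a toy sketch of the cash register in Python. The item names and prices are invented; the point is that every rule is written out by a human, and the program has nothing to say about anything the programmer did not anticipate.

```python
# A toy rule-based "cash register": every price is spelled out by a human.
# Item names and prices are invented for illustration.
PRICES_RM = {
    "thai jasmine rice 5kg": 28.50,
    "bin bin rice crackers": 4.20,
}

def scan(item: str) -> float:
    """Return the price exactly as programmed; there is no improvising."""
    if item not in PRICES_RM:
        raise KeyError(f"No rule for {item!r}")
    return PRICES_RM[item]

print(scan("bin bin rice crackers"))  # 4.2
```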
Machine learning takes a different approach. Instead of following instructions, it finds patterns in data on its own. This isn't a new idea—Turing and others were thinking along these lines in the 1950s.
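A minimal sketch of the difference, again with invented numbers: nobody writes the pricing rule down this time. The program sees a handful of examples, infers the pattern, and applies it to a weight it has never seen.

```python
import numpy as np

# Invented data that roughly follows "price = 5.7 ringgit per kg".
# The program is never told that rule; it is only shown examples.
weights_kg = np.array([1.0, 2.0, 5.0, 10.0])
prices_rm  = np.array([5.8, 11.3, 28.5, 57.1])

# Fit a straight line to the examples: the machine finds the pattern itself.
slope, intercept = np.polyfit(weights_kg, prices_rm, deg=1)

# The learned rule generalizes to a weight it has never seen.
print(round(slope * 3.0 + intercept, 2))  # roughly 17.1
```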
Laksa Minus Penang Plus Ipoh
When I was a kid, my mom worried I was grinding myself down with schoolwork, so she signed me up for a memory training class. I still remember the core lessons. First: don't memorize raw facts—compress them into mental maps. Second: connect everything through association.
Modern machine learning works the same way. A model builds its own internal map of relationships, compressing oceans of information into a web of associations. Those relationships are what enable generalization: learn one thing, apply it to many.
Here's a famous example. Type "King minus Man plus Woman" into ChatGPT. It returns "Queen." The machine isn't reciting a dictionary; it's learned that "King" decomposes into something like "royalty + male." Swap male for female, and you get "Queen."
Try this: "Laksa minus Penang plus Ipoh." The answer is "hor fun." Strip away Penang's signature dish, swap in Ipoh, and the model lands on Ipoh's: it has picked up the relationship between a city and its signature dish without anyone spelling it out.
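Here is a rough sketch of the arithmetic behind that trick, with two-dimensional vectors invented for illustration. A real model learns hundreds or thousands of dimensions and never labels any of them, but the principle is the same.

```python
import numpy as np

# Toy word vectors, invented for illustration. Pretend the two dimensions
# stand for learned relationships like [royalty, maleness]; a real model
# learns thousands of dimensions and labels none of them.
vectors = {
    "king":     np.array([0.9,  0.9]),
    "queen":    np.array([0.9, -0.9]),
    "prince":   np.array([0.7,  0.9]),
    "princess": np.array([0.7, -0.9]),
    "man":      np.array([0.1,  0.9]),
    "woman":    np.array([0.1, -0.9]),
}

def closest(target, exclude):
    """Return the word whose vector points in the most similar direction."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], target))

result = vectors["king"] - vectors["man"] + vectors["woman"]
print(closest(result, exclude={"king", "man", "woman"}))  # queen
```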
How Brains Actually Work
In a sense, machine learning is compression—distilling the chaos of the world into structured relationships. That compressed structure is what makes machines look intelligent. And it's strikingly similar to how our own minds work.
Most people assume eyes function like cameras, faithfully recording every pixel. But if our brains actually processed complete images in real time, the energy cost would be crippling. Instead, we receive fragments and reconstruct the whole from internalized models. We see what we expect to see.
This is why optical illusions work. Two people of identical height can look wildly different sizes depending on the background. Our brains apply learned rules to fill the gaps, and when those rules are deliberately broken, perception fails.
Remember AlphaGo versus Lee Sedol? The machine won four of the five games. But in game four, Lee Sedol played a move so unexpected it fell outside AlphaGo's learned patterns—like showing the machine an optical illusion. It stumbled, made several bad moves in a row, and humanity clawed back its single victory. Same principle.
The Bitter Lesson
Building these internal knowledge structures has always been machine learning's central challenge, and for a long time, we did it badly.
Before deep learning, we were confident that human expertise should guide the machines. Chess programs were stuffed with hand-crafted rules. There was learning involved, sure, but it was Buddha Jumps Over the Wall—we chose the ingredients, we wrote the recipe, we assumed we knew best.
Rich Sutton, one of the founding figures of reinforcement learning, wrote a famous essay called "The Bitter Lesson." His point: we keep assuming our human intuitions are optimal, and we keep being wrong. It's hubris.
The biggest breakthroughs of the last decade—AlphaZero, ChatGPT—all followed a different philosophy. Call it the Soul-Stirring Char Siu Rice approach: step back, let the machine learn from raw data, intervene as little as possible. Sutton's argument is simple. Computing power grows exponentially. Betting on machines learning for themselves is betting on the arc of history.
I know that "self-learning machines" might sound ominous. But as I mentioned last time, this kind of autonomy has nothing to do with consciousness or self-awareness.
These models are just parsing, sorting, and linking information—exactly like the King-Queen example. Every time you use Google or scroll through TikTok, you're already immersed in this kind of processing.
So instead of worrying about something you don't fully understand, maybe just sit back and enjoy a soul-stirring char siu rice.