Last month, I attended a machine learning conference in Hawaii. One evening, I wrapped up early and wandered into a Chinese restaurant to grab some food. I happened to be the only customer, so the owner and I got to chatting. She asked if I was there for that machine learning conference. I nodded. She kept chopping vegetables, half-smiling: "You young people and your AI—you're going to be the death of us all, you know."
The Breakneck Pace of AI
Since ChatGPT exploded onto the scene, suddenly everyone's paying attention to AI and machine learning. As someone who actually works in this field, I welcome the interest. And honestly, what these models can do with language right now is genuinely stunning. There's so much we still don't understand, and we need more young people diving in.
So yes, the attention is good.
But here's the thing: the pace of change is so disorienting that many people can't keep up. This breeds anxiety. First, there's the worry about jobs disappearing. Second, there's something almost nuclear-crisis-level about the whole thing—major powers are pouring money into AI like there's no tomorrow, and given the geopolitical climate, that's not entirely reassuring. Third, there's the restaurant owner's "Terminator" scenario: 2023 as year zero for Skynet, machines gaining consciousness, Arnold Schwarzenegger descending from the heavens, pew pew pew, humanity extinct.
I won't dismiss these concerns outright. Machine learning is moving so fast that even those of us working in the field can barely keep up. What seems impossible today might look completely different in five years, and I'll be eating my words.
But from where I'm sitting right now, much of this fear is overblown. Over the next few columns, I want to share what I actually know about machine learning. Maybe understanding the reality will ease some of that anxiety.
Let's start with the consciousness question.
What Machine Learning Actually Is
First, a confession: most of us in the field don't love the term "AI." "Artificial intelligence" suggests a machine that is genuinely intelligent, and current technology isn't, not in the way people imagine. We prefer "machine learning." At its core, it's simply about letting machines find patterns in data.
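A minimal sketch of what "finding patterns in data" can look like, using only NumPy and some invented noisy measurements (the data and numbers here are made up for illustration): the machine is handed example inputs and outputs, and it recovers the rule connecting them.

```python
import numpy as np

# Invented data: the outputs roughly follow y = 2x + 1, plus noise.
rng = np.random.default_rng(0)
x = np.arange(0, 10, 0.5)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

# "Learning" here is plain least-squares: find the line that best fits the examples.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned pattern: y = {slope:.2f} * x + {intercept:.2f}")

# The recovered pattern extends to inputs the machine never saw.
print("prediction for x = 20:", slope * 20 + intercept)
```

That is the whole trick, scaled up: bigger models and vastly more data, but still patterns extracted from examples.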
Machine learning has three main branches. Natural Language Processing: that's ChatGPT's domain. Computer Vision: think facial recognition and self-driving cars. And Reinforcement Learning: remember AlphaGo from a few years back? Its more practical applications, though, are in robotics.
Regardless of the flavor, the core idea is the same: machines learning patterns from data. And since most training data comes from human-generated content—our texts, images, videos—machines are essentially learning human patterns. Natural language processing, for instance, is fundamentally about prediction: given the start of a sentence, what word comes next?
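To make "what word comes next" concrete, here is a toy sketch of next-word prediction: a bigram counter over an invented three-sentence corpus. Everything here is made up for illustration; a real language model replaces the counting with a neural network trained on vast amounts of text, but the objective, predicting the next word, is the same.

```python
from collections import Counter, defaultdict

# An invented toy corpus; real models train on billions of sentences.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat': it follows 'the' more often than anything else
print(predict_next("dog"))  # 'sat': the only continuation ever observed
```

Generate text by feeding each prediction back in as the next input and you get crude but sentence-like output; large models do far better by learning much richer statistics, but the prediction game is the same.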
Occasionally someone will claim that some AI model exhibits "human-like" qualities. This view doesn't get much traction in academia. The truth is, the barrier to entry for machine learning isn't that high. Yes, training massive models requires enormous computing resources, but the principles themselves aren't hard to grasp. Any undergraduate who pays attention can understand the basics and build their own model. (This is very different from, say, building an atomic bomb.)
Machine Thought: Real or Illusion?
I get it. If you've used ChatGPT, you've been impressed. It's not hard to see why some people wonder if these language models are conscious.
But here's where we might have a blind spot. Language models are just models.
The paradox is this: for most people, language feels like humanity's crown jewel. Our communication, our societies, our cultures—all built on language. Complex language is what separates us from other animals. So when a machine speaks as fluently as a human, the shock runs deeper than seeing a humanoid robot stumble around.
But consider the timeline. The parts of the cortex that process our language evolved over roughly the last million years. Our other mammalian functions, movement and vision among them, have been around for hundreds of millions of years. If you compressed mammalian evolution into a single day, language shows up in the final fifteen minutes.
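For what it's worth, the arithmetic behind that image, taking mammalian history as a round 100 million years and language as the last one million (both figures are rough):

$$\frac{1\ \text{million years}}{100\ \text{million years}} \times 24\ \text{h} \times 60\ \tfrac{\text{min}}{\text{h}} \approx 14\ \text{min},$$

which rounds to the final fifteen minutes of the day.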
Precisely because language is evolutionarily recent, it's actually simpler to replicate in machines than our other abilities. This explains why natural language processing has leapt ahead of computer vision and robotics. Those fields are still struggling with problems that babies solve effortlessly.
So how far are machines from consciousness? Well, the question itself is a bit fuzzy: philosophers have been arguing about what consciousness even means for centuries, and neuroscientists have lately joined them, without resolution. Whether machines can be conscious may depend entirely on how we define the term. But even by the "duck test" (if it looks like a duck, walks like a duck, and quacks like a duck), machine learning still has a very long way to go.
We overestimate language. Yes, language and logic are deeply connected—that's why ChatGPT can solve math problems. But human consciousness is far more than language and logic. Current machine learning has mastered maybe the last fifteen minutes of our evolutionary day.
In a way, ChatGPT's arrival is a wake-up call. Maybe the abilities we're most proud of—our logic, our language—are actually easier to replicate than wiggling a finger or blinking an eye. In Hawaii, I was chatting with a colleague, a rising star in mathematics. He told me: "I honestly don't know how much of my work as a mathematician will eventually be replaced by language models. But I'm pretty sure my job will be automated before a cleaning lady's."
Forget about Arnold Schwarzenegger. As my mom likes to say: "Why is my Roomba so useless?"