Sin Chew Daily, September 2023

Becoming a Hexagonal Warrior in the AI Era

Yuan-Sen Ting / 丁源森

A few days ago, I was chatting with a colleague who works in machine learning. Someone nearby asked: "Technology's moving so fast, especially in language processing—will this change how we teach?" My colleague laughed. "Absolutely. I tell my students all the time: if you submit work without running it through ChatGPT first, that's just rude."

Ever since ChatGPT landed, everyone's been wringing their hands about the downsides. Endless debates. Detection tools. Academic integrity committees in emergency session.

I get it. Forget high school homework—take the notoriously brutal statistics and machine learning course I taught last semester. ChatGPT handled it easily. When I fed it my exam questions, it outperformed nearly ninety percent of my students.

But I don't think we should ban these tools.

Is This Plagiarism?

For one thing, banning them is nearly impossible. The nature of the technology makes it so.

As I've written before, machine learning is fundamentally different from internet search. These models compress and extract internal relationships between data points. When they generate content, they're sampling from those relationship maps—not retrieving and copying specific sources. Because the process is sampling rather than retrieval, the output can't really be called "plagiarism" in any meaningful sense. There's no specific source being copied. It's like an impromptu speech: even with the same train of thought, no two deliveries are identical.
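The sampling idea can be made concrete with a toy sketch. This is not how any real model works internally; the function `sample_next_word` and the hand-picked `next_word_weights` table are invented for illustration. The point is only that generation draws from a learned probability distribution, so two runs over the identical distribution can produce different text, which is why no specific source is being copied.

```python
import random

def sample_next_word(weights, temperature=1.0):
    # Toy sampler: sharpen or flatten the distribution with a
    # temperature exponent, then draw one word at random by weight.
    adjusted = [w ** (1.0 / temperature) for w in weights.values()]
    return random.choices(list(weights.keys()), weights=adjusted, k=1)[0]

# Invented "relationship map": relative likelihood of each word
# following the phrase "the sky is". Real models learn billions of
# such relationships; this table just stands in for one of them.
next_word_weights = {"blue": 0.6, "clear": 0.25, "falling": 0.1, "green": 0.05}

# Two generations from the identical distribution can still differ,
# like two deliveries of the same impromptu speech.
random.seed(1)
first = [sample_next_word(next_word_weights) for _ in range(5)]
random.seed(2)
second = [sample_next_word(next_word_weights) for _ in range(5)]
```

Every sampled word is plausible under the learned weights, yet no run retrieves a fixed answer from storage.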

So in my classes, I don't just allow these tools—I encourage students to go wild with them. Not using them, as my colleague put it, would be rude.

Rethinking Education

Should we worry that students will stop learning?

These tools will undoubtedly disrupt education. But that might actually be a good thing. Google Maps turned me into a directional disaster, sure. It also freed up mental bandwidth for skills that matter more.

The first casualty will be rote learning. Repetitive assignments have no chance against language models. We don't know exactly how education will evolve, but we can be confident the new model will emphasize problem-solving over regurgitation.

Traditional homework—multiple choice, fill-in-the-blank—is child's play for these systems. So I've shifted to research projects. Even when students use every tool available, they still need to decompose problems, evaluate AI suggestions, and catch errors. These projects better reflect real ability. Current language models can reason, but they struggle with lateral thinking. If students learn enough to guide the AI and narrow down its answers, the quality of their work improves dramatically.

Learning to collaborate with machines is no longer optional. It's essential.

Many top universities—Tsinghua, Harvard—now offer foundational AI courses to all incoming students, not just computer science majors. They teach not only how to use ChatGPT, but the underlying principles and how to fine-tune models. In my recent astrophysics course, even though most students weren't in CS, I dedicated an entire class to using and fine-tuning large language models.

The Generalist Strikes Back

Machine learning is white-hot right now. CS enrollment has nearly doubled every year for several years running. People occasionally ask me for advice on choosing majors. As someone who straddles astrophysics and computer science, here's my take.

If you're genuinely interested in computers, CS is fine. But understand that the curriculum goes far beyond machine learning—systems design, compilers, theory. For someone who just wants to work with AI, those courses aren't necessarily priorities. And as language models become ubiquitous, cross-disciplinary knowledge matters more than ever. If you don't understand astrophysics, your ability to get useful astrophysics answers from these models plummets.

Here's something you might not know: many leaders in machine learning today didn't come from CS. Key researchers behind GPT-3 and at Anthropic—OpenAI's main competitor—studied physics. Different backgrounds mean different perspectives. They're more likely to propose creative solutions.

Beyond the Industrial Cog

We're living in an era where generalists are making a comeback. Only through genuine understanding combined with hands-on application can these tools reach their potential. I consider this machine learning's greatest gift.

Think back to the industrial age. In pursuit of efficiency, people became cogs—small, replaceable, stuck repeating the same tasks in fixed positions. This model kept the machine running but gradually dehumanized us. Now, machine learning suggests that fixed, mechanical work will be the first to disappear. Both low-level grunt work and high-level specialist tasks are vulnerable.

Jobs will undergo major upheaval—there are real concerns worth discussing. But the people who thrive in this technological revolution won't be narrow specialists in ivory towers. They'll be "hexagonal warriors"—people willing to absorb new knowledge across multiple domains.

This is an era of transformation and open competition. Machine learning's low barrier to entry has blown the doors wide open. This isn't a superpower arms race like nuclear technology. Nations that cultivate cross-disciplinary generalists will pull ahead.

Is Malaysia's education system ready?