Sin Chew Daily · October 2023

Opportunity and Responsibility in the Age of AI

Yuan-Sen Ting / 丁源森

In this series, I've tried to demystify what I've learned about machine learning. This is the final installment. I hope it's helped ease some anxiety—but easing anxiety doesn't mean letting down our guard. In these last words, I want to discuss two challenges our society will inevitably face.

The Coming Automation Wave

The first is technological unemployment.

This isn't new. Every technological wave disrupts labor markets. In mid-twentieth-century Malaysia, farmers left their land for factories. More recently, internet-era automation swept through white-collar work.

Machine learning is simply the next wave. As I've discussed, large language models will keep improving. Many white-collar jobs will be partially automated. The question isn't whether displacement will happen, but how.

I have no policy magic bullet. But here's one perspective worth considering: the "better" a job looks, the more vulnerable it may be.

People assume assembly-line workers, with their precisely defined tasks and rigid environments, face the greatest risk. In fact, those hands-on jobs are among the hardest to automate. Language models excel at breadth and adaptability in language tasks, while machines that must act in the physical world still struggle with precision and dexterity.

Consider a medical receptionist who answers calls, explains procedures, and schedules appointments versus an orderly who physically moves patients. Which sounds more professional? Most would say the receptionist. But call handling is precisely what language models do well—that job is arguably more at risk. The physical demands of moving patients remain far beyond current technology.

Or think about this: language models can write passable articles; robots still can't reliably wash dishes. Which sounds more advanced? Obviously the former. But generating coherent paragraphs is literally what these models were built for—child's play. Making a robot handle the infinite variety of kitchens and dish shapes? That's orders of magnitude harder.

I'm not saying every manual job is safe—far from it. But societies need to rethink how they respond to automation. The standard corporate line—"this is just a reskilling problem"—may be dangerously naive, especially as the pace of change accelerates and workers have less time to adapt.

At the extreme, some propose universal basic income, though specifics remain vague. I suspect the real answer lies somewhere between doing nothing and restructuring society from scratch. Either way, there's no avoiding the conversation. The AI frenzy reflects genuine uncertainty, and real solutions will require experts and the public working together. As someone in the field, I share some responsibility for starting that dialogue.

When Seeing Is No Longer Believing

The second challenge is what I'll call truth distortion: when fake becomes indistinguishable from real.

AI-generated content grows more sophisticated by the month. Image generators produce photos people can't tell from reality. Generative models are expanding from text into voice and video. I believe any technological barrier between "real" and "fake" will eventually fall.

Philosophically, if something is indistinguishable from reality, is it still "fake"? That question is above my pay grade.

Let me focus on a narrower problem: deepfakes.

Our courts are built on evidence: seeing is believing. But what happens when video and audio become trivially easy to fabricate? One recent case gives me pause. Audio attributed to a politician leaked and went viral on social media. Months later, even expert analysts could not determine whether it was genuine.

I used to think authentication systems might help—metadata trails to certify where images came from. I'm no longer optimistic. AI can generate images with perfect synthetic metadata. In the race between certifiers and forgers, forgers have the structural advantage: they only need to win once.
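To make "metadata trails" concrete, here is a minimal Python sketch of what one certification scheme might look like. Everything here is illustrative, not any real standard's API: the key, the image bytes, and the function names are hypothetical, and real provenance efforts (C2PA, for example) use asymmetric signatures held by camera makers or publishers rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical certifier key -- in a deployed scheme this would be an
# asymmetric signing key, not a shared secret baked into a script.
CERTIFIER_KEY = b"example-secret-key"

def certify(image_bytes: bytes) -> str:
    """Produce a provenance tag: a keyed hash over the image bytes."""
    return hmac.new(CERTIFIER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Check a tag against an image, comparing in constant time."""
    return hmac.compare_digest(certify(image_bytes), tag)

# A certified image verifies...
photo = b"...raw image bytes..."
tag = certify(photo)
print(verify(photo, tag))   # True

# ...but the tag only proves "these bytes were signed by this key".
# A generated image never passes through the certifier, and nothing
# stops it from circulating untagged, or with fabricated
# non-cryptographic metadata that most viewers never check.
fake = b"...AI-generated image bytes..."
print(verify(fake, tag))    # False, yet the fake still spreads
```

The sketch shows the structural problem: verification can only tell you whether a certifier vouched for a file, not whether the file depicts reality, and it says nothing at all about the flood of content that carries no tag.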

We may be entering an era where absolute truth is unknowable.

This isn't hyperbole. Our legal systems, our governance structures—both assume verifiable evidence. What happens when that assumption collapses? How does society function?

Ironically, for science, this problem may be less devastating. We gave up on absolute truth long ago. Measurements carry uncertainties; conclusions are provisional. We're trained to deal with imperfect information.

If anything, the scientific mindset—embracing uncertainty, focusing on relationships rather than absolutes—may become essential for everyone.

What Comes Next

AI is at an inflection point. Humanity will gain new capabilities, but some things will be lost forever. Panic helps no one, but neither does complacency. Only by discussing our concerns openly can we shape what comes next.

I hope this series has offered some useful perspective. Thanks for reading.