If you read my CV, it tells a clean story: IT audit to data science to AI leadership. It looks intentional — a professional who spotted the data revolution early and positioned himself accordingly.
Here’s what actually happened.
I started in IT audit at PwC. System controls, user access reviews, change management — the kind of work that keeps organisations running but rarely makes anyone’s pulse quicken. The future looked like becoming a deeper specialist in the same kind of work, which in Hong Kong is a career with a visible ceiling. I wasn’t pulled toward data science. I was pushed away from where I was.
The Harvard Business Review called data science “the sexiest job of the 21st century” right around when I graduated. I’d love to tell you I independently arrived at the same conclusion through rigorous analysis. I didn’t. I read the article. It sounded exciting. I started learning on the side.
I was genuinely interested — I’ll give myself that much. I found the algorithms fascinating. Spent evenings studying random forests, SVMs, neural networks, the whole zoo. I thought the elegance of the mathematics was the point. Then I got into practice and discovered that XGBoost ate the world. You could throw it at almost any tabular problem and it would win. The beautiful algorithms I’d studied were mostly academic. The job was feature engineering, data cleaning, and explaining to stakeholders why the model said what it said.
I adjusted. Moved to DBS, found a way to apply ML within internal audit — anti-money-laundering (AML) models, anomaly detection. Moved to CNCBI, built a data science function from scratch. Each move made sense at the time. None of them were part of a plan.
Then LLMs happened, and I got it wrong again.
When a colleague showed me ChatGPT in early 2023, my honest reaction was: this is just another NLP thing. Probably hype. I’d seen enough AI hype cycles to be sceptical. A friend — Simon — saw it faster than I did. He was already building with it while I was still dismissing it. By the time I caught up, I’d lost months.
And underneath the LLM misjudgment was a quieter doubt I’d been carrying since DBS: does most of this actually matter?
The dirty secret of ML in enterprise is that genuinely high-ROI use cases are a short list. Fraud detection, AML, credit scoring, recommendation engines, pricing — after that it gets thin. A lot of “ML projects” are dashboards with a model somewhere that nobody trusts enough to act on. At DBS, the AML work was real. But plenty of other ML initiatives across the bank were solutions looking for problems. At CNCBI, I shipped two things that genuinely changed how people worked — and spent plenty of time on vendor evaluations and platform strategies that were partly about looking modern.
Gen AI has made this tension sharper, not smaller. As a personal productivity tool, it’s genuinely transformative — I use it every day and it makes me measurably faster. But enterprise gen AI? I’m less sure. The architecture makes sense in theory — agents, multi-agent workflows, RAG over internal knowledge — and I believe it’s technically doable. But “technically doable” and “ROI justified” are different questions, and most organisations haven’t honestly answered the second one. We might be building cathedrals where a chapel would do.
I say this as someone about to start a consulting role selling AI solutions. The irony isn’t lost on me. But I think the honest version of that job is knowing which cathedrals are worth building and which clients just need a chapel — and having the credibility to say so.
The uncomfortable pattern: my career has been shaped more by wrong judgments and course corrections than by right calls. I was wrong about IT audit being fine. Wrong about elegant algorithms mattering. Wrong about LLMs being hype. Wrong, probably, about some of the enterprise AI work I’ve done being as impactful as I told myself. Each time, I adjusted — but the adjustment came after the mistake, not before it.
People talk about careers like optimisation problems. Find the right track, build the right skills, maximise the right metric. But you can’t A/B test your life. You can’t run a counterfactual where you stayed in IT audit, or pivoted to LLMs six months earlier, or skipped data science entirely for software engineering. You get one run, with incomplete information, and you make the next reasonable move.
Looking back, the thing that actually worked wasn’t judgment — it was the willingness to move when something felt wrong. The push mattered more than the pull. I didn’t know where I was going. I just knew I couldn’t stay where I was.
That’s less inspiring than “I saw the future and positioned myself.” But I think it’s more useful, because it’s actually replicable. You don’t need to be visionary. You need to pay attention to when something isn’t working, be honest about it, and adjust before the cost of staying exceeds the cost of moving.
My judgment has been wrong at every major turn. My willingness to adjust has been right. I’ll take that trade.