“When AI makes decisions based on personal data—without transparency or accountability—it crosses into dangerous territory,” said Alvin Toh, Co-Founder and Chief Marketing Officer of Straits Interactive, a regional authority on data and AI governance. “You’re no longer just automating processes. You’re making calls on people’s lives, often invisibly,” he said as he shared his thoughts with Malaysia SME’s Aileen Anthony.

Key Takeaways

  • AI Governance Is No Longer Optional: As AI adoption accelerates, governance around its use—especially when handling personal data—is becoming critical. Regulations are emerging globally and regionally, and SMEs must start preparing now to remain compliant and competitive.
  • Roles Matter—Developer, Deployer, User: Regulators differentiate between those who develop, deploy, or use AI. Each has distinct responsibilities, but all must be aware of the risks—particularly when working with high-risk AI like facial recognition or automated decision-making.
  • Human Rights Are at the Heart of Regulation: AI governance, like data protection before it, is grounded in the protection of human rights. The misuse of personal data can reinforce bias, discrimination, and surveillance—making governance a moral as well as legal necessity.
  • The Data Protection Officer (DPO) Is Your Starting Point: With Malaysia’s new PDPA mandating a DPO, SMEs already have a foundational role in place. Upskilling this role to include data governance and AI oversight is a practical and strategic move.
  • Good Governance Is a Business Advantage: For SMEs eyeing growth, investment, or partnerships with larger firms, demonstrating AI and data governance is increasingly part of due diligence. Those who get ahead of compliance can gain trust, credibility, and market edge.

Governing Responsibilities

In the rush to harness artificial intelligence (AI), attention is turning not just to what it can do, but to who is responsible when things go wrong—and how those responsibilities are governed.

AI, once the preserve of big tech and research labs, is rapidly filtering into the everyday tools of businesses across Asia. From recruitment and marketing to customer service and analytics, small and medium enterprises (SMEs) are already relying on AI more than they may realise.

Yet with that convenience comes complexity—and risk. At the heart of this tension is AI governance, a discipline still taking shape but increasingly indispensable.

Why AI Governance Matters Now

The shift from awe to awareness is happening quickly. According to a recent Deloitte report, 78% of SMEs in APAC are already using at least one AI-enabled tool, with 82% planning to adopt more. Hence, governance must catch up.

“Many SMEs are already exposed to risks they don’t fully understand,” said Toh. “And as regulations emerge, ignorance won’t be a defence.”

This is especially true in sectors dealing with sensitive data—finance, healthcare, education, and HR among them. But even outside these verticals, AI is often deployed in ways that warrant scrutiny: recommendation engines, automated CV scanning, customer profiling.

“We’ve seen cases where AI tools unknowingly introduce bias,” Toh explained. “Like one infamous incident where a recruitment bot disproportionately rejected female candidates because it had been trained on past hiring data that was male-dominated. It wasn’t malicious—it was poorly governed.”

The Regulatory Wave Is Coming

Globally, the European Union’s AI Act has set the benchmark for regulation. Categorising AI systems by risk level, the EU framework imposes strict requirements on developers, deployers, and users—especially where AI touches on human rights or personal data. (See sidebar on developers, deployers, and users.)

“Think facial recognition, credit scoring, behavioural analysis—these are considered high-risk,” Toh said. “If you’re using AI in these ways, you’ll face the heaviest scrutiny.”

Asia is not far behind. China has already implemented AI regulations, South Korea’s act comes into force in early 2026, and Japan and Australia have tabled legislation.

Singapore, meanwhile, is leading the charge in ASEAN with its Model AI Governance Framework, which has been recognised internationally and shared with the EU.

Even without binding laws yet, these frameworks are shaping expectations.

“Many of them emphasise the same core principles—privacy, transparency, cybersecurity, fairness, and national security,” said Toh. “The language may vary, but the direction shares common ground.”

The ASEAN Context: Light Touch, But Serious Intent

Within ASEAN, six countries have already announced national AI strategies. While the pace varies, governance appears in all of them as a foundational pillar. Malaysia, for example, recently mandated the appointment of Data Protection Officers (DPOs) under its updated Personal Data Protection Act (PDPA). Though specific AI legislation is still forthcoming, Toh sees this as an important step.

“Malaysia is taking a ‘guidelines-first’ approach—encouraging best practices before enforcement kicks in,” he said. “But make no mistake, enforcement will come. The groundwork is already being laid.”

He noted that two years ago, Indonesia, the Philippines, and Singapore were among the signatories of the Bletchley Declaration, an international commitment to coordinated AI regulation. Malaysia was not a signatory, but it has nonetheless been an active participant in global AI governance discussions.

Notably, Malaysia co-chaired side events at the AI Action Summit held in Paris in February 2025, focusing on building safe and responsible AI in the region. “It’s only a matter of time before we see more local frameworks taking shape,” Toh added.

The Straits Interactive Lens

While regulations evolve, Straits Interactive has taken on a critical role in preparing organisations for what’s ahead. Founded in Singapore in 2013, the company first made its mark by helping businesses comply with data protection laws through automation and advisory services.

Over time, its scope expanded. Straits Interactive became the regional training partner for the International Association of Privacy Professionals (IAPP), and later OCEG. The company also built one of Asia’s most active data governance communities through the Data Protection Excellence Network (DPEX).

“Our approach has always been to help companies move from compliance to competence,” Toh said. “We build frameworks, ecosystems, and people.”

As AI entered the conversation, Straits Interactive was well-positioned to guide the shift. “AI governance is not a separate discipline—it’s a natural extension of data governance,” he explained. “If you already understand how to manage personal data, you have a head start.”

The DPO as a Gatekeeper

For SMEs in Malaysia, Toh believes the newly required DPO role provides a natural anchor for AI governance.

“Start with what’s already required,” he advised. “You have to appoint a DPO anyway. Train and empower them to look beyond just privacy—to also oversee how AI tools are being used, what data they’re processing, and whether decisions can be explained.”

This aligns with what regulators are calling for: explainability, transparency, and accountability.

“If your AI system recommends a product, changes a price, or filters out a job candidate—can you explain how it did that? Can you defend that logic? That’s where governance comes in,” said Toh.

He added that courts in China have already used published AI governance guidelines as benchmarks in legal decisions—despite those guidelines being non-binding. “So, even if there’s no ‘AI Act’ yet, you’re not off the hook,” he warned.

For Startups and Scale-Ups: Don’t Wait

The challenge is even more acute for startups looking to scale or raise funding.

“Founders are often juggling product, growth, operations, and compliance all at once,” Toh said. “But the moment you pitch to a multinational client or venture capitalist, the topic of governance will come up. It’s already part of most due diligence checklists.”

He recalled examples of companies caught off guard when a major tender required evidence of a DPO appointment, documented data policies, or AI usage disclosure.

“By then, it’s too late to scramble. You’re either ready or you’re not.”

He believes investors themselves have a role to play. “VCs need to understand that responsible AI use isn’t just a risk issue—it’s a business enabler. They should be funding governance, not just product,” he said.

Toh shared that Straits Interactive works with angel investors to train them on evaluating AI and data risks. “It’s part of maturing the ecosystem.”

The Talent Equation

One silver lining? The rise of AI governance is creating a demand for new, future-ready roles.

“We’re already seeing the emergence of ‘Data Governance Officers’ and ‘AI Officers’ in multinational firms,” said Toh. “These jobs didn’t exist five years ago—but now they’re among the most sought-after in compliance and risk.”

In fact, a recent study by Straits Interactive found that data governance roles now command higher pay and seniority than traditional DPO roles—a signal of where the market is heading.

“So if you’re a mid-career professional, especially in compliance, legal, or IT, this is your moment,” he said. “Upskill now, and you’ll be indispensable.”

To SMEs

As regulations catch up with innovation, SMEs must prepare—not out of fear, but as a business imperative. AI may unlock unprecedented opportunities, but without governance, it risks reinforcing bias, eroding trust, and infringing on basic human rights.

“AI is not going away. But neither are the risks,” Toh concluded. “The good news is, if you start with good governance, you don’t have to fear it. You can own it.”

Just as Europe’s landmark data protection laws were born from the post-war resolve to protect human rights against the misuse of personal information to oppress, today’s push for AI governance carries the same moral weight. The question is no longer whether to govern AI—but how responsibly and how soon.

For SMEs charting their future in this evolving landscape, embracing AI governance isn’t just about ticking regulatory boxes. It’s about aligning with a future where technology advances human dignity rather than undermining it.


Understanding AI Adoption Roles and Risk Levels

AI regulations—such as those under the EU AI Act—categorise adopters based on their role and associated risk:

1. Developer
Creates the AI systems and bears the most stringent obligations.

  • High impact due to broad adoption
  • Must ensure safety, fairness, and compliance at the source

2. Deployer
Implements or integrates AI into services or platforms.

  • Example: SaaS providers embedding AI features
  • Faces significant requirements, especially at scale

3. User
Uses AI tools internally (e.g. on an intranet or in business processes).

  • Still has legal and ethical obligations, especially if processing personal data

High-Risk AI Systems

These systems are the focus of most regulatory efforts and can exist at any of the levels above.

  • Includes use of biometrics, automated decision-making, surveillance, and recommendation engines
  • Tighter scrutiny and transparency required, regardless of whether you’re a developer, deployer, or user