Thailand has no dedicated AI law yet. But assuming that means “anything goes” would be a serious mistake. Sector-specific rules are already in force in finance, the judiciary, and consumer protection; the Personal Data Protection Act (PDPA) applies to AI use right now; and a comprehensive AI bill reached a consolidated draft stage in 2025, with enactment expected within the next few years. Waiting for the law before acting is not a strategy — it is a risk. This article maps Thailand’s AI regulatory landscape as of March 2026 and sets out what Japanese SMEs operating in Thailand should prepare today.
Where Does Thailand Stand on AI Regulation?
Thailand’s AI regulatory environment is best understood through three distinct layers.
Layer 1 — Comprehensive AI Bill (in preparation)
Since 2023, two agencies have been developing AI legislation in parallel:
- ONDE (Office of the National Digital Economy and Society Commission): a regulation-focused draft Royal Decree targeting transparency, safety, and fairness in commercial AI use.
- ETDA (Electronic Transactions Development Agency): a promotion-focused draft act designed to foster AI innovation through sandboxes and data-sharing frameworks.
In June 2025, following public hearings, the two drafts were merged into a single “Draft Principles of the AI Law.” ETDA continues to refine the text, with formal enactment targeted within the next few years — though the exact timeline remains open.
Layer 2 — Sector-Specific Regulation (partly in force)
The Securities and Exchange Commission (SEC) published an AI and Machine Learning regulatory framework for capital markets in 2023. The Bank of Thailand (BOT) has issued AI guidelines for financial institutions. AI-related rules are also already in effect for the judiciary and consumer protection sectors.
Layer 3 — Existing Law (PDPA and beyond)
Even without a comprehensive AI law, the PDPA, Consumer Protection Act, and tort law apply to AI use today. This is the most immediate source of legal exposure for Japanese companies in Thailand.
Thailand’s current landscape can fairly be described as a “hybrid regulatory environment.” Companies cannot afford to wait for a single comprehensive law; they must meet existing legal obligations today while preparing for the coming framework.
What Is a Risk-Based Approach? — Explained Through the EU AI Act
Thailand’s AI draft draws heavily on the EU AI Act (entered into force August 2024). The central concept is the risk-based approach: rather than regulating all AI uniformly, the level of regulation scales with the degree of risk the AI poses to society.
The Four-Tier Risk Classification
| Risk Tier | Examples | Regulatory Treatment |
|---|---|---|
| Prohibited AI | Subliminal manipulation, social scoring, real-time biometric identification in public spaces | Complete ban on deployment and use |
| High-Risk AI | Recruitment screening, credit scoring, medical diagnosis support | Registration, risk management, transparency, accuracy obligations |
| Limited-Risk AI | Chatbots, emotion recognition, deepfakes | Transparency obligation (disclosure to users) |
| Minimal-Risk AI | Spam filters, AI games | Generally unregulated |
Key Differences Between Thailand and the EU
Thailand’s bill follows the same framework but incorporates several distinctly Thai design choices:
High-risk AI lists: The EU embeds specific lists in the law itself. Thailand delegates list-making to sector-specific regulators, which preserves flexibility but makes it harder for companies to self-assess whether their AI qualifies as high-risk.
General-Purpose AI (GPAI): The EU imposes transparency obligations on GPAI models like ChatGPT. Thailand’s current draft contains no explicit GPAI provisions — though this may change in future revisions.
AI Sandbox: Mandatory for EU member states; voluntary in Thailand.
Penalties: Thailand’s draft specifies both administrative and criminal sanctions — arguably more explicit than the EU regime on the criminal side.
Comparison with Japan
Japan’s AI Governance Guidelines and the Hiroshima AI Process prioritize voluntary compliance over mandatory rules. Thailand, like the EU, is considerably more prescriptive. Japanese companies operating in Thailand should not assume that Japan-style self-governance will satisfy Thai regulatory expectations.
PDPA × AI — The Most Immediate Risk for Japanese Companies
On 17 February 2026, Thailand’s Personal Data Protection Committee (PDPC) published draft Guidelines on Personal Data Protection in the Development and Use of Artificial Intelligence and opened them for public comment. This is the first systematic articulation of how the PDPA applies to AI — and it carries important implications for any business using AI tools in Thailand. Note that as a draft, the final version may differ.
“Using AI Does Not Reduce Your Liability”
The draft guidelines make one point emphatically clear: organizations cannot offload data protection responsibility onto their AI tools.
Your company (the AI user): Because you determine the purpose and inputs of AI processing, you are a data controller under the PDPA. Using AI to analyze customer data does not change your obligations regarding consent, purpose limitation, and data minimization — those duties remain with you.
Your AI vendor: Vendors are generally classified as data processors. However, if a vendor independently reuses your customer data — for example, to train its own models — it may become a data controller in its own right. Contracts with vendors must clearly specify how data is used.
The PDPC has also deployed an automated enforcement tool called “Eagle Eye Crawler” to actively scan for PDPA violations online. As AI use expands, data protection exposure grows in tandem.
Practical Implications
- Audit whether staff are entering customer or employee personal data into tools such as ChatGPT or other general-purpose AI services
- Review and update Data Processing Agreements (DPAs) with AI vendors
- Update Privacy Notices and Terms of Service to disclose AI-based processing to users
- For AI used in hiring, credit assessment, or similar decisions, establish a process for explaining AI-driven outcomes
PDPA penalties reach up to THB 5 million (approx. USD 140,000) in administrative fines, with criminal sanctions possible in some cases. For a full overview of Thailand’s PDPA framework, see our forthcoming PDPA Practice Guide.
Five Things Japanese Companies Should Do Right Now
Thailand’s comprehensive AI law has not yet been enacted — but “waiting for the law” is itself a strategic choice, and not a wise one. Companies that establish governance frameworks now will face far lower compliance costs once legislation is in force.
Step 1 — Inventory your AI use
Map which business functions are using which AI tools, and for what purpose. Include SaaS chatbots, CRM automation, recruitment tools, data analytics dashboards, and any embedded AI in software your company uses. “AI the IT department quietly installed” and “ChatGPT that staff use on their own” are both part of this inventory.
Step 2 — Classify your AI by risk tier
Apply the four-tier framework to your inventory. Prioritize AI used in hiring, credit, healthcare, and safety-critical processes as high-risk candidates. Even factory IoT systems may warrant scrutiny if sensor data includes personal information.
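For compliance teams working through a large inventory, a first-pass triage can be automated before legal review. The sketch below is purely illustrative: the tool names, keyword lists, and tier labels are hypothetical examples keyed to the four-tier table above, not legal classifications, and every result should be verified by counsel.

```python
# Illustrative first-pass triage of an internal AI inventory against the
# four risk tiers discussed above. Tool names and keywords are hypothetical.

# Hypothetical inventory: (tool name, business use)
inventory = [
    ("resume-screener", "recruitment screening"),
    ("support-chatbot", "customer service chatbot"),
    ("spam-filter", "email spam filtering"),
    ("credit-model", "credit scoring"),
]

# Rough keyword map onto the tiers, following the high-risk and
# limited-risk examples in the comparison table above.
TIER_KEYWORDS = {
    "high-risk": ["recruitment", "credit", "medical", "safety"],
    "limited-risk": ["chatbot", "emotion", "deepfake"],
}

def triage(use_case: str) -> str:
    """Return a provisional risk tier; flag everything else for manual review."""
    for tier, keywords in TIER_KEYWORDS.items():
        if any(keyword in use_case for keyword in keywords):
            return tier
    return "minimal-risk (verify manually)"

for name, use in inventory:
    print(f"{name}: {triage(use)}")
```

A keyword pass like this only surfaces candidates; because Thailand delegates high-risk list-making to sector regulators, the final tier assignment for each tool still requires a case-by-case legal assessment.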
Step 3 — Review and update vendor DPAs
Contracts should clearly specify whether your vendor may use your data for model training or other independent purposes. Free or low-cost AI tools are particularly likely to include broad data reuse rights in their terms of service — read them carefully.
Step 4 — Update privacy notices and terms of service
Disclose AI-based processing to customers and employees in plain language. For chatbots, the draft framework indicates that disclosure (“this service is handled by AI”) will be required as a transparency obligation under the limited-risk tier.
Step 5 — Develop an internal AI governance policy
Document who is responsible for AI oversight, how risk assessments are conducted, what the incident response process is, and what records must be kept. This documentation will also serve as evidence of compliance once formal requirements are in force.
The AI Sandbox — Opportunities for Japanese Companies
ETDA’s AI Innovation Testing Center operates a sandbox program — a controlled environment in which companies can deploy and test AI services before a full regulatory framework is in place.
Sandbox participants receive a safe harbor from administrative penalties (civil liability is not waived). The program has seen early adoption in fintech and healthtech, and has potential relevance for Japanese companies pursuing manufacturing IoT, logistics optimization, and customer-facing AI.
Combined with BOI investment promotion incentives — see our article on Thailand BOI Updates 2025 — the sandbox offers a structured path to piloting AI-driven innovations with reduced regulatory risk.
What to Watch Going Forward
Progress on the comprehensive AI bill: ETDA is actively revising the draft. Track public consultation announcements and parliamentary developments for timing signals.
Finalization of the PDPC AI–privacy guidelines: No timeline has been announced for the final version. Rather than waiting, align with the draft’s direction now.
Expansion of sector-specific rules: Expect AI-related guidance to spread to healthcare, automotive, construction, and education sectors.
ASEAN-wide AI governance harmonization: Thailand is designing its framework with reference to the “ASEAN Guide on AI Governance and Ethics.” Comparisons with Singapore’s Model AI Governance Framework and Indonesia’s approach will be covered in upcoming articles.
The window of low regulatory obligation will not stay open indefinitely. The companies that build governance infrastructure now will be the ones best positioned when the law lands.
Our Firm’s Work in This Area
Our firm advises on Thailand-related legal matters in collaboration with JTJB International Lawyers’ Thai-qualified attorneys, and we are also engaged in research on AI-assisted Online Dispute Resolution (AI-ODR) — the use of artificial intelligence to facilitate and resolve disputes outside traditional litigation. This gives us a perspective on AI that spans both legal risk and legal opportunity.
For more on how AI and ODR are reshaping dispute resolution in the ASEAN region, see Dispute Resolution Series Part 3: ODR and AI on the Frontier.
We advise on PDPA compliance for AI use in Thailand, AI governance framework design, and AI-related contract clause drafting — from both a Japanese and Thai law perspective. Our firm’s engagement with AI-ODR research also means we bring first-hand understanding of how AI is transforming legal processes. Please feel free to contact us.
This article is for general informational purposes based on publicly available information as of March 2026 and does not constitute legal advice. For specific matters, please consult a Thai-qualified legal professional. Our firm works in collaboration with JTJB International Lawyers’ Thai-qualified attorneys.