2026-04-07 · 8 min read · rag.art team

EU AI Act compliance for chatbots — the solo founder's checklist

A practical checklist for solo founders and small teams shipping chatbots into the EU — what the AI Act actually requires, what it doesn't, and how to ship without lawyering up.

compliance · eu-ai-act · gdpr

The EU AI Act entered into force in August 2024, with obligations phasing in through 2027. Read the full text and you will conclude the correct response is to stop shipping. That reaction is understandable: the Act is 113 articles of structured paranoia. In practice, almost none of it applies to a customer support chatbot, and the parts that do can be ticked off in an afternoon.

This is not legal advice. This is a field-level checklist from building rag.art. For the actual compliance work, pay a lawyer a few hundred euros and have them review your specific deployment.

Which risk tier are you?

The Act classifies AI systems by risk. A generic customer-service chatbot that answers questions from your website falls in the 'limited-risk' tier: Article 50 transparency obligations apply, full stop. You're a 'high-risk' system only if you're deployed in biometrics, critical infrastructure, education assessment, employment decisions, access to essential services, law enforcement, migration, or the administration of justice. If none of those words describe your deployment, you're in limited-risk territory.

The limited-risk checklist

1. Tell users they're talking to an AI

Article 50.1 is blunt: 'natural persons … are informed that they are interacting with an AI system … unless this is obvious from the circumstances and context of use.' A banner, a greeting line, or a visible 'AI assistant' label meets this. A bot that pretends to be a human named Sarah does not.
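In code, that disclosure is one string in the widget greeting. A minimal sketch — the names (`WidgetConfig`, `buildGreeting`) are illustrative, not a real rag.art API:

```typescript
// Article 50.1-style disclosure baked into the first message the user sees.
// It must be visible up front, not buried in a tooltip or terms page.

interface WidgetConfig {
  botName: string;
  companyName: string;
}

function buildGreeting(cfg: WidgetConfig): string {
  return (
    `Hi! I'm ${cfg.botName}, an AI assistant for ${cfg.companyName}. ` +
    `I'm not a human. Ask me anything about our products.`
  );
}

console.log(buildGreeting({ botName: "Atlas", companyName: "Example GmbH" }));
```

The point is that 'obvious from the circumstances' is a defence you never want to argue; an explicit label costs one sentence.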

2. Label AI-generated content

If your bot generates media (images, video, audio), Article 50.2 requires watermarking. Text-only chatbots are exempt in practice — the label on the assistant is the disclosure.

3. Log interactions

Not a strict Article 50 requirement for limited-risk systems, but every adjacent obligation (GDPR Art. 30 records, ISO 27001, national consumer-protection rules) wants it. Store conversation logs with retention limits, and give the user a way to request deletion.
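The logging posture above — store, expire, delete — fits in a few dozen lines. A sketch, in-memory for illustration; a real deployment would back this with a database:

```typescript
// Conversation log with a retention window and per-user erasure.

interface LogEntry {
  userId: string;
  message: string;
  timestamp: number; // ms since epoch
}

const RETENTION_MS = 90 * 24 * 60 * 60 * 1000; // 90-day default

class ConversationLog {
  private entries: LogEntry[] = [];

  record(userId: string, message: string, now = Date.now()): void {
    this.entries.push({ userId, message, timestamp: now });
  }

  // Run on a schedule: drop anything older than the retention window.
  purgeExpired(now = Date.now()): number {
    const before = this.entries.length;
    this.entries = this.entries.filter(e => now - e.timestamp < RETENTION_MS);
    return before - this.entries.length;
  }

  // GDPR Art. 17 erasure: delete everything tied to one user on request.
  deleteUser(userId: string): number {
    const before = this.entries.length;
    this.entries = this.entries.filter(e => e.userId !== userId);
    return before - this.entries.length;
  }

  count(): number {
    return this.entries.length;
  }
}
```

Returning the number of deleted rows from `deleteUser` gives you an audit trail for free: you can log that an erasure request was honoured without logging what was erased.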

4. Don't do prohibited things

Article 5 lists outright bans — social scoring, biometric categorisation, manipulation exploiting vulnerabilities. A customer service chatbot is nowhere near these. Don't build one that is.

5. Respect GDPR (the dull, important one)

The AI Act doesn't override GDPR; it layers on top. The parts you actually need to implement are the GDPR parts:

  • lawful basis documented
  • data processing agreement (DPA) with your vendor
  • retention policy
  • Subject Access Request and deletion flows
  • minimal data collection
  • proper cookie consent
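Of the GDPR parts, the Subject Access Request flow is the one that needs actual code: hand the user a machine-readable copy of their data on request. A minimal sketch, with illustrative names:

```typescript
// GDPR Art. 15 SAR: export everything stored about one user as JSON.

interface StoredMessage {
  userId: string;
  text: string;
  at: string; // ISO 8601 timestamp
}

function exportUserData(store: StoredMessage[], userId: string): string {
  const theirs = store.filter(m => m.userId === userId);
  return JSON.stringify({ userId, messages: theirs }, null, 2);
}
```

Filtering server-side by `userId` matters: a SAR export must contain only the requester's data, never other users' messages that happen to share a conversation.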

What you do NOT need to do

  • You do not need to register with an AI regulator. Limited-risk systems don't go in any EU register.
  • You do not need a CE marking. That's high-risk territory.
  • You do not need a Fundamental Rights Impact Assessment unless you're public sector or high-risk.
  • You do not need model cards or technical documentation published externally — just internal awareness of what your vendor trains on.

If your vendor says the wrong things

Two responses from a vendor should make you close the tab. 'We're fully AI-Act compliant — our certificate is in the footer.' (There is no compliance certificate for limited-risk systems.) 'The AI Act doesn't apply to us because we're a US company.' (It applies whenever the system's users are in the EU, regardless of where the company sits.) Either answer tells you the vendor hasn't read the law.

A reasonable solo-founder posture

  1. Ship with an 'AI assistant' label visible in the widget.
  2. Write a one-paragraph 'AI disclosure' page describing what the bot does and doesn't do.
  3. Use a vendor that offers an EU-hosted option and a downloadable DPA.
  4. Log conversations, set a retention policy (90 days default), and build a deletion endpoint.
  5. Don't deploy the bot into high-risk use cases without actual legal review.

That's the honest-engineering version of compliance. Most of the noise on LinkedIn is from consultancies selling the paranoid version. Ignore it.

Want to see this in practice?

rag.art ships EU-hosted with a DPA on page one →