AI can produce persuasive-sounding legal text, yet research shows a substantial risk of incorrect or fabricated legal references. It is therefore prudent to check AI-generated memos, thesis drafts, and draft legal advice for references and context before relying on them. VeriLeges supports this, within the current verification scope, with source verification, warning signals, and a clear verification report for AI output from tools such as ChatGPT or Gemini.
Built for law students, legal professionals, law firms, and organizations that want faster, more careful review of AI-generated legal text.
Research on large language models in legal use cases shows hallucination risk is not a marginal issue. A study in the Journal of Legal Analysis found that public LLMs (such as ChatGPT or Gemini) hallucinated at least 58% of the time on verifiable legal benchmark questions, with higher rates for some models. A later 2025 study on AI legal research tools showed that even specialized systems still produced hallucination rates in the 17% to 33% range. This does not mean AI is always wrong, but it does mean reference and context checks remain essential.
Sources: Journal of Legal Analysis (2024) and Journal of Empirical Legal Studies (2025).
The EU AI Act introduces a risk-based framework for trustworthy AI, including transparency obligations for certain AI systems and transparency and risk-management requirements for general-purpose AI models. It does not legally require every user to manually check every memo, but it does raise expectations for transparent and accountable handling of AI output. For legal text, that makes source verification and review all the more important.
VeriLeges acts as a first verification layer for legal references in AI-generated text, including output generated with tools such as ChatGPT or Gemini.
Within the current scope, VeriLeges checks references against linked authoritative sources such as Rechtspraak.nl and wetten.overheid.nl, and builds verification signals based on those source links.
Paste a memo, thesis draft, legal advice draft, or other legal text you want to review.
The tool detects ECLI references, statute articles, and other legal citations, and flags parts that may need additional review.
Within the current scope, references are checked against linked sources so you can see where human review remains necessary.
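The detection step above can be sketched in a few lines. This is an illustrative example only, not the VeriLeges implementation: it matches case-law references that follow the public ECLI citation format (ECLI:&lt;country code&gt;:&lt;court code&gt;:&lt;year&gt;:&lt;ordinal&gt;) with a regular expression. The function name and the sample text are made up for the illustration.

```python
import re

# Illustrative sketch, not the VeriLeges implementation: find ECLI-shaped
# case-law references using the public ECLI format
# ECLI:<country code>:<court code>:<year>:<ordinal>.
ECLI_PATTERN = re.compile(
    r"ECLI:[A-Z]{2}:[A-Z0-9]{1,7}:\d{4}:[A-Za-z0-9]+(?:\.[A-Za-z0-9]+)*"
)

def find_eclis(text: str) -> list[str]:
    """Return all ECLI-shaped references found in a piece of legal text."""
    return ECLI_PATTERN.findall(text)

sample = "Zie ECLI:NL:HR:2019:1278 en het vonnis ECLI:NL:RBAMS:2021:456."
print(find_eclis(sample))
```

A real pipeline would go further: each detected reference would be looked up against an authoritative source and annotated with a verification signal, but a pattern match like this is the natural first step.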
Catch AI-fabricated references before they end up in thesis drafts, papers, and study notes.
Use VeriLeges as a first verification layer for memos, draft advice, and AI-assisted legal drafts.
Make AI legal text checks part of responsible AI use across your team or organization.
VeriLeges is a verification layer that checks references and surfaces verification signals. A green outcome means that no hard or soft signals were found within the current verification scope; it does not mean the text is legally correct, complete, or approved. Human legal assessment remains required.
Use VeriLeges as a scoped first verification layer, with clear signals and source links for structured review.