News

26 February 2026

This weekend, Dutch broadcaster NOS reported that three lawyers had received warnings for using AI (such as ChatGPT) in legal arguments: their filings cited rulings that turned out not to exist or to be about something else entirely.

(The article: https://lnkd.in/eGAMGxmw)

Two of them have been required by the regulator to take an AI course.

What these cases show is not purely a technological problem that training alone can solve.

AI output is quickly treated as authoritative. This is a classic example of automation bias: “The computer must know better.”

 

The result:

● Outcomes that are not always critically verified

● Errors that go unnoticed

 

We also saw this recently in algorithmic decision-making (which, incidentally, was not exclusively attributable to AI). Blind trust without effective controls leads to incorrect outcomes.

Professional responsibility remains with people, even as the use of these systems keeps growing.

 

Training is, of course, important in this regard.

In fact, under the EU AI Act (Article 4), organizations that use AI must ensure a sufficient level of AI literacy among their staff.

 

But training alone is not enough.

Responsible AI use requires clear frameworks, verification processes, and oversight. It is also a governance and control issue.

In these cases, the problems were flagged by judges in Arnhem, Rotterdam, and Groningen.

But AI is now being used much more broadly: legally, administratively, financially, and operationally.

 

How many of these AI-generated errors are actually noticed? And how many are not?

 

Organizations that use AI routinely need frameworks that are risk-based, verifiable, and embedded in their governance.