Many offices (public sector, commercial, etc.) are full of people who write reports that no one reads.
With AI, there is a strong incentive for the employee to churn out the report quickly and take Friday off, since no one is going to read it anyway. Maybe they still skim the report and make sure it looks reasonable, but they don't catch the smaller details that are wrong, and more importantly, they don't think deeply or critically about the issue at hand since the report looks reasonable at a glance.
"Show me the incentive, and I'll show you the outcome" - Charlie Munger
My guess is that reporting processes will change and adapt once everyone realizes how useless the old system is in an AI world. A long report used to signal "we thought about this, so rest easy". Now a long report doesn't signal that; it no longer serves the function it once did.
This is a perfect example of why AI outputs need human verification, especially in policy decisions.
The scary part isn't that the AI hallucinated - it's that the policy went through without anyone fact-checking the source. Makes you wonder how many other decisions are being made based on unverified AI outputs.
Critical reminder: AI is a tool, not a replacement for due diligence.
Also, was anything verified before AIs? Or was whatever was written just accepted as given, without any real critical thought or consideration?
Malice or agenda can be real. With AI it's incompetence, but before AI, real malice was a possibility.
> was anything verified before AIs?
Yes.
> With AI it is incompetence
or malice, or agenda. For example, in the linked article the West Midlands Police had an agenda to find evidence to support a ban, so they asked an AI to find evidence to support a ban, and the AI obliged and "found" evidence to support a ban.