AI and the Consultation Trap: why planning needs new ground rules for the age of generated representations

Planning consultations were built for humans to respond to. Generative AI breaks that assumption. It can help people translate jargon and express a view clearly, which is a positive, but it also makes it easy to produce lengthy submissions, or many near-identical ones, at scale.

Issues with volume

Tools now exist that scan applications and generate 'policy-backed objections in minutes'. That may feel like democratisation, but it can also drown out genuine local insight. And for planning consultants and local planning authorities, it means more material to read, more time spent filtering and less attention on what really matters.

When the content is wrong

AI is persuasive even when it is wrong. In planning, that shows up as misquoted policy, invented appeal decisions and dubious legal claims that look credible to non-specialists.

The Planning Inspectorate has flagged the risk of fake representations or evidence, including generated text and altered images. It expects parties to declare when AI has drafted or substantially rewritten submissions, produced summaries, or generated or altered images or video, and to state what checks were carried out. It also warns that improper use can amount to unreasonable behaviour, exposing parties to costs awards.

If a committee debate is influenced by fabricated authorities, the reputational hit is immediate, but so is the legal vulnerability. Decisions become harder to defend and easier to challenge.

Trust, bias and black boxes

AI creates a legitimacy problem. If stakeholders suspect that consultation analysis has been quietly automated, they stop treating the resulting reports as valid. Trust depends on traceability: what was automated, what was checked and who is accountable.

Bias sits inside this too. Tools trained on historic data can reproduce historic patterns in who is heard and how issues are framed. With weak datasets, the output can be confidently wrong.

And visualisation forms part of that risk. Augmented reality and polished imagery can improve understanding, but they can also curate reality, emphasising what is attractive while masking what is inconvenient. The standard needs to be evidence-led visuals with assumptions stated.

The RTPI has warned that AI can bring inaccuracies and security risks, and that it must support professional judgement rather than replace it.

What needs to change

Banning AI would be unenforceable and counter-productive. Instead we need guardrails that keep consultation useful.

We should start with structure: consultation platforms should require respondents to tag comments to policies, topics or site references, apply sensible word limits and deduplicate near-identical submissions. This protects officers' time and makes genuine insight easier to find.
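
As a minimal sketch of the deduplication step (in Python, with the similarity threshold and grouping behaviour as illustrative assumptions, not any platform's actual logic), grouping near-identical submissions so each argument is read once but its weight of numbers is still recorded:

    from difflib import SequenceMatcher

    SIMILARITY_THRESHOLD = 0.9  # illustrative cut-off for "near-identical"

    def deduplicate(comments):
        """Group near-identical comments, keeping one representative
        text per group plus a count of how many respondents sent it."""
        groups = []  # list of [representative_text, count]
        for text in comments:
            for group in groups:
                if SequenceMatcher(None, text, group[0]).ratio() >= SIMILARITY_THRESHOLD:
                    group[1] += 1
                    break
            else:
                groups.append([text, 1])
        return groups

    # Two template objections collapse to one entry; the local point survives.
    for text, count in deduplicate([
        "I object: the proposal conflicts with Policy H3 on density.",
        "I object - the proposal conflicts with Policy H3 on density.",
        "The access road floods every winter; photos attached.",
    ]):
        print(f"{count}x {text}")

Pairwise comparison is quadratic, so a platform handling thousands of responses would more likely use hashing or shingling; the point is simply that templated text gets grouped before an officer ever reads it.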

We must also normalise disclosure. A simple tick-box and short statement should be standard in consultations, not just at appeal.
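
A sketch of what that disclosure might capture, again in Python; the field names are hypothetical, not a published schema, and simply mirror what the Inspectorate asks parties to declare:

    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        """Illustrative per-submission disclosure record."""
        ai_used: bool = False
        drafted_or_rewritten: bool = False       # AI drafted or substantially rewrote text
        summaries_produced: bool = False         # AI produced summaries
        images_generated_or_altered: bool = False
        checks_carried_out: str = ""             # e.g. "policy quotes checked against adopted plan"

    def incomplete(d: AIDisclosure) -> bool:
        """A declared AI use with no stated checks should not pass validation."""
        return d.ai_used and not d.checks_carried_out.strip()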

It is also important to demand provenance for factual and legal assertions. If a representation relies on case law, policy wording or technical standards, it should cite the source in a way that can be checked quickly. Where submissions are knowingly fabricated, there should be consequences.
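
One light-touch way to enforce that is to validate each cited source against registers officers already trust. A sketch, with KNOWN_SOURCES standing in for the adopted plan, appeal records and case-law databases a real system would query:

    # Hypothetical register; a real check would query the adopted plan,
    # the appeals casework database and law reports.
    KNOWN_SOURCES = {"Local Plan Policy H3", "NPPF paragraph 11"}

    def check_provenance(assertions):
        """Flag claims whose source is missing or cannot be verified quickly.
        Each assertion is a (claim, source) pair; source may be None."""
        flagged = []
        for claim, source in assertions:
            if not source:
                flagged.append((claim, "no source cited"))
            elif source not in KNOWN_SOURCES:
                flagged.append((claim, f"unverified source: {source}"))
        return flagged

    print(check_provenance([
        ("The scheme exceeds the density ceiling.", "Local Plan Policy H3"),
        ("An identical scheme was allowed on appeal in 2021.", None),
    ]))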

Finally, if authorities use AI to summarise responses, they should do so openly and with quality assurance. Current government pilots are stress-testing tools that group and summarise local plan representations with officers reviewing the outputs, and some experiments let respondents check the AI summary of their own submission before it is logged.
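
The key property of those pilots is that nothing enters the record on the model's say-so alone. A sketch of that loop, with summarise_with_model() as a stand-in for whatever tool an authority actually deploys:

    def summarise_with_model(text: str) -> str:
        # Stand-in for the AI step: a trivial first-sentence summary.
        return text.split(". ")[0].rstrip(".") + "."

    def log_representation(text, officer_approves, respondent_confirms):
        """Log an AI summary only after an officer has reviewed it and the
        respondent has confirmed it reflects their submission."""
        summary = summarise_with_model(text)
        if not officer_approves(summary):
            return {"summary": None, "status": "officer rejected summary"}
        if not respondent_confirms(summary):
            return {"summary": None, "status": "respondent disputed summary"}
        return {"summary": summary, "status": "logged"}

    # Example: officer approves, respondent disputes, so nothing is logged.
    print(log_representation(
        "The junction is already over capacity. Peak-hour queues block the school entrance.",
        officer_approves=lambda s: True,
        respondent_confirms=lambda s: False,
    ))

Both checkpoints keep a named human accountable for what the record says, which is exactly the traceability the trust problem demands.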

AI will make it easier for more people to speak. So the planning system needs to make it just as easy to tell what is accurate, what is relevant and what a decision can safely rely on.