The UK’s “Online Safety Act”
What the Law Actually Does
The Act gives Ofcom sweeping powers to regulate the internet. Any site or app that lets users post content — from Facebook and Reddit to forums, chat apps, and even small websites — now falls under its scope.
Platforms must:
Remove illegal content (such as terrorism material or child sexual abuse material).
Filter out “harmful” material for children — a category so vague it could cover self-expression, mental-health discussions, or adult education.
Carry out risk assessments and show they are limiting access to content judged risky.
Introduce age-verification systems for adult material and possibly other types of content in future.
Failure to comply can mean fines of up to £18 million or 10% of global annual turnover, whichever is greater. Senior managers can even face criminal charges if they ignore Ofcom's enforcement demands.
The Problem With “Safety”
No one disagrees that protecting kids online is important. The issue is how broad and intrusive this law is.
By forcing companies to constantly monitor what users upload, the Act pushes the UK toward a system of mass content filtering.
That means:
Algorithms and moderation bots making judgment calls about what you can see (a crude example is sketched at the end of this section).
Conversations or posts about taboo but important topics — like sexual health, drug use, or self-harm recovery — being restricted or removed.
Independent websites or small creators being hit with compliance demands they can’t realistically meet.
It’s less about child protection and more about creating a legal framework that lets the state and big tech decide what’s acceptable speech.
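To make the over-blocking problem concrete, here is a minimal, hypothetical sketch in Python of the kind of crude keyword filter that compliance pressure tends to produce. The blocklist and example posts are invented for illustration; real platforms use more sophisticated classifiers, but they make the same kind of context-free judgment calls at scale.

    # Hypothetical keyword-based moderation filter (illustration only).
    # The blocklist and example posts are invented; real systems are more
    # complex, but still struggle to read context.
    BLOCKED_TERMS = {"self-harm", "drugs", "suicide", "overdose"}

    def is_blocked(post: str) -> bool:
        """Flag a post if it mentions any blocked term, regardless of context."""
        text = post.lower()
        return any(term in text for term in BLOCKED_TERMS)

    posts = [
        "Three years on: how I recovered from self-harm and found support",
        "Naloxone can reverse an opioid overdose - carry it if you can",
    ]
    for post in posts:
        print(is_blocked(post), post)
    # Both print True: the filter cannot tell a recovery story or harm-reduction
    # advice apart from the content it was meant to catch.

Scaled up to millions of posts, this is how sexual-health advice, drug-safety information, and self-harm recovery stories end up in the same bin as the material the law is actually aimed at.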
Age Verification and Privacy Risks
One of the most controversial parts is age verification.
To block under-18s from adult content, the law allows (and in some cases requires) sites to demand proof of age — often through ID checks or third-party verification services.
That raises huge privacy concerns.
To access certain parts of the web, UK users may soon have to hand over personal data to untested verification companies.
Even if data leaks are rare, the idea of linking your identity to every click or view online is a serious blow to digital privacy.
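To see what that hand-over looks like in practice, here is a minimal, hypothetical sketch in Python of the data an ID-upload age check sends to a third-party verifier. The field names and provider are invented for illustration; this is not any real verifier's API.

    # Hypothetical payload a site might send to an external age-verification
    # provider (illustration only; the fields and provider are invented).
    import json
    from dataclasses import dataclass

    @dataclass
    class AgeCheckRequest:
        name: str            # real name leaves the site
        date_of_birth: str   # so does the date of birth
        id_document: bytes   # plus a scan of a passport or driving licence

    def build_verifier_payload(req: AgeCheckRequest) -> str:
        """Serialise the identity data a site would pass to the verifier."""
        return json.dumps({
            "name": req.name,
            "date_of_birth": req.date_of_birth,
            "id_document": req.id_document.hex(),
        })

    # Everything below ends up on a third party's servers, tied to the
    # site that asked for the check.
    payload = build_verifier_payload(
        AgeCheckRequest("Jane Doe", "1990-01-01", b"<passport scan bytes>"))
    print(payload)

Schemes where the verifier returns only a yes-or-no token reduce what the site itself sees, but someone in the chain still handles your identity document and typically knows which service requested the check.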
The Slippery Slope
The Act doesn’t just ban illegal material — it also covers so-called “legal but harmful” content for children, a term that leaves a lot of room for interpretation.
Once platforms start filtering that out, it’s only a short step to applying similar filters to adults “for safety reasons.”
And that’s the danger: laws written to protect kids often become the foot in the door for wider online control.
If Ofcom or future governments decide something is too risky or politically sensitive, they now have the power to pressure platforms into removing it.
What It Means for Everyday Users
You might start noticing:
More content warnings or blocked posts.
Verification pop-ups before you can access certain sites.
Reduced access to international platforms that choose to pull out of the UK rather than comply.
For businesses and creators, it adds another layer of red tape — especially if your platform allows user uploads or discussion.
Small UK sites could find the compliance costs unaffordable, which would push users, innovation, and free expression further toward large corporate platforms.
So, What’s Next?
Ofcom is rolling the Act out in phases through 2024 and 2025.
The first wave targets illegal content, but the bigger changes — content filters, transparency rules, and age checks — come later.
The government promises that free speech will still be protected, but critics aren’t convinced. Once automatic filters and ID checks become normal, reversing them will be nearly impossible.