The Flemish government is moving to strictly enforce a minimum age of 13 for access to social media platforms deemed harmful to minors, marking a significant shift from largely unenforced rules to concrete regulation.
While most major platforms already set 13 as the minimum age to create an account, the requirement has long been easy to bypass. Flemish authorities now want to change that by introducing a legal framework that would force companies such as TikTok and Snapchat to implement effective age verification systems.
From guidelines to enforcement
The decision builds on the “Safe Online” action plan approved in late 2025, which stopped short of banning social media use for children under 16 but called for stronger protections. These include stricter age checks and measures to limit addictive features such as endless scrolling.
Now, the government is taking a more assertive approach. It plans to create an official list of “harmful social media” platforms. Companies included on that list will be required to verify users’ ages more rigorously or face consequences.
Flemish Media Minister Cieltje Van Achter emphasized that the goal is to turn the existing age threshold into a rule that actually works in practice. Regulators in Flanders will collaborate with the European Commission to monitor compliance, and penalties could be imposed at the EU level. In extreme cases, platforms that fail to comply risk being blocked.
Political divide over age limits
The move follows months of political debate. Some governing parties, including Vooruit and CD&V, had pushed for a higher minimum age of 15 or even 16—similar to recent developments in Australia. However, the N-VA party opposed a full ban for younger teens, arguing instead for stricter enforcement of existing rules.
The compromise keeps the age limit at 13 but introduces mechanisms to ensure it is no longer ignored.
Growing pressure on big tech
The policy also reflects a broader shift in how governments view social media companies. Increasingly, platforms are facing scrutiny—and even legal action—over how their algorithms are designed, particularly when they encourage addictive behavior among young users.
Van Achter pointed to recent studies and court cases holding companies accountable for the impact of their products. “For too long, Big Tech has looked the other way,” she said, warning that platforms must comply with stricter rules or risk losing access to the market.
What are the age rules across the EU?
Across the European Union, the baseline legal framework comes from the General Data Protection Regulation (GDPR). It sets 16 as the default age at which minors can consent to the processing of their personal data online—but allows member states to lower this threshold to 13.
As a result, most EU countries, including Belgium, have effectively set the age of social media access at 13. However, enforcement remains inconsistent, and many children sign up earlier by providing false birthdates.
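The weakness described above comes down to self-declaration: a sign-up form simply trusts whatever birthdate the user types. The following is a minimal illustrative sketch of such a check (not any platform's actual implementation; the function name and the Belgian threshold of 13 are taken as given from the text), showing why a false birthdate defeats it.

```python
from datetime import date

MIN_AGE = 13  # Belgium's consent age under the GDPR Article 8 derogation

def is_old_enough(birthdate: date, today: date) -> bool:
    """Self-declared age check: trusts whatever birthdate the user enters."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= MIN_AGE

# An 11-year-old's real birthdate fails the check...
print(is_old_enough(date(2015, 6, 1), date(2026, 1, 1)))  # False
# ...but entering an earlier year passes it, with no verification at all.
print(is_old_enough(date(2010, 6, 1), date(2026, 1, 1)))  # True
```

This is precisely the gap the proposed Flemish framework targets: moving platforms from trusting a typed-in date to verifying it.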
New EU legislation, such as the Digital Services Act (DSA), is increasing pressure on platforms to better protect minors. It includes obligations to assess risks to children, limit harmful content, and improve transparency around algorithms.
A turning point for online child protection?
Flanders’ decision signals a broader European trend toward stricter oversight of social media. Rather than raising the age limit outright, regulators are increasingly focused on making existing rules meaningful—and enforceable.
Whether this approach will succeed depends largely on cooperation from tech companies and the effectiveness of age verification technologies. But one thing is clear: the era of self-regulation in this space is coming to an end.

Photo by Today Testing, Wikimedia Commons.
