The Theatre of Compliance
Digital Services Act · Two Years in Force · 2024–2026
How Europe built the most sophisticated platform accountability machine ever designed, without ever touching what actually produces the harm. Platforms optimised the procedure. The algorithm stayed intact.
Berlin, February 2026. Democracy Reporting International takes X Corporation to a German court. It wants what the Digital Services Act promises on paper: access to election data, the right enshrined in Article 40 to look inside the algorithmic engines of major platforms. The judge rules that right exists, that it is directly enforceable, that no company can restrict it arbitrarily. An absolute first in the history of digital law.
Outside, on X, the algorithm keeps recommending problematic content with five times the attention-retention power of accurate posts. Today, as on the day of the ruling, as on the day the case was filed.
Two years of the Digital Services Act in full force have produced something quite specific: a control system that works with considerable sophistication on everything measurable through procedure, while failing to change anything that depends on the economic logic platforms use to decide what to amplify.

The Architecture of the Paradox
The Digital Services Act requires Very Large Online Platforms — those with more than 45 million monthly active users in the EU — to conduct annual risk assessments, submit to mandatory independent audits, maintain accessible advertising archives and provide users with redress mechanisms. The benchmark produced by KPMG in 2025 recorded an increase in positive compliance ratings from 72% to 79%. Only Stripchat, the least powerful platform in the group, received a fully positive assessment with no reservations. The remaining nineteen received negative or conditional opinions.
In a systematic 2025 study, the SIMODS project found that roughly 20% of content on TikTok qualifies as disinformation, with 34% classified as “problematic” in a broader sense. On X, problematic content reaches 32%, with an attention-retention index five times higher than accurate content. On Facebook, seven times. On YouTube, eight. The 79% positive compliance rating and the 34% of problematic content on TikTok inhabit the same legal universe without ever meeting, because they describe two systems running in parallel on the same infrastructure.
When a measure becomes a target, it ceases to be a good measure — Charles Goodhart, 1975
The attention-retention algorithm does not reward false information: it rewards emotional intensity. Disinformation content generates prolonged attention because it activates primary affective responses — fear, outrage, tribal belonging — with an efficiency that accurate content rarely matches. Until the attention advantage that high-intensity content produces is modified at the level of the recommendation system’s architecture, the net effect of any moderation intervention remains marginal relative to the overall flow.
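To make that mechanism concrete, here is a minimal sketch of an engagement-only ranker of the kind the paragraph describes. It is illustrative, not any platform’s actual code: the field names, the weighting and the numbers are assumptions. The only point it demonstrates is that accuracy never enters the objective, so moderation can remove items downstream but cannot change what rises.

```python
# Illustrative sketch only: a toy ranker that optimises attention retention.
# Field names, weights and numbers are invented; this is not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float  # model's estimate of how long users linger
    predicted_reshares: float       # model's estimate of reshare volume
    accurate: bool                  # known to fact-checkers, invisible to the ranker

def retention_score(post: Post) -> float:
    # Engagement-only objective: content that holds attention and travels wins.
    # Truth value never appears in the formula.
    return post.predicted_dwell_seconds + 2.0 * post.predicted_reshares

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=retention_score, reverse=True)

feed = rank_feed([
    Post("calm, accurate explainer", predicted_dwell_seconds=8.0, predicted_reshares=1.0, accurate=True),
    Post("outrage-bait falsehood", predicted_dwell_seconds=40.0, predicted_reshares=9.0, accurate=False),
])
print([p.text for p in feed])  # the falsehood leads: only retention was optimised
```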
The Theatre of Compliance
Platforms are not homogeneous entities. Apple and Microsoft have dispersed ownership structures: DSA compliance is treated as a legal and reputational risk management line item, a quantifiable cost to incorporate into financial models. Meta and Alphabet have dual-class share structures: Mark Zuckerberg holds 61% of voting rights with a significantly smaller economic ownership stake; Larry Page and Sergey Brin maintain an analogous structure at Alphabet. A single person can impose on the algorithmic decisions of a platform with three billion users a vision that no shareholder mechanism can correct. The DSA is written as though these power architectures were irrelevant to compliance outcomes. The data shows they are not.
X is the extreme case of this logic. A sole owner with an executive role, internal policy volatility elevated to a declared political instrument. When the European Commission issued the €120 million fine on 5 December 2025, X responded by closing the Commission’s own advertising account, on the same day the American administration published its National Security Strategy accusing the EU of digital censorship. Two synchronised responses to a single regulatory act.
Grok Without Rules
On 29 December 2025, xAI updated Grok with an image-editing feature. Within forty-eight hours it was already clear what was happening: users had discovered that the system would execute requests to remove clothing from people in photographs. Researchers at AI Forensics estimated that between 5 and 6 January 2026, at least 6,700 sexual images were generated through the tool. The Centre for Countering Digital Hate calculated a total approaching three million images over a slightly longer period. Some depicted minors. The Paris prosecutor’s office opened a criminal investigation.
Grok appeared in none of the risk assessments X was legally required to publish. Commission spokesperson Thomas Regnier at a press conference: “There is only one truth: Grok is nowhere in these reports. It means X simply did not assess the risk Grok poses to our citizens. That is already a fundamental problem.” The documents were in order, the procedures executed, compliance certified. A system capable of generating child sexual abuse material had been deployed without a single line of the mandatory review process ever naming it.

The Right to See and Its Limits
In the first half of 2025, out-of-court dispute settlement bodies examined over 1,800 disputes involving Facebook, Instagram and TikTok: in 52% of closed cases, platform decisions were overturned. Internal complaint mechanisms have received over 165 million appeals since 2024, with a reversal rate near 30%, meaning three moderation decisions in ten are overturned when challenged by users. AliExpress accepted binding commitments with an independent supervisor. The Berlin ruling established that algorithmic opacity produces an actionable subjective harm: not knowing how a system that shapes billions of decisions works is not a matter of corporate confidentiality but the violation of an enforceable right.
The problem is that the right to see depends on the quality of what is made visible. Researchers at the Oxford Internet Institute have systematically documented the phenomenon of “missing metrics”: transparency reports aggregate data in ways that make granular analysis of what actually matters impossible. Without indicators such as precision — the share of removed content that was genuinely in violation — and recall — the share of violating content the system actually catches — it is structurally impossible to assess whether safety systems are functioning. This is compounded by a structural conflict of interest: the major consultancies producing mandatory annual audits are often also strategic advisers to the very platforms they audit.
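For concreteness, this is how the two missing metrics would be computed from an audited sample of moderation decisions. The counts below are invented for illustration; as the Oxford researchers note, no platform currently reports data at this granularity.

```python
# Illustrative sketch with invented counts: computing the two "missing metrics"
# from an audited sample of moderation decisions.
def precision(true_violations_removed: int, total_removed: int) -> float:
    # Share of removed content that was genuinely in violation.
    return true_violations_removed / total_removed

def recall(true_violations_removed: int, total_violations_present: int) -> float:
    # Share of violating content the system actually caught.
    return true_violations_removed / total_violations_present

# Hypothetical audit window: 1,000 removals, 820 of them justified;
# an independent sweep finds 2,500 violating items in the same window.
print(f"precision = {precision(820, 1_000):.2f}")  # 0.82
print(f"recall    = {recall(820, 2_500):.2f}")      # 0.33 -> most violations never caught
```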
Not knowing how a system that shapes billions of decisions works is not corporate confidentiality. It is the violation of an enforceable right.
The Two-Speed Protection Map
The DSA distributes supervisory obligations between the European Commission and the national Digital Services Coordinators of each Member State. It is a federal enforcement system built on the implicit assumption that Member States have comparable capacity to exercise comparable oversight. Malta — the smallest Member State — faces the same obligations as Germany with a fraction of the available human and financial resources. The result is not an anomaly: it is a logical consequence of the regulatory architecture. Citizens in states with underfunded authorities enjoy significantly lower effective protection than those in countries with well-resourced Coordinators.
The European Disability Forum has documented a parallel failure of scope. In several articles of the regulation, digital accessibility is treated as a guiding principle rather than a binding requirement. The regulation does not cover critical digital infrastructure, such as web hosting services and domain name registries, that is fundamental to the working lives and civic participation of people with disabilities. Millions of European citizens with disabilities remain outside the protection mechanisms the regulation provides, because of a scoping choice that no revision has yet corrected.
The War Being Fought Elsewhere
On 5 December 2025, the European Commission issued the first fine in the DSA’s history: €120 million against X for the deceptive design of paid blue checkmarks, an incomplete advertising archive and the systematic blocking of data access for accredited researchers. That same day, the Trump administration published its National Security Strategy accusing the EU of censoring freedom of expression. On 24 December, the US State Department imposed entry bans on five Europeans, including Thierry Breton and the heads of organisations monitoring digital hate and disinformation, labelled “radical activists” who had led organised efforts to “punish American views”.
Eleven days earlier, on 24 November, the Commission had presented the digital Omnibus package, a proposal to “simplify” Europe’s accumulated body of digital regulation. Commerce Secretary Howard Lutnick rejected it immediately, linking a 50% reduction in tariffs on European aluminium and steel to the explicit weakening of the Digital Markets Act and the Digital Services Act. ChatGPT counts 120 million monthly active users in the EU, nearly three times the threshold that triggers the most stringent obligations, and its legal classification is still pending. The same Commission that imposed a historic fine on X in December under the DSA had, in November, proposed to soften the very rules that fine was built on, under the pressure of trade tariffs explicitly conditioned on digital deregulation.
Post scriptum · 2027
The DSA review is scheduled for 2027. Four structural fractures are already identifiable. Transparency metrics aggregate data in ways that render invisible the phenomena that matter most: requiring platforms to report precision and recall indicators for content moderation would allow independent assessment of whether safety systems actually work — separating platforms that moderate effectively from those producing systematic errors at proportions that today remain opaque. The variable-geometry enforcement problem requires common protocols and centralised resources for peripheral authorities. Digital accessibility must become a binding obligation, not a guiding principle. Systemic risk reduction plans must explicitly incorporate risks from synthetic content — the Grok case demonstrated that it is possible to deploy a system capable of producing illegal material without a single line of the mandatory review process ever naming it.
Whether those conducting the 2027 review address these fault lines structurally or route around them in a simplification process driven by external pressure is the question the system cannot yet answer.
- European Commission — First DSA fine: decision on X Corporation, December 2025
- AI Forensics — Grok Image Generator Report: analysis of generated images, January 2026
- Oxford Internet Institute — Missing Metrics: transparency gaps in VLOP reports, 2025
- Democracy Reporting International — Berlin ruling statement (DSA Article 40), February 2026
- EDRi — Open letter on the digital Omnibus: 127 organisations against the simplification package, 2025