The Impossible Architecture: AI Governance and Power in the Platform Society
The power of data, from Foucault to Castells (and the platform society): deconstructing the illusion of democratic control over artificial intelligence
The Hope for AI Governance
19 December 2024, Riyadh. The Internet Governance Forum (IGF) closes after five days of multistakeholder discussions. Over those days, 11,000 participants from governments, the private sector, civil society and the technical community crowd panels and side events. The final communiqué celebrates the “Global Digital Compact” adopted in September by the United Nations – described as “the first universal agreement on AI governance.” In addition, the Independent International Scientific Panel on AI, the Annual Global Dialogue on AI Governance and the ambitious goal of achieving “global AI standards that benefit all” by 2030 are announced. The “Riyadh IGF Messages” are adopted to “guide policymaking over the coming years.” Finally, UN Under-Secretary-General Li Junhua reaffirms “the enduring relevance of building a people-centred, inclusive, development-oriented Information Society.”
Same week. Real world.
NVIDIA confirms that the entire 2025 production of Blackwell chips is sold out until October. According to Morgan Stanley, in 2025 NVIDIA will consume 77% of the 535,000 global 300 mm wafers destined for AI – up from 51% in 2024. Its market share reaches 86% in AI training chips. Gross margin is 78%: a number any traditional manufacturer can only dream of. On 7 November 2024 in Taiwan, Jensen Huang states: “Currently, we are not planning to ship anything to China.” US export controls ban sales of Blackwell chips to Beijing. Nevertheless, Bloomberg and the Wall Street Journal document that Chinese firms are still buying the chips – via intermediaries registered in Singapore, Malaysia and Vietnam – at 600,000 dollars for an eight-GPU server, roughly double the official price of 240,000–320,000 dollars. Delivery time is around six weeks. Not bad, considering that officially they are sold out for a year.
In parallel, Huawei Ascend 910B and 910C are already powering 50% of the top Chinese large language models. At the same time, Cambricon Technologies posts its first-ever quarterly profits at the end of 2024 and revenues soar. The Chinese government orders that public computing hubs must purchase at least 50% of their chips from domestic suppliers. Moreover, China is deploying AI systems on 600 million cameras – Sharp Eyes, Skynet, Smart Cities – that identify ethnicity, gender and emotional state in real time. A CNN report on 4 December documents how AI is “turbocharging surveillance and control”: from predictive policing to AI-assisted courts, all the way to “smart prisons” where facial recognition monitors inmates’ expressions and triggers interventions if they “look angry.”
In practice, while universal agreements are celebrated in Riyadh, real governance is happening elsewhere. In TSMC’s cleanrooms where wafers become geopolitical destiny. In US export control decisions that weigh more than any UN resolution. In clandestine networks that circumvent bans via Singapore. In silicon, not in soft law. In this scenario, the platform society is not a neutral backdrop: it is the material stage where these decisions harden into infrastructure.
At this point, the question becomes inevitable: where is the architecture of global governance?

The Power of Data: Mapping and Continuous Evolution
Architectures of Global AI Governance: From Technological Change to Human Choice by Matthijs M. Maas (Oxford University Press, 2025) is probably the most comprehensive academic mapping of attempts to build institutional systems to govern artificial intelligence in the platform society. It offers more than three hundred pages of meticulous analysis: regime complexity, governance disruption, sociotechnical change. Maas maps treaties, institutions, soft law, proposals for an AI IAEA, for Digital Geneva Conventions, for multilateral frameworks inspired by the IPCC. He also documents summits – Bletchley Park UK 2023, Seoul 2024, the G7 Hiroshima AI Process, ITU’s AI for Good in Geneva. Finally, he catalogues initiatives: OECD AI Principles, UNESCO Recommendation on the Ethics of AI, EU AI Act, Biden’s Executive Order, proposals for a World AI Organization.
However, beneath this institutional map there pulses a truth that the book documents without naming explicitly: AI governance is already dead before it is born. Not because there are no smart proposals or competent experts. Rather, because the architecture of digital power has already been built inside the platform society – and no one asked international institutions for permission.
As the Carnegie Endowment notes in “The AI Governance Arms Race: From Summit Pageantry to Progress?” (October 2024): “In a rush to be the first to regulate – or, in some cases, to avoid regulation – countries risk creating a confusing web of summits and initiatives that undermine the goals of a coherent global AI governance.” In fact, summits are “summit pageantry” – a theatre of diplomacy that produces voluntary commitments, not binding agreements. The 2023 Bletchley Declaration acknowledged the risks but did not propose pauses in development. In turn, Seoul 2024 secured signatures from 16 AI companies on Frontier AI Safety Commitments – again, voluntary and non-binding.
The problem, therefore, is not technical. It is structural. To really understand it, we need to look beyond international lawyers and turn to theorists of digital power in the platform society.
The Algorithmic Panopticon: The Power of Data
From discipline to continuous control
Maas describes “governance disruption” as if AI were interrupting pre-existing normative systems. Michel Foucault, instead, reveals what is actually happening. The French philosopher taught that modern power is not something one possesses – it is something one exercises. It does not operate through prohibition but through production: it produces truths, knowledges, subjectivities. Consequently, AI does not disrupt this logic; it perfects it.
The Foucauldian concept of “governmentality” – the art of governing populations through devices that make them calculable, predictable, optimisable – finds in artificial intelligence its ideal technical substrate. When Maas speaks of “infrastructure overhang” (ch. 1.3.2.3) – the condition in which AI can be deployed instantly because pre-existing digital infrastructures are already in place – he is unwittingly describing how disciplinary power has already materialised in CCTV cameras, government databases and urban sensors installed over the past twenty years.
China’s hundreds of millions of surveillance cameras did not become instruments of AI surveillance when facial recognition was added. They already were. AI simply made visible what Foucault called the “Panopticon” in Discipline and Punish (1975) – that power structure in which the possibility of being observed produces self-discipline. There is, however, a crucial difference: Bentham’s panopticon required the presence (or belief in the presence) of a human guard. The algorithmic panopticon, by contrast, operates 24/7, analyses behavioural patterns that no human could detect and predicts crimes before they occur.
The Power of Data: From Disciplinary Societies to the Platform Society
In his “Postscript on the Societies of Control” (1992), Gilles Deleuze theorised the shift from disciplinary societies (confinement: prisons, hospitals, factories) to control societies (continuous modulation through open flows). Today this dynamic lives within the platform society: platforms modulate access, visibility, income and reputation in real time. Byung-Chul Han, in Psychopolitics (2017), pushes the diagnosis further: we are no longer in Foucault’s biopower (control of bodies), we are in psychopower – a form of control that does not feel like control because it operates through our voluntary consent, our likes, our clicks, our quantified selves. “Intelligent power,” Han writes, “is non-aggressive; it seduces instead of forbidding.”
Meanwhile, as international lawyers draft frameworks to govern this technology, the platform society already governs one and a half billion people. AI governance is not a future challenge to be solved. It is an operative reality that pretends to still be a project.
Castells and the Power of Data: The Real Value of Infrastructure
In his trilogy The Information Age (1996–2010), Manuel Castells gave us the tools to understand why traditional institutions fail in the face of AI. In that theoretical frame he distinguished between:
- “Space of places” – the territorial power of nation-states, physical institutions, diplomacy;
- “Space of flows” – the power that operates through flows of information, capital, data that cross borders without asking permission.
Maas catalogues the “regime complexity” of AI governance – this proliferation of institutions, norms and soft law that overlap without coordination (ch. 6). In reality, complexity is not a bug. It is a strategy. While the UN, OECD, Council of Europe, G7 and US–EU coalitions debate who should sit at the diplomatic table, real power operates elsewhere.
NVIDIA does not sit at diplomatic tables yet decides which compute will be available and to whom. Microsoft Azure, Amazon AWS and Google Cloud do not negotiate treaties yet control the infrastructure on which every AI model of the next decade will run. Therefore, they do not need summits: they are already the infrastructure of the platform society.
From the Network Society to the Platform Society
Castells wrote about the network society. Today José van Dijck, Thomas Poell and Martijn de Waal explicitly speak of a platform society: a society in which media, logistics, financial and social platforms become critical infrastructure. In this platform society, power lies not only in networks but in the ability to program and reprogram the platforms themselves.
Castells distinguished four forms of power in the network society:
- Network power – the power to be in the network (who gets excluded?);
- Networked power – power within the network (who controls the critical nodes?);
- Networking power – the power to connect/disconnect different networks;
- Network-making power – the power to program/reprogram the networks themselves.
In light of this typology, who exercises these powers in the AI ecosystem?
The US uses export controls on semiconductors to disconnect China from the global network of advanced compute (networking power). NVIDIA decides who can access its H100 and Blackwell chips (networked power). Big Tech writes de facto standards through proprietary APIs, frameworks and interfaces (network-making power). TSMC in Taiwan controls 63% of CoWoS packaging capacity needed to produce practically all advanced AI chips (network power).
Meanwhile, international institutions produce PDFs. Declarations. Riyadh IGF Messages. As Castells writes: “the power of flows takes precedence over the flows of power.” In other words, those who control material infrastructure beat those who write the rules. AI governance operates in the space of places. AI power operates in the space of flows and within the platform society. There is no contest.
Invisible Actors: Latour, Bratton and Hidden Materiality
Through Actor–Network Theory, Bruno Latour taught us to trace networks that include non-human actors. Science and technology studies show that technical objects are not inert – they are actants that modify action and co-construct the social. Yet Maas, like most international lawyers, traces networks of human actors only: states, institutions, NGOs, companies, individuals.
So where is the mapping of the real actors of AI governance in the platform society?
Where is TSMC’s 3-nanometre process – the most advanced manufacturing process in the world, mastered only in Taiwan, which turns the island into the most critical geopolitical bottleneck on the planet? Where are the submarine cables that carry 99% of transcontinental internet traffic, controlled by a small oligopoly and vulnerable to sabotage? Where are the data centers that consume 1.5% of global electricity, require entire rivers for cooling and concentrate compute power in a few geographies? Where is the cobalt mined in Congo by underpaid workers to produce the batteries that power the mobile infrastructure on which edge AI runs?
The Stack as the Real Digital Constitution
In The Stack: On Software and Sovereignty (2015), Benjamin Bratton mapped the true architecture of digital power through six overlapping layers:
- Earth – physical resources (minerals, energy, territory);
- Cloud – computational infrastructure (data centers, networks);
- City – urban implementation (smart cities, sensors, surveillance);
- Address – protocols and standards (TCP/IP, DNS, AI protocols);
- Interface – how humans interact (UIs, APIs, LLM interfaces);
- User – subjectivities captured and profiled.
Maas speaks of “architectures of global AI governance,” but these institutional architectures float powerlessly above the material Stack. As Bratton notes, the Stack is already a form of governance – it is “a totalized system of planetary-scale computation.” Governing AI without controlling the Stack is like making laws about gravity.
In a complementary vein, Kate Crawford traces in Atlas of AI (2021) the extractive chains hidden behind every model: minerals (lithium, cobalt, rare earths), invisible human labour (Kenyan clickworkers labelling data for 2 dollars an hour), energy consumption (training GPT-3 emitted an estimated 552 tons of CO2), environmental impact (aquifer depletion for data center cooling). Her thesis is clear: “AI is neither artificial nor intelligent” – it is deeply material and extractive.
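To see where a figure like “552 tons” comes from, the arithmetic can be made explicit. A minimal sketch, assuming the energy estimate (~1,287 MWh for the GPT-3 training run) and the average US grid carbon intensity (~0.429 kg CO2 per kWh) reported by Patterson et al. (2021) – both inputs are assumptions of this illustration, not figures taken from Crawford’s book:

```python
# Back-of-envelope CO2 estimate for a large training run.
# Assumed inputs (Patterson et al., 2021, estimates for GPT-3):
ENERGY_MWH = 1_287           # total energy consumed by the training run
GRID_KG_CO2_PER_KWH = 0.429  # average carbon intensity of the grid used

energy_kwh = ENERGY_MWH * 1_000
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1_000

print(f"~{emissions_tonnes:,.0f} t CO2")  # ~552 t, the figure cited above
```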
Meanwhile, governance talks about “ethics by design” and “human-centric AI” while ignoring those who mine cobalt in Congolese pits. “Global governance architectures” worry about algorithmic bias but not about the supply chain that makes the very existence of algorithms possible in the platform society.
Surveillance Capitalism Finds Its Infrastructure
The Power of Data: An Extractive Business Model
Shoshana Zuboff theorised “surveillance capitalism” in The Age of Surveillance Capitalism (2019) – the unilateral extraction of human behavioural data to produce predictions sold to third parties. It is the business model of Google, Facebook, Amazon: monitor, capture, analyse, predict, modify behaviour. Zuboff called it “a rogue mutation of capitalism.”
Maas documents how AI fits perfectly onto this already built extractive infrastructure. He calls it “infrastructure overhang” and describes it as a technical characteristic: AI can scale quickly because it can leverage pre-existing digital infrastructures. However, this is more than a technical trait. It is, in fact, a form of political architecture.
A Software Update on an Already Wired World
Decades of constructing government databases (population registries, criminal records, tax archives) have made populations perfectly indexable. In parallel, the proliferation of urban sensors (cameras, traffic sensors, Wi-Fi tracking) has turned cities into trackable spaces. In addition, the normalisation of private surveillance (our homes full of Alexa, Ring doorbells, Nest cams, smartwatches tracking heart rate and sleep) has created a continuous data collection infrastructure.
This infrastructure created the perfect substrate. AI did not have to convince anyone to install cameras or surrender data. Everything was already there. As Maas notes (ch. 1.3.2.3): while electrification required decades of infrastructural rollout to reach users, AI leverages already deployed infrastructures. The result is that a surveillance system can move from “dumb” (traditional CCTV cameras that merely record) to “smart” (AI facial recognition that identifies, tracks, predicts) with a simple software update.
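To make the “simple software update” concrete: a minimal sketch, assuming nothing more than OpenCV’s stock Haar-cascade face detector and a generic camera feed (the camera index and the detector choice are illustrative placeholders, not a description of any real deployment):

```python
import cv2  # pip install opencv-python

# The "dumb" layer: any already-installed camera stream
# (index 0 = local webcam; a real CCTV feed would be an RTSP URL).
cap = cv2.VideoCapture(0)

# The "update": a stock face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # the camera now identifies, not just records
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("smart-cctv", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The hardware does not change; the shift from recording to identifying happens entirely in software, which is exactly what “infrastructure overhang” means in practice.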
At this stage, a question that AI governance never asks becomes unavoidable: can there be democratic governance of AI without first governing the extractive capitalism that powers it in the platform society? Can international institutions regulate systems whose business model structurally depends on the accumulation of behavioural data without genuine informed consent?
Evgeny Morozov calls this dynamic “digital colonialism.” The Global North designs AI governance institutions (with the expertise, the conferences, the papers published in Nature). The Global South, meanwhile, provides the data (through platforms that extract it for free), the invisible labour (data labelling, content moderation) and the territories for mining (cobalt, lithium, rare earths), and then endures surveillance implemented with technologies it never voted for. In practice, “global AI governance” replicates colonial structures and calls them multilateralism.
Data Power and Predatory Formations
In Expulsions: Brutality and Complexity in the Global Economy (2014), Saskia Sassen gave us the concept of “predatory formations” – institutional configurations that seem to build order but in reality extract value through destruction. They look like governance but operate as expulsions: from the economic system, from social citizenship, from dignity.
In the same way, AI governance is a predatory formation: it appears to build rules to protect citizens, but actually extracts legitimacy for a control system already in motion. It produces the illusion of democratic participation (“multistakeholder dialogue,” “public consultation”) while critical decisions are taken elsewhere – in chipmakers’ boardrooms, in export control agencies, in frontier AI labs that ship models faster than parliaments can legislate.
The Three Disruption Scenarios That Are Already Reality
Maas candidly documents this dynamic when he describes three possible “governance disruption” scenarios (ch. 5):
- Development – AI creates new regulatory gaps, ambiguities, overlaps between existing regimes;
- Displacement – AI replaces human governmental functions with automation (automated enforcement, AI judges, predictive policing);
- Destruction – AI erodes the political conditions for governance itself (concentration of power in a few actors, erosion of state capacity).
However, these scenarios do not describe a hypothetical future. In reality, they are already here. They are operative. Development: while the world debates how to regulate AI, companies are shipping GPT-5, Claude Opus 4, Gemini Ultra. The speed of innovation outpaces legislative speed by orders of magnitude. Displacement: algorithms already decide who receives welfare benefits, who gets stopped by the police, who obtains credit, who gets hired. Destruction: the concentration of compute power in the hands of three or four companies is already eroding states’ regulatory capacity. How do you regulate a system you cannot understand (a black box), cannot verify (proprietary), cannot inspect (trade secret)?
Consequently, who is being “expelled” from AI governance? Billions of people in the Global South, surveilled by AI systems they never voted for, tracked for training data they never consented to, impoverished by automation they never debated.
The Asymmetry Between States and Platforms: The Power of Data
In Seeing Like a State (1998), James C. Scott showed how modernising states impose “legibility” on society to be able to govern it: land registries (making property legible), censuses (making populations legible), standardisation (making local practices legible). State power requires simplification, mapping, quantification.
Today, however, AI has created a devastating reversed asymmetry: it has made society illegible to the state but perfectly legible to the owners of the algorithms that govern the platform society.
When Maas describes “governance disruption,” he is documenting the historical moment in which the state discovers it has become blind. It does not understand how GPT-4 makes decisions (it is a black box even to those who created it). It does not know how much data Google has collected on each citizen (that is a trade secret). It cannot verify whether algorithms discriminate (they are proprietary, and inspection is IP theft). It cannot even see the code of the systems that make decisions about its citizens’ lives.
Algorithmic transparency requirements in laws such as the GDPR or the EU AI Act clash with structural opacity: machine learning produces systems that nobody truly understands, not even their creators. Emergent capabilities, inscrutable representations, alignment failures. As Anthropic’s researchers write: “We don’t fully understand how these models work.”
In the meantime, Google sees everything. Not because it is evil, but because the business model of platform capitalism requires total information asymmetry. Zuboff called it the “surveillance dividend” – profit derived from seeing what others do not see, from knowing what others do not know.
Ultimately, AI governance comes to a table where it has no cards to play. It lacks technical expertise (concentrated in a handful of private companies). It lacks economic leverage (concentrated in control of cloud infrastructure). It lacks speed (legislative cycles last years, AI development cycles last months and deployments are instantaneous). It lacks even basic visibility over what it is supposed to govern. At that point, the question repeats itself: how can you govern what you cannot see?
The Real Architecture: Export Controls > Soft Law
Let us return to where we started. NVIDIA. 86% of the AI chip market. 77% of the global wafer supply destined for AI in 2025. The entire Blackwell production sold out until Q4 2025.
Maas’s book documents hundreds of governance initiatives: the Bletchley Declaration, the Seoul Summit, the Hiroshima AI Process, the UNESCO Ethics Recommendation, the OECD AI Principles, the Global Digital Compact, proposals for a World AI Organization, the International Scientific Panel on AI, the Annual Global Dialogue. Thousands of person-hours of diplomacy. Tons of paper (or rather, PDFs).
Yet a single US decision on export controls has more impact on the future of AI than any UN resolution. When Washington says “NVIDIA cannot sell Blackwell to China,” that is real governance. When Jensen Huang declares “Currently, we are not planning to ship anything to China” (7 November 2024), he exercises more power over the global AI landscape than 11,000 participants in Riyadh.
An Oligarchy Inscribed in Silicon
As Langdon Winner notes in “Do Artifacts Have Politics?” (1980), technologies are not neutral – they are politics congealed into silicon. The physical architecture of AI – centralised (it requires massive data centers), energy-intensive (gigawatts of power), capital-intensive (billions of dollars in training costs) – determines its governance more than any institutional architecture.
A system that requires billions of dollars of compute to be trained naturally tends toward monopoly and concentration. You do not need Marxist critical theory to see it; the numbers are enough: over 100 million dollars for a single frontier model training run. Only a few organisations in the world can afford that: OpenAI (backed by Microsoft with 13 billion dollars), Google DeepMind, Anthropic (backed by Amazon with 4 billion + Google with 2 billion), Meta, xAI (Musk), some Chinese firms. AI is intrinsically oligopolistic due to material constraints, not regulatory choices.
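The hundred-million figure can be sanity-checked with the standard back-of-envelope from the scaling-law literature: training compute ≈ 6 × parameters × tokens, divided by effective GPU throughput, multiplied by a rental price. A minimal sketch – every number below is an illustrative assumption, not a disclosed figure from any real training run:

```python
# Rough cost model for a hypothetical frontier training run.
params = 1.0e12   # model parameters (assumed)
tokens = 15.0e12  # training tokens (assumed)
flops = 6 * params * tokens  # "6ND" heuristic -> 9e25 FLOPs

peak_flops_per_gpu = 1.0e15  # H100-class accelerator, BF16 (approximate)
utilization = 0.40           # realistic fraction of peak actually achieved
usd_per_gpu_hour = 2.50      # assumed cloud rental price

gpu_hours = flops / (peak_flops_per_gpu * utilization) / 3600
cost_usd = gpu_hours * usd_per_gpu_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost_usd / 1e6:,.0f}M")  # ~$156M
```

Under these assumptions a single run lands in the low hundreds of millions of dollars, which is why the club of organisations that can afford one is so small.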
Moreover, this oligopoly controls not only the models but the infrastructure itself. AWS, Azure and Google Cloud together control over 60% of global cloud computing. Six hyperscalers (US + China) dominate AI data centers. TSMC produces over 90% of advanced chips. ASML (Netherlands) holds a monopoly on EUV lithography machines required to produce chips below 7 nm.
Seen in this light, AI governance is not failing. It simply cannot fail because it was never truly possible. “Architectures of global AI governance” are printed words floating above a material infrastructure that has already solidified. It is as if we tried to regulate gravity by organising conferences on theoretical physics, while the platform society keeps running on its own.
And Now? The Courage of Intellectual Honesty
Naming the Power of Data for What It Is in the Platform Society
Maas concludes his book with a call to action: improve international coordination, strengthen enforcement mechanisms, build new institutions. It is a noble intent. Perhaps a necessary one. Yet the most valuable lesson his work offers may be the one he did not intend to teach: the impossibility of the architecture he documents is the necessary revelation that finally shows where real power lies.
In other words, real AI power in the platform society lies elsewhere:
- Not in treaties, but in chips.
- Not in conferences, but in data centers.
- Not in soft law, but in silicon.
- Not in the space of places, but in the space of flows.
- Not in governance architectures, but in the material Stack and the platforms that orchestrate it.
From the Myth of Regulation to the Politics of Materiality
This does not mean resignation. It means, rather, intellectual honesty. It means stopping the pretence that 20th-century institutions can govern 21st-century infrastructures using 19th-century tools (international treaties, soft law, voluntary commitments).
Consequently, an honest AI politics should accept a few uncomfortable points:
- Perhaps the only honest governance is admitting that AI is not governed; it is either suffered or owned.
- States with compute sovereignty (the US – all the more so if TSMC’s Arizona scale-up succeeds – and China, with the EU at best a partial player) can negotiate; the others can only choose which master to serve.
- The central problem is not “how to govern AI,” but “how to govern extractive capitalism, digital colonialism, and the oligopolistic concentration of computational power that makes AI possible in the platform society.”
- Without tackling these structural knots, any “AI ethics” framework remains performative, decorative, a communication add-on.
It also means recognising that when IGF Riyadh celebrates “the first universal agreement on AI governance,” it is celebrating theatre. The real show is elsewhere. In TSMC cleanrooms where wafers become destiny. In boardrooms where decisions are made about who can buy Blackwell. In submarine cables that carry data traffic. In Congolese mines where cobalt is extracted.
The governance theatre can go on. The audience may even applaud. Standing ovations for the Global Digital Compact may fill official photos. However, the real performance is staged elsewhere.
And there – in those cleanrooms, in those 2-GW data centers, in those headquarters where Jensen Huang decides, in those export control offices where Washington disconnects Beijing – no diplomat has ever set foot. No International Scientific Panel can inspect. No Global Dialogue can influence.
The architecture is already built. You did not vote for it, but you inhabit it every day. Welcome to the Stack. Above all, welcome to the platform society.
The Power of Data: Sources and References
- Maas, M.M. (2025). Architectures of Global AI Governance: From Technological Change to Human Choice. Oxford University Press. ISBN 9780198877837. DOI: 10.1093/9780191988455.001.0001
- Internet Governance Forum, Riyadh (19 December 2024) – “Riyadh IGF Messages”
- UN Global Digital Compact (September 2024) – “First universal agreement on AI governance”
- Carnegie Endowment for International Peace (October 2024) – “The AI Governance Arms Race: From Summit Pageantry to Progress?”
- Seoul AI Safety Summit (May 2024) – “Frontier AI Safety Commitments”
- G7 Hiroshima AI Process (2023–2024) – “Hiroshima Process Comprehensive Policy Framework”
- Bloomberg (7 November 2024). “Nvidia CEO Says No Plans to Ship Blackwell AI Chips to China”
- Morgan Stanley (February 2025). “NVIDIA to consume 77% of wafers used for AI processors in 2025”
- Wall Street Journal (March 2025). “Chinese firms get Blackwell chips by ordering through nearby countries”
- Tom’s Hardware, The Register, Financial Times: coverage on NVIDIA export controls
- TechInsights (2024). “Data-Center AI Chip Market Q1 2024 Update”
- CNN (4 December 2024). “China’s censorship and surveillance were already intense. AI is turbocharging those systems”
- Georgetown CSET (2021). “China’s Sharp Eyes Program Aims to Surveil 100% of Public Space”
- Australian Strategic Policy Institute (December 2024). Report on AI-driven surveillance in China
- NPR (2019). “How China Is Using Facial Recognition Technology”
Theoretical Frameworks: The Power of Data
- Foucault, M. (1975). Surveiller et Punir: Naissance de la prison [Discipline and Punish]
- Deleuze, G. (1992). “Postscript on the Societies of Control.” October, 59
- Castells, M. (1996–2010). The Information Age Trilogy: The Rise of the Network Society, The Power of Identity, End of Millennium
- Latour, B. (2005). Reassembling the Social: An Introduction to Actor–Network Theory
- Bratton, B. (2015). The Stack: On Software and Sovereignty. MIT Press
- Winner, L. (1980). “Do Artifacts Have Politics?” Daedalus, 109(1)
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs
- Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press
- Sassen, S. (2014). Expulsions: Brutality and Complexity in the Global Economy. Belknap Press
- Han, B.-C. (2017). Psychopolitics: Neoliberalism and New Technologies of Power. Verso
- van Dijck, J., Poell, T., de Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford University Press