Machine Learning: The Social Ontology Embedded in Code
Machine learning, real-world cases, and the politics hidden behind technique
The Democratic Promise and the Asymmetric Reality
Arthur Samuel described machine learning as the field that gives computers "the ability to learn without being explicitly programmed", a phrase that carries a seductive, almost emancipatory resonance. The idea of machines that learn autonomously conjures the image of a neutral, objective intelligence, capable of transcending the limits of human programming.
But that narrative conceals a more uncomfortable truth: machines do not learn in a vacuum. They learn from the data we feed them, and that data is always already saturated with every prejudice, every power asymmetry, every social hierarchy that produced it.
The problem does not end with dataset quality. Even if data were “clean” — assuming such purity is conceivable in a stratified society — the very architecture of neural networks, the computational infrastructure, and the protocols of access and distribution are subordinated to the proprietary interests of those who control the digital ecosystem. Machine learning is not an autonomous technology: it is a power device implemented through proprietary networks.
Machine Learning and Algorithmic Bias: A Feature, Not a Bug
Algorithmic bias is not a malfunction to be fixed in some future, more ethical version of the software. It is a structural feature of machine learning when applied to deeply unequal societies.
Consider predictive policing systems: they concentrate surveillance in areas that are already over-policed, creating feedback loops that confirm and amplify the initial assumptions, turning them into self-fulfilling prophecies. More surveillance generates more arrests; more arrests “justify” more surveillance. The loop closes, and epistemic violence becomes normalized.
Marvin Minsky, one of the founders of artificial intelligence, understood that every knowledge-representation system necessarily embeds a model of the world. His theory of “frames” shows how every algorithm is always already loaded with ontological assumptions. There are no neutral models: every model embeds a social ontology, a hierarchy of values translated into a mathematical function.
No one ever voted for that ontology. No one publicly debated whether this is the society we wanted to build. Algorithmic governance operates like an epistemological coup: it replaces political debate with mathematical optimization.
Architecture as Ideology: Fuller and the Geometry of Power
Buckminster Fuller taught us that architecture is never neutral: every structure embeds a value system, a worldview, a distribution of resources and possibilities. Applied to digital infrastructure, that lesson means something simple: design is governance.
Big Tech built a model that made previous forms of communication and social organization obsolete — not to emancipate humanity, but to extract value from every interaction, every datum, every trace of digital existence. Platform architecture — ranking, recommendations, attention economies — is not a technical detail. It is ideology turned into structure.
Fuller spoke of “tensegrity”: a structural integrity that distributes forces across a system. Contemporary digital infrastructure operates through an inverted tensegrity: it concentrates power in a few proprietary nodes while distributing responsibility (and risk) across billions of users. Data is extracted from below; value accumulates above; negative externalities are dumped onto society.
Colombo and the Broken Communication Circuit
Fausto Colombo argued that digital convergence was not a simple technological evolution, but a restructuring of the communication circuit. Control over communicative infrastructure is always also control over the production of social meaning.
In the context of machine learning, this means whoever controls the models — who decides what data to collect, how to preprocess it, what objectives to optimize, what metrics to maximize — also controls the parameters through which reality is interpreted and acted upon.

Artificial Intelligence as a Regime of Truth
Following Foucault, we can say machine learning functions as a regime of truth: a system that produces truth, that defines what counts as valid knowledge and what can be asserted legitimately. But unlike the regimes of truth Foucault studied — at least partially visible, debated, contested — the algorithmic regime often operates as an epistemic black box.
Opacity is not merely a temporary technical limit: in many complex systems it is intrinsic. This is where Kate Crawford’s lens becomes decisive: AI as a material, extractive infrastructure — labor, resources, data — presented as inevitable technological destiny.
Predictive Policing: Epistemic Violence Formalized
Predictive policing is the paradigmatic case: the model is trained on historical data reflecting decades of stratified policing. It “learns” that certain neighborhoods and populations are “high risk,” concentrates surveillance there, produces more arrests, feeds those new arrests back into the dataset: the loop reinforces itself.
This is not a bug: it is the intended functioning of a system that turns correlations into apparently natural causalities. It legitimizes structural violence through the authority of mathematics.
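To make the mechanics of that loop concrete, here is a deliberately minimal sketch in Python. Every number in it is an illustrative assumption, not data from any real deployment: two districts with an identical underlying incident rate, a skewed historical record, and a winner-take-most patrol allocation standing in for the "prediction". The narrow point it demonstrates is that when recorded incidents depend on where you look, and where you look depends on past records, an initial asymmetry reproduces and widens itself year after year.

```python
import random

# Toy model of the feedback loop described above: two districts with the SAME
# underlying incident rate, but a biased historical record that sends more
# patrols to district A. All numbers here are illustrative assumptions.

TRUE_RATE = 0.05        # identical true incident rate per patrol, both districts
TOTAL_PATROLS = 1000    # patrols to allocate each year

recorded = {"A": 120, "B": 80}   # skewed starting record, not ground truth
random.seed(42)

for year in range(1, 11):
    # "Predictive" allocation: send most patrols where past records say the risk is.
    hot = max(recorded, key=recorded.get)
    patrols = {d: (0.8 if d == hot else 0.2) * TOTAL_PATROLS for d in recorded}

    for d in recorded:
        # Recorded incidents scale with how much you look, not with reality.
        hits = sum(random.random() < TRUE_RATE for _ in range(int(patrols[d])))
        recorded[d] += hits

    print(f"year {year:2d}: recorded A = {recorded['A']:4d}   B = {recorded['B']:4d}")
```

Nothing in the loop ever consults the underlying reality again; the record becomes its own justification.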
Machine Learning: A Hierarchy of Values Turned into a Mathematical Function
Every machine learning model embeds a loss function: a metric defining what it means to “learn well.” But who chooses the metric? Who sets the trade-offs between accuracy and fairness, efficiency and privacy, performance and interpretability?
These are political questions. Yet they are resolved inside private corporations by engineering teams guided by profit optimization and accountability to shareholders — not to citizens.
The outcome is a value hierarchy that reflects the interests of digital capital: maximizing engagement becomes more important than informational integrity; optimizing conversion becomes more important than cognitive autonomy; cutting operational costs becomes more important than guaranteeing labor dignity.
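To see how literally a hierarchy of values becomes a mathematical function, consider the sketch below. It is a toy example on synthetic data, not anyone's production objective: the fairness term (a crude demographic-parity gap) and the weight lambda are hypothetical choices, which is precisely the point. Shifting a single scalar decides how much a disparity between groups "costs" relative to raw predictive error.

```python
import numpy as np

def total_loss(y_true, y_score, group, lam):
    """Mean squared error plus a weighted penalty on the gap between the
    average scores assigned to two groups (a crude demographic-parity proxy)."""
    error = np.mean((y_true - y_score) ** 2)                   # "accuracy" term
    gap = abs(y_score[group == 0].mean() - y_score[group == 1].mean())
    return error + lam * gap                                   # lam is the value judgment

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000).astype(float)                # synthetic outcomes
group = rng.integers(0, 2, 1000)                               # synthetic group labels
# A hypothetical model that scores group 1 systematically higher than group 0:
y_score = np.clip(0.6 * y_true + 0.2 * group + rng.normal(0, 0.1, 1000), 0, 1)

for lam in (0.0, 1.0, 10.0):
    print(f"lambda = {lam:4.1f}  ->  loss = {total_loss(y_true, y_score, group, lam):.3f}")
```

Whether that weight is 0, 1, or 10 changes nothing in the mathematics of optimization and everything about whose interests the optimum serves; choosing it is the political act that gets filed as an implementation detail.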
Toward a Critical Ecology of Algorithms
The problem is not machine learning as a technique — which can be useful, for example, in medical diagnostics or climate modeling. The problem is machine learning as a power device, deployed on proprietary networks, oriented to value extraction, and removed from any democratic control.
A critical ecology of algorithms must work along four axes:
- Radical transparency: opening the entire pipeline (collection, training, deployment, continuous auditing).
- Democratic accountability: participatory governance and a right to contest for impacted communities.
- Public infrastructure: commons-based alternatives to extractive platforms.
- Critical literacy: not only technical understanding, but political literacy of the ontologies embedded in code.
Conclusion: Decode. Resist. Reclaim.
Machines do learn. But they learn to replicate and amplify existing power structures. They learn to turn lived experience into data points. They learn to predict behavior in ways that transform prediction into prescription. They learn, ultimately, to govern without being governed.
Machine Learning: Essential Sources
- Marvin Minsky, A Framework for Representing Knowledge (1974). MIT AI Lab Memo / MIT DSpace.
- Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021/2022). Yale University Press.
- Kristian Lum & William Isaac, "To predict and serve?" (2016). Significance, Royal Statistical Society (Wiley).
- Tarleton Gillespie, "The Relevance of Algorithms" (2014). In Media Technologies, MIT Press.
- Michel Foucault, Power/Knowledge: Selected Interviews and Other Writings, 1972–1977, ed. Colin Gordon (1980). Includes "Truth and Power" and the "regime of truth" concept.
- Shoshana Zuboff, The Age of Surveillance Capitalism (2019). Profile Books.
- Cathy O'Neil, Weapons of Math Destruction (2016). Penguin Random House.
Tags: #MachineLearning #AlgorithmicBias #ArtificialIntelligence #PredictivePolicing #DigitalInfrastructure #SurveillanceCapitalism #MediaTheory #DigitalPoliticalEconomy
Keywords: machine learning, algorithmic bias, social ontology, Marvin Minsky, Buckminster Fuller, Fausto Colombo, predictive policing, digital infrastructure, algorithmic governance, digital platforms, surveillance capitalism