Algorithms Decoded: From Mathematical Abstraction to Computational Power

Algorithms Decoded: the dream of calculable reason

In the 17th century, Gottfried Wilhelm Leibniz conceived a project as ambitious as it was utopian: the Characteristica Universalis, a universal language of thought capable of reducing every line of reasoning to calculable operations. His Calculus Ratiocinator was meant to mechanize logic itself, turning philosophical controversies into simple arithmetical computations: Calculemus!, Leibniz proposed, inviting us to resolve disagreements through calculation rather than dispute.

This rationalist vision – the belief that thought itself can be formalized into mechanical procedures – runs through two centuries of the history of logic and mathematics until it materializes, in the 20th century, in computational devices that now govern every aspect of our lives. But what materializes is not the universal rationality Leibniz imagined: it is a particular, situated rationality, embedded in the power relations of digital capitalism. This is where the idea of algorithm decoded becomes necessary: decoding how these models of calculation are transformed into infrastructures that govern lives.

Theoretical genealogy: from the crisis of foundations to the computational turn

The roots: Al-Khwarizmi and systematic procedure

Before it became code, the algorithm was a procedure. The term’s etymology comes from the name of the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī (9th century), whose arithmetic treatises were translated into Latin as Algoritmi de numero Indorum. In this pre-modern phase, the algorithm designates a finite sequence of systematic arithmetical operations to solve classes of problems – division, extraction of roots, the solution of equations. It is a technique, not yet a theory.

The Entscheidungsproblem: Hilbert’s programme

In 1928, David Hilbert explicitly revived Leibniz’s project by formulating the famous Entscheidungsproblem, the decision problem. Within his broader programme, Hilbert also sought to prove the consistency of a formal system built on the axioms of arithmetic, extending the principle of non-contradiction already invoked by Gergonne for the “axioms” of Euclidean geometry.

Hilbert’s programme was ambitious: to prove that there exists an algorithm capable of mechanically determining, for every well-formed mathematical statement, whether it is provable or unprovable – and thus, on Hilbert’s assumptions, whether it is true or false. If such an algorithm existed, Leibniz’s dream would be fulfilled: mathematical truth would become a question of automatic calculation.

The negation: Gödel and Church

But this turns out to be impossible. In 1931, Kurt Gödel’s incompleteness theorems shattered Hilbert’s programme: the first theorem demonstrates that in any consistent formal system powerful enough to contain arithmetic there exist propositions that are true but unprovable within the system itself; the second, that such a system cannot prove its own consistency from within.

A few years later, Alonzo Church’s work confirmed the undecidability of first-order logic, proving the non-existence of an algorithm capable of determining, for every well-formed formula, whether it is logically valid.

This does not mean algorithms are useless. It means that their domain is more limited than Hilbert had hoped. Algorithms – understood as logical–arithmetical systems – can compute only recursive functions, through which complex problems are solved by reducing them to simpler sub-problems, in a chain of reductions that leads to meaningful results.

These functions are linked together by elementary logical steps: the result of one function becomes the argument of the next operation, and so on. A problem is therefore decidable only when it can be encoded in a decidable language – a language whose first-order logical structure can be established.
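
To make this chaining concrete, here is a minimal Python sketch – my own illustration, not part of the original argument – in which multiplication is reduced to repeated addition, and addition to repeated application of the successor function, so that the result of each simpler call becomes the argument of the next.

```python
# A minimal sketch (illustrative, not the historical formalism): building
# "complex" arithmetic out of chains of simpler recursive functions.

def successor(n: int) -> int:
    """The elementary step: n -> n + 1."""
    return n + 1

def add(a: int, b: int) -> int:
    """Addition reduced to repeated application of the successor function."""
    return a if b == 0 else successor(add(a, b - 1))

def multiply(a: int, b: int) -> int:
    """Multiplication reduced to repeated addition: the result of one call
    becomes the argument of the next."""
    return 0 if b == 0 else add(a, multiply(a, b - 1))

print(add(2, 3))       # 5
print(multiply(4, 3))  # 12
```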

Turing: from computability to the universal machine

Around 1936, Alan Turing, reducing the question of computability to a mechanical procedure, independently confirmed the existence of undecidable problems by proving the existence of “non-computable” numbers through the so-called halting problem. This problem highlights the impossibility of knowing in advance whether an algorithm, operating on a given input, will reach a result or continue indefinitely without terminating.
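
The flavour of Turing’s argument can be sketched informally in code – a hypothetical construction, translated loosely from Turing-machine terms into Python: assume a universal decider existed and derive a contradiction.

```python
# Sketch of the diagonal argument behind the halting problem (informal,
# in Python rather than Turing-machine notation).

def halts(program, argument) -> bool:
    """Hypothetical decider: would return True iff program(argument) terminates.
    No such total, always-correct function can exist; this stub merely stands
    in for the assumption being refuted."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    # Do the opposite of whatever `halts` predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt immediately

# Feeding `paradox` to itself defeats any candidate decider:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Either way, `halts` is wrong.
```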

Although it is therefore impossible to build a calculator capable of proving the logicality of cognitive processes, Turing believed it was possible to build different calculating machines capable of representing, through elementary mechanisms, all relevant models of computation. Like Church, he did not consider it necessary to prove the internal formal logicality of a system in order to obtain logically meaningful answers: it was enough to ensure the correctness of the answer with respect to given reference parameters.

This epistemological shift is crucial. No longer the search for absolute formal truth, but the pragmatics of computation: an algorithm works if it produces correct outputs given certain inputs, regardless of whether its internal logic is formally provable. Truth is replaced by operational correctness. Here, the expression algorithm decoded means precisely this: reading behind operational correctness the power logics that structure these systems.

The Church–Turing thesis and the limits of algorithmic computation

This is how the so-called Church–Turing thesis takes shape: for every algorithm – every effective procedure that manipulates symbols in a meaningful way to obtain a result from the input data – there ideally exists a Turing machine (an abstract calculator) that carries it out. The universal Turing machine is thus an “ideal calculator” whose memory support is a linear, potentially infinite tape, capable of simulating any other such machine – and, for Turing, of simulating human and animal cognitive analysis – thereby reaching the limits of computation.

Data-processing operations in machines and in living beings therefore do not coincide, but machines can correctly simulate human cognitive processes and reach analogous results. A complex function whose internal formal correctness cannot be established algorithmically can be broken down into a finite series of elementary logical–mathematical steps, each of whose correctness rests on the result of the previous calculation, used as its parameter.

At this point, space and time emerge as measures of the “complexity” of a Turing machine’s computational model – of the resources it needs in order to provide answers. If we conceive time as the number of instructions given to and executed by the machine, and space as the number of cells used on the tape, these constitute the limits of computation. Not all problems are solvable in reasonable time and space: this is how complexity classes (P, NP, NP-complete) arise, distinguishing between computationally tractable and intractable problems.
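
As a rough illustration of these measures – my own toy sketch, not a formal model – the simulator below runs a simple Turing machine that increments a binary number and reports the “time” (steps executed) and “space” (tape cells used) it consumed.

```python
# Toy Turing machine simulator that counts time (steps) and space (cells used).

def run_turing_machine(tape, rules, state="scan", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); move is -1, 0 or +1."""
    cells = dict(enumerate(tape))
    head, steps, visited = 0, 0, set()
    while state != "halt" and steps < max_steps:
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        visited.add(head)
        head += move
        steps += 1
    output = "".join(cells[i] for i in sorted(cells)).strip(blank)
    return output, steps, len(visited)

# Rules for binary increment: walk right to the end, then add 1 with carry.
rules = {
    ("scan", "0"): ("0", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_", -1, "add"),
    ("add", "1"): ("0", -1, "add"),   # 1 plus carry -> 0, keep carrying
    ("add", "0"): ("1", 0, "halt"),   # 0 plus carry -> 1, done
    ("add", "_"): ("1", 0, "halt"),   # overflow: write a new leading 1
}

print(run_turing_machine("1011", rules))  # ('1100', 8, 5): result, steps, cells
```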

From abstraction to materialization: Von Neumann and the algorithm machine

The Turing machine was an ideal device, a mathematical abstraction. In 1945, John von Neumann designed the architecture that turned that theoretical model into an electronic computer: memory, processor, input/output units. The core intuition is the stored program: algorithmic instructions are stored as data in the same memory that holds the data to be processed. The algorithm thus becomes as manipulable as the data it processes: it can be modified, optimized, copied. It becomes software.
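
A toy sketch of the stored-program idea – a hypothetical mini-machine, not von Neumann’s actual design – in which instructions and data sit in the same memory, so the running program can read and even overwrite its own instructions like any other data.

```python
# Toy stored-program machine: code and data share a single memory.

memory = [
    ("LOAD", 7),     # 0: load memory[7] into the accumulator
    ("ADD", 8),      # 1: add memory[8] to the accumulator
    ("STORE", 9),    # 2: store the accumulator into memory[9]
    ("STORE", 1),    # 3: overwrite instruction 1 -- code treated as data
    ("HALT", None),  # 4: stop
    None, None,      # 5-6: unused
    2,               # 7: data
    3,               # 8: data
    None,            # 9: result cell
]

def run(memory):
    accumulator, program_counter = 0, 0
    while True:
        opcode, address = memory[program_counter]
        program_counter += 1
        if opcode == "LOAD":
            accumulator = memory[address]
        elif opcode == "ADD":
            accumulator += memory[address]
        elif opcode == "STORE":
            memory[address] = accumulator
        elif opcode == "HALT":
            return accumulator

print(run(memory))   # 5
print(memory[1])     # 5 -- the former ADD instruction, now just data
```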

Shannon: reducing the world to 0 and 1

In 1948, Claude Shannon published “A Mathematical Theory of Communication”, founding information theory. Shannon showed that any information can be encoded as sequences of binary bits (0 and 1), reducing the complexity of the world to discrete state changes. This is the definitive bridge between formal logic and physical implementation: the bit becomes the elementary unit with which algorithms manipulate digitized reality.
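
A minimal sketch of this reduction, using Python’s standard UTF-8 byte encoding as the assumed scheme: any text can be flattened into 0s and 1s and rebuilt from them.

```python
# Encode a message as a sequence of bits, then decode it back.

message = "Calculemus!"
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

print(bits[:32] + "...")   # the first four bytes, written as 0s and 1s
print(len(bits), "bits")   # 11 characters -> 88 bits

# The bits alone suffice to rebuild the original text.
raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(raw.decode("utf-8"))  # Calculemus!
```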

From elite tool to domestic opacity

From the 1950s to the 1980s, algorithms remained the prerogative of scientific and military elites: nuclear simulations, cryptography, ballistic calculations. They were specialist tools, visible only to those who programmed them. With the personal computer revolution in the 1980s and 1990s, algorithms spread into households but became opaque: graphical interfaces separated the user from the underlying logic. Clicking an icon hides thousands of lines of code. The algorithm became invisible precisely as it became ubiquitous.

The network turn: PageRank and the rule of algorithms

In 1998, Larry Page and Sergey Brin published the PageRank algorithm, which turned Google into the gatekeeper of global knowledge. It is no longer the user who decides which information is relevant: the algorithm classifies, ranks, makes things visible or invisible. This is the birth of algorithmic governmentality: the power to organize access to knowledge through computational procedures that present themselves as neutral, objective, mathematically necessary.
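
The core idea can be sketched in a few lines – a simplified power-iteration toy on a hypothetical link graph, not Google’s production system: a page’s importance is defined by the importance of the pages that link to it.

```python
# Simplified PageRank by power iteration over a tiny hypothetical web graph.

damping = 0.85
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

pages = list(links)
rank = {page: 1 / len(pages) for page in pages}

for _ in range(50):  # iterate until the scores stabilise
    new_rank = {page: (1 - damping) / len(pages) for page in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

# The ranking emerges from the link structure, not from any user's judgement.
print(sorted(rank.items(), key=lambda item: -item[1]))
```

Even in this toy version, relevance is an output of the algorithm’s recursive definition of importance, not of an individual reader’s choice – which is exactly the gatekeeping power described above.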

In parallel, recommendation systems (Amazon, Netflix, Facebook) introduce a qualitative mutation: the algorithm no longer retrieves what the user searches for, but predicts what the user will want. Behavioural profiling becomes the core business of digital capitalism: algorithms that analyse digital traces to build predictive models of human behaviour. Here, “algorithm decoded” means decoding who decides what you see, when you see it, and under what conditions of visibility you gain access to digital space.

Algorithm Decoded: from logical–mathematical abstraction to infrastructures of power that govern attention, access to information and social sorting.

The age of machine learning: algorithms that learn

Since the 2010s, machine learning has marked an epistemic rupture. Classical algorithms are sequences of explicit instructions written by human programmers. Machine learning algorithms, by contrast, learn patterns from massive datasets without the programmer explicitly specifying the rules. Artificial neural networks – computational architectures very loosely inspired by the functioning of the brain – are “trained” on millions of examples until they develop predictive capabilities.

A facial recognition algorithm does not contain rules such as “if the distance between the eyes is X and the shape of the nose is Y, then the subject is Z”. It contains millions of parameters (weights) that are progressively optimized during training to minimize classification error. The result is a black box: a system that works, but whose internal reasoning no one – not even its creators – can fully trace.
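
A deliberately tiny sketch of what such training amounts to – two weights instead of millions, on made-up data – in which parameters are repeatedly nudged in the direction that reduces classification error.

```python
# Toy "training loop": adjust weights by gradient steps to reduce error.
import math
import random

random.seed(0)
# Hypothetical labelled examples: (feature_1, feature_2) -> class 0 or 1.
data = [((0.2, 0.1), 0), ((0.4, 0.3), 0), ((0.8, 0.9), 1), ((0.7, 0.6), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the model's parameters, initially meaningless
learning_rate = 0.5

for _ in range(2000):
    (x1, x2), label = random.choice(data)
    prediction = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))  # sigmoid
    error = prediction - label
    # Gradient step: each weight moves slightly in the error-reducing direction.
    w1 -= learning_rate * error * x1
    w2 -= learning_rate * error * x2
    bias -= learning_rate * error

for (x1, x2), label in data:
    p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))
    print(f"features=({x1}, {x2})  label={label}  predicted={p:.2f}")
```

Scaled up to billions of parameters and layered architectures, the same logic produces systems whose individual weights no longer admit any human-readable interpretation – the black box described above.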

The paradox of incomprehensible simulation

We thus return, paradoxically, to an inverted version of Hilbert’s problem. Hilbert sought an algorithm capable of verifying the truth of every mathematical proposition. Today we have algorithms that solve complex tasks (machine translation, medical diagnosis, autonomous driving), yet we cannot formally prove why they work nor guarantee that they will always work. The Turing machine promised the simulation of cognition; deep learning achieves that simulation but renders it opaque even to its builders.

This opacity is not accidental. It is structural: with billions of parameters and distributed architectures, complexity becomes irreducible. It is also strategic: trade secrecy protects proprietary algorithms from public scrutiny.

Pervasive applications: classify, predict, optimize, generate

Today, algorithms unfold along four main functions:

  • Classification: pattern recognition to categorize entities (faces, texts, behaviours). Used in social media content moderation, surveillance systems, credit scoring.
  • Prediction: probabilistic estimation of future events based on statistical correlations in historical data. Applied in predictive policing, targeted advertising, risk-based insurance.
  • Optimization: searching for the optimal configuration of variables to maximize or minimize an objective function. It governs dynamic pricing in e-commerce, feed curation to maximize engagement (sketched in the example after this list), and just-in-time logistics.
  • Generation: producing synthetic content (texts, images, audio) that imitates patterns learned from existing corpora. This is the domain of generative AI such as GPT, DALL·E, Midjourney.
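
To make the optimization case concrete, here is a deliberately simplified sketch with hypothetical numbers, in which the only objective is predicted time-on-platform.

```python
# Feed curation as optimization: rank items purely by predicted engagement.

posts = [
    {"id": "local_news_update", "predicted_seconds_watched": 14},
    {"id": "viral_video",       "predicted_seconds_watched": 95},
    {"id": "friend_photo",      "predicted_seconds_watched": 30},
]

# The objective function is engagement, not informational value or well-being.
feed = sorted(posts, key=lambda post: post["predicted_seconds_watched"], reverse=True)

print([post["id"] for post in feed])
# ['viral_video', 'friend_photo', 'local_news_update']
```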

In all these cases, the algorithm is no longer a neutral tool but a socio-technical actor that produces reality: it decides who sees what, who gets credit, who is stopped by the police, which price is offered to which customer.

Critical implications: power, governmentality, political economy

The historical movement from Leibniz to today reveals a fundamental shift. Leibniz imagined the mechanization of universal, disinterested rationality in the service of truth. Hilbert sought absolute mathematical certainty. Turing formalized the limits of computation. But contemporary algorithms do not implement “pure rationality”: they implement situated rationality, embedded in specific economic and political objectives.

Facebook’s algorithm does not maximize user well-being or the quality of public debate: it maximizes “time on platform” because attention time is the commodity Facebook sells to advertisers. Amazon’s algorithm does not optimize customer satisfaction in the abstract: it optimizes profit through price variability and personalized offers. Predictive policing algorithms do not reduce crime: they concentrate police resources in already over-surveilled areas, amplifying systemic discrimination.

The critical question becomes: which rationality is being mechanized? And in whose interest?

Epistemic asymmetries: the appearance of objectivity

Algorithms present themselves as objective, neutral, mathematically necessary. “Data do not lie”, proclaims the rhetoric of data-driven decision making. But this presumed objectivity hides design choices, embedded values, power relations crystallized in code.

Algorithmic bias is not a malfunction but a structural consequence. If a hiring algorithm discriminates against women, it is because it was trained on historical data that reflect gender discrimination already present in the labour market. The algorithm automates and amplifies prejudice, presenting it as a neutral technical decision. As Cathy O’Neil notes in Weapons of Math Destruction, oppressive algorithms share three characteristics: they are opaque, they operate at scale, and they harm people without meaningful avenues for appeal.

Concentration of computational power

Producing advanced algorithms requires enormous computational resources (server farms, GPU clusters), massive datasets (billions of data points), and specialized expertise. This creates insurmountable barriers to entry that concentrate algorithmic power in a handful of corporations: Google, Meta, Amazon, Microsoft, Apple in the West; Alibaba, Tencent, Baidu in China.

This concentration has three main effects:

  1. Strategic opacity: proprietary algorithms are protected by trade secrets, shielded from public scrutiny and democratic accountability.
  2. Data extractivism: algorithms require mass surveillance as an operating condition. Every interaction becomes data to extract, every behaviour a trace to analyse.
  3. Network effects: algorithms improve with more data, creating self-reinforcing natural monopolies. Whoever has more users has more data, therefore better algorithms, therefore attracts more users.

Algorithmic governmentality: preemption and subjectivation

Drawing on Foucault, Antoinette Rouvroy speaks of algorithmic governmentality to designate a form of power that governs through prediction and preemption. It is no longer about disciplining bodies or modulating conduct (biopower), but about anticipating future behaviours on the basis of statistical correlations and intervening before the event occurs.

Predictive policing arrests “potential criminals” based on risk profiles. Credit scoring systems deny loans to individuals classified as “high default risk” before any default has actually occurred. Social media feeds modulate what we see in order to maximize engagement, shaping our preferences and identities.

This form of power produces algorithmic subjectivation: identities are constructed as sets of computational patterns. You are no longer an irreducible individual but a “demographic cluster”, a “behavioural profile”, a bundle of probabilities. As Rouvroy notes, this form of governmentality produces a collapse of contingency: the future is treated as already calculated, alternative possibilities as statistically unlikely and therefore politically irrelevant.

Social sorting – the automated categorization of populations into classes of risk, merit, value – becomes a mechanism of systemic discrimination that operates without any need for explicit discriminatory intent. It is the automation of social stratification.

Political economy of the algorithm

Algorithms are the means of production of digital capitalism. Like industrial machines in the 19th century, algorithms extract value from labour (today: attention, behavioural data, user-generated content) and concentrate it as profit in the hands of those who control computational infrastructure.

Algorithms are the machinery of this extraction.

Labour precarization passes through algorithms: Uber, Deliveroo, TaskRabbit use algorithms to assign tasks, evaluate performance, determine pay. Gig workers are subject to algorithmic management operating 24/7, without negotiation, without human supervision, with no real possibility of appeal. Digital Taylorism is more pervasive and relentless than its analogue predecessor.

Beyond technological determinism: what future for algorithms?

The movement from Leibniz to today shows how the rationalist dream of mechanizing thought has materialized while betraying its original promise. We have not obtained universal rationality but a rationality of profit. Not mathematical certainty but proprietary opacity. Not cognitive emancipation but new forms of computational subjection.

The theoretical limits demonstrated by Gödel, Church and Turing are systematically ignored in AI rhetoric. We talk about “general AI” that will soon reach human cognitive capacities, forgetting that there are formally undecidable problems, non-computable functions, intractable complexities. Tech hype erases awareness of the fundamental limits of computation.

Algorithms: which rationality?

If David Hilbert in 1928 sought an algorithm capable of determining mathematical truth, today algorithms produce regimes of truth – in Foucault’s formulation: not truth as correspondence with the real, but truth as an effect of procedures of power. Google’s algorithm determines what is relevant. Facebook’s algorithm determines what is viral. Credit scoring algorithms determine who is trustworthy. They do not simply describe reality: they perform it.

The history of the algorithm from Leibniz to today is the story of how a mathematical abstraction – the formalization of procedures of calculation – has materialized into an infrastructure of power. The task of critique is twofold: on the one hand, to deconstruct the rhetoric of algorithmic objectivity by exposing the political choices embedded in code; on the other, to imagine and build alternative forms of computation that serve emancipation rather than value extraction.

The issue is not whether to use algorithms – they are now an unavoidable infrastructure of contemporary societies – but which rationality algorithms implement and in whose interest. Decoding this rationality, challenging its claims to neutrality, resisting its naturalization: this is the critical work that must be done. In other words, algorithm decoded is a political project before it is a theoretical one.

Because if it is true, as Turing argued, that machines can simulate human thought, it is just as true that human beings can refuse to think like machines.
