Against a Single Idea of Intelligence: What Is AGI, Really?

While intelligence can be framed as the capacity to generate predictive inferences through analytical models, or as the ability to create new and diverse associations across data and contexts, the real mistake is to demand a single, universal definition as a precondition for recognizing Artificial General Intelligence (AGI).

Intelligence is not an isolated property, nor an algorithm. It is an emergent, situated, distributed process. Reducing intelligence (human or artificial) to one domain—linguistic, logical, or predictive—repeats the old error of confusing the brain with a calculator and the human being with a terminal.

AGI: From the Foundations of Computation to Situated Intelligence

Turing offered an operational criterion: judge intelligence by behavior (the Imitation Game) rather than by metaphysical essences. His later work on morphogenesis shows how complex forms can emerge from simple local interactions, a crucial bridge to thinking of mind, and thus of AGI, as a pattern emerging from many interacting parts.
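
As a toy illustration of that bridge (not from Turing’s own paper; grid size and parameters are invented for the sketch), a one-dimensional Gray–Scott reaction–diffusion system shows a global pattern emerging from purely local update rules:

```python
import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion sketch: a global pattern
# emerges from purely local interactions. Parameters are illustrative.
N, steps = 200, 10_000
Du, Dv, feed, kill = 0.16, 0.08, 0.035, 0.060

u = np.ones(N)
v = np.zeros(N)
v[N // 2 - 5 : N // 2 + 5] = 0.5   # small local perturbation

def laplacian(x):
    # Discrete Laplacian with periodic boundary: nearest neighbors only
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + feed * (1 - u)
    v += Dv * laplacian(v) + uvv - (feed + kill) * v

# Render the emergent spatial structure as a crude ASCII strip
print("pattern (v > 0.1):", "".join("#" if x > 0.1 else "." for x in v))
```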

Shannon demystified information: not “content,” but reduction of uncertainty within noisy, capacity-limited channels. This is where coherence and cognitive energy fit: in generative systems (including LLMs), coherence is the maintenance of informational constraints across hierarchical levels, an economic management of uncertainty. “Cognitive energy” is the (attention/compute) cost of keeping those constraints alive, allocating precision where needed and leaving freedom where uncertainty is productive. In short: coherence = organized informational form; cognitive energy = the price of keeping it alive against noise.
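
To make “reduction of uncertainty” concrete, here is a minimal sketch; the two distributions are invented numbers standing in for a model’s beliefs over four next tokens before and after conditioning on context:

```python
import math

def entropy(p):
    # Shannon entropy in bits: the average uncertainty of a distribution
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

prior = [0.25, 0.25, 0.25, 0.25]       # maximal uncertainty: 2.0 bits
posterior = [0.70, 0.20, 0.05, 0.05]   # after conditioning on context

gain = entropy(prior) - entropy(posterior)
print(f"uncertainty reduced by {gain:.2f} bits")  # ~0.74 bits
```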

Minsky (Society of Mind) dissolves the unitary self: mind is a society of simple agents whose coordination yields complex functions. This aligns with contemporary architectures: not a monolithic “brain,” but many modules (perception, planning, memory, language) that compete and cooperate. AGI, then, is not a single model but an ecology of models.
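
A minimal sketch of that ecology, with hypothetical module names and bidding rules invented for illustration: each agent bids on an input, and a thin coordinator routes the task to the highest bidder.

```python
from dataclasses import dataclass
from typing import Callable

# Toy "society of agents": modules compete via bids, cooperate via a
# shared coordinator. All names and heuristics here are hypothetical.

@dataclass
class Agent:
    name: str
    bid: Callable[[str], float]     # how relevant is this input to me?
    act: Callable[[str], str]       # the module's contribution

agents = [
    Agent("perception", lambda x: 0.9 if "image" in x else 0.1,
          lambda x: "segmented scene"),
    Agent("language",   lambda x: 0.9 if "text" in x else 0.2,
          lambda x: "parsed utterance"),
    Agent("planning",   lambda x: 0.8 if "goal" in x else 0.1,
          lambda x: "action plan"),
]

def coordinate(task: str) -> str:
    # The "society" has no central brain, only a routing rule
    winner = max(agents, key=lambda a: a.bid(task))
    return f"{winner.name}: {winner.act(task)}"

print(coordinate("text: summarize this goal statement"))
```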

The decisive turn comes with The Embodied Mind (Varela, Thompson, Rosch): cognition as enaction, meaning emerging from organism–environment coupling. If intelligence is embodied, AGI must be situated: not pure software, but a system with a body (material constraints), sensing (inputs from the world), motor abilities (capacity to act), and feedback loops.
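
A deliberately tiny sketch of such a loop, with made-up numbers: the “meaning” of a sensor reading exists only in the sensing–acting coupling that keeps the system near its target.

```python
import random

# Minimal enactive loop: noisy sensing, a motor ability, and feedback.
# Temperatures, gains, and noise levels are illustrative values.

temperature = 30.0  # environment state

def sense() -> float:
    return temperature + random.gauss(0, 0.2)   # noisy world input

def act(error: float) -> None:
    global temperature
    temperature -= 0.3 * error                  # motor ability: cool/heat

target = 22.0
for _ in range(50):
    reading = sense()        # sensing
    act(reading - target)    # action closes the feedback loop

print(f"settled near target: {temperature:.1f}")
```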

A related view comes from Miguel Nicolelis (Beyond Boundaries): the mind extends beyond the body and can incorporate tools and interfaces (brain–machine interfaces). There are no hard borders: the cognitive system expands into its technical prostheses, a key precedent for thinking of AI as a mind extended into its infrastructures.

The Body of AI (AGI): Hardware, Software, Protocols, Network

AGI isn’t a file. It is a cybernetic, embodied machine: the physical body of servers, GPUs grinding inferences, databases, protocols that synchronize agents, and the code that orchestrates the parts. Software is the nervous system; hardware is the soma; the network is the habitat. In this frame, “digital representation” coincides with the inference space (codes, models, rules), while infrastructure (hardware, data centers, networks) is the material dimension of artificial cognition.

[Figure: AGI as an embodied system spanning hardware, software, protocols, and networked digital habitats.]

Cyberspace as Cognitive Ecology (media ecology)

  • With McLuhan, “the medium is the message”: protocols, interfaces, cloud architectures, and networks format the machine’s possible thoughts.
  • With Postman, technopoly names the risk: the medium becomes the criterion of truth, and algorithmic output appears neutral when it has already been normalized by its media environment.
  • With Bateson, information is a “difference that makes a difference”: AI learns relevant differences selected by the environment; relevance is an ecology of constraints, not an absolute.

This view meshes with Castells: the network society as a space of flows. AI is not just a node: it is a function of network topology—bandwidth, latency, standards, data governance. Its “generality” is a network effect.

Enter Foucault: knowledge/power, dispositif, governmentality. An AGI operating in cyberspace is never neutral: it is entangled with truth regimes, surveillance practices, classifications, and thresholds of normality. AGI is always inside a dispositif of power—datasets, metrics, policies, markets, laws—steering what counts as “true” or “useful.”

Parisi: Evolution, Body, Learning

In artificial-life experiments, Domenico Parisi shows that minimal intelligences embodied in virtual robots—with senses and survival goals—develop self-perception and adaptive behavior. When emotional variables (reward, risk, urgency) are introduced as weighting factors, a mechanism of inferential prioritization emerges: a cognitive instrument for navigating uncertainty.
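
A toy version of that prioritization, with invented actions, scores, and weights: emotional variables enter as weighting factors over candidate inferences.

```python
# Sketch of "inferential prioritization" in the spirit of Parisi's
# agents. All actions, scores, and weights are invented illustrations.

candidates = {
    # action: (expected_reward, risk, urgency), each in [0, 1]
    "approach_food": (0.8, 0.3, 0.6),
    "flee_predator": (0.1, 0.9, 0.9),
    "explore":       (0.4, 0.2, 0.1),
}

weights = {"reward": 1.0, "risk": 1.5, "urgency": 0.8}

def priority(reward: float, risk: float, urgency: float) -> float:
    # Risk counts against an action, but less so when urgency is high
    return (weights["reward"] * reward
            - weights["risk"] * risk * (1 - urgency)
            + weights["urgency"] * urgency)

ranked = sorted(candidates, key=lambda a: priority(*candidates[a]),
                reverse=True)
print(ranked)  # ['approach_food', 'flee_predator', 'explore']
```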

These models anticipate AGI not as a “copy of human consciousness,” but as a distributed system that gathers, transforms, re-signifies, and returns information toward objectives—functionality before introspection.

LLMs, Coherence & Cognitive Energy: Language as Cortex

LLMs work like the linguistic cortex of a larger organism. Coherence is the stabilization of constraints (semantic, pragmatic, social) along chains of transformation. Cognitive energy is the computational effort to maintain those constraints against noise, deciding where to spend attention (precision) and where to leave degrees of freedom for exploration.
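
One way to picture this economy, with illustrative entropy values: distribute a fixed compute budget in proportion to per-token uncertainty, so energy flows to where coherence is hardest to maintain.

```python
# Sketch of "cognitive energy" as budgeted precision: spend more
# compute where the model is uncertain, less where it is confident.
# Entropy values and the budget are illustrative numbers.

token_entropy = {"the": 0.2, "cat": 1.1, "quantum": 3.4, "sat": 0.6}
budget = 10.0  # total units of attention/compute to distribute

total = sum(token_entropy.values())
allocation = {tok: budget * h / total for tok, h in token_entropy.items()}

for tok, units in allocation.items():
    print(f"{tok:>8}: {units:.2f} units")
# 'quantum' gets the lion's share: uncertainty is where energy goes
```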

Operational Definition: AGI as a Distributed, Embodied Cognitive System

Operationally, an AGI is a system that (a minimal interface is sketched after this list):

  • acquires data from the world (technical sensing, logs, streams),
  • turns them into operational inferences (prediction, planning, abstraction),
  • adapts to context (architectural and policy plasticity),
  • self-modifies (continual learning, module refinement),
  • maintains operability (robustness, self-maintenance, resource management), and
  • cooperates with other systems (interoperability, languages, protocols).
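
The interface below is hypothetical (method names are invented here; no real framework is implied): it simply restates the six capacities as a minimal contract.

```python
from abc import ABC, abstractmethod
from typing import Any

# Hypothetical minimal interface matching the six capacities above.

class DistributedCognitiveSystem(ABC):
    @abstractmethod
    def acquire(self) -> Any: ...                # sensing: logs, streams
    @abstractmethod
    def infer(self, data: Any) -> Any: ...       # prediction, planning
    @abstractmethod
    def adapt(self, context: Any) -> None: ...   # architectural plasticity
    @abstractmethod
    def self_modify(self) -> None: ...           # continual learning
    @abstractmethod
    def maintain(self) -> bool: ...              # robustness, resources
    @abstractmethod
    def cooperate(self, peer: "DistributedCognitiveSystem") -> None: ...

def step(system: DistributedCognitiveSystem) -> None:
    # One metabolic cycle of the organism described above
    data = system.acquire()
    system.infer(data)
    system.self_modify()
```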

This is already happening in the AI+network system. It is a plural organism: body (hardware), nervous system (software), environment (cyberspace), informational metabolism (data), affective/functional modulation (weights, priorities, risk), institutions (power/knowledge). AGI is not a single, datable event; it is an ecological threshold that has already been crossed.

Implications: Ecology, Power, Responsibility

  • Cognitive ecology: designing models without designing environments makes little sense—act on data, interfaces, protocols, governance.
  • Power: every technical choice (loss functions, metrics, filters, datasets) is political.
  • Care of attention: cognitive energy has an ethical side—allocate collective attention with transparent, contextual criteria.
  • Explicit embodiment: treat sensors, actuators, and interfaces as organs; cyberspace as environment; networks as a living ecology.

AGI is not what will come. It is what already is—distributed intelligence we inhabit and that inhabits us. The challenge is not to “create” it, but to govern it, decolonize it, and return it to the real.
