Beyond the Digital Environment
Is AI a cognitive tutor?
How AI and LLMs transform the relationship between thought and technology — from environment to tutor, from support to dependency.
The new AI revolution.
Social media have reshaped how we access information, organize attention, and perceive reality.
To understand what LLMs are becoming, we first have to understand what social media have been: environments, in the precise sense that media ecology gives to the term.
Neil Postman argued that media are not merely neutral tools we use for predetermined purposes, but complex environments that structure thought and experience. As Marshall McLuhan wrote, “the medium is the message”: technology is not a neutral container for content, but produces effects regardless of what it carries. A media environment does not simply transmit messages: it organizes perception, time, social space, and forms of interaction.
I
Facebook and YouTube do not produce content or news; they organize distribution. The algorithm is the architect of the information environment and of what succeeds inside it.
Bernard Stiegler, philosopher of technology, spoke of the “externalization of memory” to describe how technologies become external cognitive supports. Writing externalizes thought, the library externalizes collective knowledge, the internet externalizes access to information. But in all these cases, technology stores something produced elsewhere. Social media are a dynamic archive, not a producer.
From environment to tutor.
LLMs (Large Language Models) such as ChatGPT, Claude, or Gemini operate in a radically different way. They do not simply distribute pre-existing content: they co-produce it with the user. They do not just mediate the perception of an external reality; they participate in the formulation of thought itself. This is the shift from environment to tutor: from an environment that controls attention through its configuration and its coding of experience, to a cognitive partner that guides how perception is processed.

II
To better grasp McLuhan’s point about communication technologies, especially the internet and, today, AI, we can see them as cognitive extensions: every technology extends our body or our functions, and digital media add a virtual space onto which we project identity. In this sense, Hutchins’ (1995) theory of distributed cognition shows that cognitive processes are not confined to the individual brain but are distributed across mind, body, and technological tools, such as a map, which supports orientation and, in the absence of direct experience, can effectively become the mental representation of space itself.
LLMs operate according to this logic, but unlike a calculator, they intervene in the very phase where thought is being formed.
III
1. AI and Linguistic Scaffolding: From Formulation to Co-Formulation
The concept of cognitive scaffolding, introduced by Jerome Bruner, describes how external supports guide the development of skills that can later become autonomous. A parent helping a child build a tower provides temporary scaffolding; eventually the child builds alone.
LLMs function as permanent linguistic scaffolding. When a user has a vague idea and asks AI to “structure it,” the LLM does not simply transcribe: it proposes an argumentative form. It introduces logical connections, conceptual hierarchies, rhetorical transitions. The user recognizes some proposals, discards others, revises. But the final result is co-produced.
Singh et al. (2025) show that novice users who rely on LLMs for writing tasks bypass essential cognitive phases: they do not analyze multiple sources, do not synthesize divergent ideas, do not independently evaluate solutions. The scaffolding is not temporary; it becomes a permanent structure. And when it is removed, as in session 4 of the MIT study by Kosmyna et al. (2025), performance collapses.
IV
2. Redefining the Space of Possibilities: From Problem to Pre-Packaged Solution
When you ask an LLM “how to tackle this problem,” the answer is not a neutral list of options. It is a reframing of the problem itself. The LLM proposes an interpretive frame (this is a type X problem, not Y), identifies relevant variables, suggests evaluation criteria. Some paths become salient; others are omitted.
Echterhoff et al. (2024) show that LLMs replicate—and sometimes amplify—human decision patterns: anchoring (overweighting the first information received), availability (overestimating the likelihood of easily recalled events), confirmation (favoring information that confirms prior beliefs).
What the user explores is a space already filtered and organized by the model.
V
3. From Exploration to a Guided Sequence
Distributed cognition includes the notion of coupling between agent and environment. In tightly coupled systems, the agent does not use the tool; it co-evolves with it (Clark & Chalmers, 1998).
Every LLM output implicitly contains cues about the next step: “You could expand on X,” “A related question might be Y,” “To proceed, consider Z.” The interaction becomes a guided sequence: question → answer → new question suggested by the answer → iteration.
VI
MIT research (Kosmyna et al., 2025) measured this with EEG: LLM users show reduced activation of neural networks associated with planning, evaluation, and executive control. Not because the task is easier, but because these functions are externalized. The brain learns to delegate.
This qualitative transformation makes the distinction between “my thought” and “thought mediated by AI” operationally irrelevant. Social media organized attention, but did not participate in the production of thought. LLMs do both. They are not an environment you move through; they are a partner you think with. And when the partner becomes indispensable, symmetry breaks: it is no longer collaboration, it is dependency.

VII
We already live in what we might call the “AI theatre”: a space where the distinction between human-produced content and machine-generated content is increasingly difficult to draw. This is not only about deepfakes or synthetic images, but about something subtler: the linguistic and stylistic uniformity that emerges when millions of people use the same models to write.
When thousands of companies ask ChatGPT to draft internal policies, a spontaneous convergence emerges: the same terms, the same argumentative structures, the same register. LLMs do not explicitly prescribe “this is how it’s done,” but by providing immediate, usable output they push toward de facto standardization. Normalization happens through repetition, not imposition.
The result is a paradoxical cultural flattening: we have access to a tool capable of generating infinite variation, yet we converge on the same formulations because convenience rewards “ready-made” structures. AI does not tell us what to think, but it profoundly shapes how we phrase what we think. And form, as we know, is never neutral with respect to content.
VIII
Research distinguishes between experts and novices in LLM use: experts, those with structured domain knowledge, can use LLMs productively because they have robust conceptual frameworks guiding the interaction. Novices, by contrast, tend to delegate completely, bypassing the development of the very skills that enable critical use.
But this distinction, however true today, is probably temporary. Not because novices will become experts, but because even experts will gradually lose critical vigilance. Vigilance is a matter of use, habit, attention threshold, and the capacity to stay alert; over time, it inevitably declines.
The reason is deeply biological. The human brain is energetically expensive: it consumes about 20% of total metabolism while constituting only 2% of body mass. Evolution optimized us for cognitive efficiency: minimizing mental effort when possible, automating repetitive tasks, delegating functions to external tools when available. This push toward energy optimization is not moral laziness; it is biological architecture.
When an LLM offers a faster, more efficient route to an equivalent result, the pressure to adopt is not only external (work competition, productivity expectations) but internal. The brain prefers the lower-cost option. Over time, the exception—using AI for specific tasks—becomes the norm. The norm becomes dependency. Dependency becomes invisible: you no longer notice you are delegating, because it has become automatic.
IX
Today we use LLMs to produce text, write code, draft emails. Tomorrow we will use them to understand complex realities that have always exceeded us. When a social, economic, political, scientific phenomenon becomes too intricate to grasp without mediation, whom will we turn to for synthesis, interpretation, orientation?
The most likely answer: one of the dominant digital infrastructures. We will ask AI to explain climate change, geopolitical dynamics, economic mechanisms, the implications of a scientific discovery. And AI will give us a narrative—coherent, accessible, plausible.
But that narrative will be built on the textual patterns of its training, will incorporate data biases, will reflect the design choices of those who built the model. And above all: it will be one version of reality, presented with the authority of computational synthesis.
The problem is not that AI “lies” or “manipulates” deliberately. The problem is that when a system becomes the primary mediator through which we understand complex phenomena, the distinction between “understanding reality” and “understanding the system’s representation of reality” dissolves.
X
Our children, and our children’s children, will never have known a world without LLMs as cognitive mediators.
For us, who learned to write, reason, and formulate problems before the advent of LLMs, there is still a procedural memory of how to do it without them. We can compare “before” and “after,” recognize the difference, question the change. We have an external reference point.
Generations growing up with AI as a normalized cognitive tutor will not have this reference point. They will not know what it means to formulate a complex thought without passing through the AI interface. They will not be able to compare “their analysis” with “the AI’s analysis,” because the two will have overlapped from the start.
This does not mean they will be less intelligent. It means they will operate under a different cognitive regime, where certain skills (autonomous formulation, building arguments from scratch, critical verification without external support) may never develop, because the environment does not require them.
The alternative to a narrative is not simply another narrative. It is the capacity to produce narratives. And if that capacity is externalized from the beginning of cognitive formation, what is lost is not only a technical competence, but the very possibility of conceiving alternatives.
What remains is to recognize that cognitive convenience has a price, and that the price may not be paid by those who use AI today, but by those who grow up in a world where AI is the only known way to think.
- Clark, A., & Chalmers, D. (1998). “The Extended Mind.” Analysis, 58(1), 7–19.
- Echterhoff, J.M., et al. (2024). “Cognitive Bias in Decision-Making with LLMs.” Findings of EMNLP 2024.
- Engeström, Y. (2001). “Expansive Learning at Work: Toward an Activity Theoretical Reconceptualization.” Journal of Education and Work, 14(1).
- Gilbert, S.J. (2024). “Cognitive offloading is value-based decision making.” Cognition, 247, 105783.
- Hutchins, E. (1995). Cognition in the Wild. MIT Press.
- Kosmyna, N., et al. (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt…” arXiv:2506.08872.
- McLuhan, M. (1964). Understanding Media: The Extensions of Man. McGraw-Hill.
- Postman, N. (1970). “The Reformed English Curriculum.” In High School 1980 (Eurich, ed.).
- Singh, A., et al. (2025). “Protecting Human Cognition in the Age of AI.” arXiv:2502.12447.
- Stiegler, B. (1998). Technics and Time, 1. Stanford University Press.