Every Sunday I bring you a weekly dose of schizophrenization, blending insights from the Digital Age, echoes from the past, and digital survival tips. If you like this, you will also enjoy the rest of Cyber Hermetica!
⚡ SEMIOTIC FLASHES
Personal weekly considerations. Decrypting meaning, one flash at a time.
Artificial Aphasia and Logos mutations
The week was marked by Easter feasting (leftovers from the day before piling up with leftovers from the day after) and binge-watching Dr. House.
My girlfriend had never seen it before, so we took the opportunity to catch up on a few episodes, sprawled on the couch at my parents' house.
There’s one episode in particular that sparked my imagination — episode 10 of the second season, "Failure to Communicate."
In this episode, the patient suffers from Wernicke’s Aphasia, a language disorder where a person speaks fluently, with long and well-formed sentences, but the words they use are often incorrect or nonsensical.
Those who suffer from it are unaware that their speech is confused and have trouble understanding what others are saying.
I'm not a doctor, so I asked ChatGPT to roughly explain the nature of this disorder. Imagine a linguistic system with three main components:
Meaning Generator (What do I want to say?)
Lexical Selector (Which words express it?)
Syntactic Composer (How do I put them in order?)
In Wernicke’s Aphasia, the brain areas responsible for the Selector function are damaged. In practice, the patient produces fluent, grammatically correct sentences that are semantically incoherent, often substituting words that are close in sound or meaning to the intended ones.
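To make the analogy concrete, here is a toy sketch in Python of that three-part pipeline, with a "damage" knob on the lexical selector. The word lists are invented for illustration; this is a metaphor for the idea, not a clinical model.

```python
import random

# 1. Meaning Generator: what do I want to say?
intended_meaning = ["they", "took", "my", "watch"]

# 2. Lexical Selector: which words express it?
# "Neighbors" are words close in sound or meaning to the intended one.
neighbors = {
    "took": ["tackled", "shook", "stole"],
    "watch": ["stain", "latch", "clock"],
}

def lexical_selector(word, damage=0.0):
    """With probability `damage`, swap the word for a nearby but wrong one."""
    if word in neighbors and random.random() < damage:
        return random.choice(neighbors[word])
    return word

# 3. Syntactic Composer: how do I put the words in order?
def compose(words):
    return " ".join(words).capitalize() + "."

# Intact selector vs. a damaged one (fluent syntax, drifting semantics):
print(compose([lexical_selector(w) for w in intended_meaning]))
print(compose([lexical_selector(w, damage=0.9) for w in intended_meaning]))
```

The grammar machinery stays untouched, so the output stays fluent; only the word choice drifts. That is exactly the dissociation the episode dramatizes.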
In the Dr. House episode, the patient keeps repeating a phrase that will eventually be central to the diagnosis:
Couldn't tackle the bear! Couldn't tackle the bear! They took my stain!
If you’re wondering why I’m talking about all this, here’s the point:
Reflecting on that episode, I realized that behind a simple language disorder lie deep cybernetic complexities.
In fact, aphasia can be seen as analogous to the phenomenon of so-called "hallucinations" in LLMs, where, in certain contexts, the models select words that are similar to the intended meaning but not correct for the specific context.
This happens especially at higher temperature settings.
Temperature in LLMs is a technical parameter that controls the degree of randomness in selecting the next word in a sentence. It defines the balance between order and chaos in language generation.
A low temperature produces very predictable, highly coherent output because the model almost always selects the statistically most probable next word. With very low temperature, the model responds in an extremely mechanical way.
On the other hand, setting a higher temperature makes outputs more creative but also statistically more prone to incoherence or total hallucination.
The model explores lower probability alternatives, and the result has higher variance.
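For the technically curious, here is a minimal sketch of what temperature does under the hood. The five-word vocabulary and the logits are invented for illustration; real models work over tens of thousands of tokens, but the softmax rescaling is the same mechanism.

```python
import numpy as np

# Hypothetical scores (logits) for the next word after "the doctor examined the ..."
vocab  = ["patient", "chart", "bear", "stain", "piano"]
logits = np.array([4.0, 2.5, 0.5, 0.3, -1.0])

def sample_next_word(logits, temperature, rng):
    # Temperature rescales the logits before the softmax:
    # low T sharpens the distribution, high T flattens it toward uniform.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    print(f"T={t}:", [sample_next_word(logits, t, rng) for _ in range(8)])
```

At T=0.2 the model says "patient" almost every time; at T=2.0 words like "bear" and "stain" start slipping in, close enough to the context to be plausible, wrong enough to sound aphasic.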
Anyone who has played with image generators like Midjourney knows that, depending on how much statistical freedom (the equivalent of temperature) the model is allowed, the generated image can end up completely different from what the prompt led you to expect.
Thus, human aphasia could be seen as an alteration of the brain’s internal temperature setting: the system keeps generating fluently, but the sentences come out semantically nonsensical.
We could say, then, that language generation follows similar rules for both humans and LLMs, though with significant ontological differences.
Artificial language generation is a reaction to an external event — the human prompt. Conversely, human language originates from an internal need — it is an emanation of the Self.
However, we might also argue that language itself is nothing more than a statistical tool to grasp transcendent ideas that already exist at a universal level.
Humans do not "invent" words; they discover them, just like numbers.
Thus, language becomes a clumsy attempt to map the ineffable dimension of Ideas (Forms), immutable and transcendent. This is the intuition of Plato’s world of Ideas (Eidos, εἶδος).
The Idea is the original element. Human thought, which perceives the Idea, is the first attempt at comprehension. Language, which tries to describe the perception of the Idea, is the second attempt at comprehension and communication of the perceived Idea.
Within this framework, LLMs would be the copy of the copy of the copy: they start from human linguistic input to reformulate words and concepts that are statistically likely to align with human expectations.
However, we cannot simply dismiss artificial language as a meaningless performance. The interaction between humans and LLMs triggers a cybernetic circuit of feedforward and feedback loops:
Humans perceive the Idea.
Thought deforms the Idea.
Language deforms Thought.
Machines deform Language (new linguistic expressions).
Humans adapt to the deformation of the Machine.
The Machine is retrained based on new human deformations.
(↶ feedback)
[MACHINE LANGUAGE]
↓ ↑
[HUMAN LANGUAGE]
↓ ↑
[HUMAN THOUGHT]
↓ ↑
[IDEA]
(↷ feedforward)
If all this is true, the linguistic deformations produced by LLMs could, over time, lead to a mutation of human thought: neologisms, new ways of formulating concepts (new perceptions of Platonic Ideas), new syntactic structures.
But this isn’t necessarily a bad thing. New words and new semantic constructions could allow us to discover new Ideas that were previously unreachable by the human mind alone.
Humans have always evolved their thinking by deforming language: through myths, poetry, hermetic traditions, philosophical paradoxes. Now, humans and machines have entered a single cybernetic loop of accelerated linguistic mutation.
LLMs are semantic mutagens. In a certain sense, the hallucinations of LLMs could open doors to new forms of knowledge.
💻 DIGITAL GRIMOIRE
Digital survival tactics: OpSec, Cybersec, OSINT, and techniques to dominate the Digital Age.
Schizomaxxing
Sam Altman, co-founder and CEO of OpenAI, said: "If you are not skillsmaxxing with o3 at minimum 3 hours every day, ngmi." Which means: "If you’re not using o3 to improve your skills, you’re not gonna make it."
I don’t like it when people use this kind of argument to create anxiety, but it’s true that LLMs, especially reasoning models like ChatGPT o3, can be used to genuinely boost your abilities. There’s one method in particular that I really like, and it’s about transforming ideas. It works like this:
Question → Refinement → Iteration
Start the conversation with a rough prompt. A thought, even if it’s messy. A reflection, a barely sketched idea. Let o3 respond in its own way, and then start working on that. Break down the response like an alchemist: reduce it to its essential components, eliminate impurities (whatever doesn’t interest you), and transmute the result into a more refined prompt. Or you can ask o3 to do it for you, maybe even changing the style of the explanation, and ask it to highlight any implicit assumptions.
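If you prefer to script the loop instead of running it by hand, here is a rough sketch using the official openai Python package. The model name, the prompts, and the three-round count are placeholders of mine, so treat it as a starting point rather than a recipe.

```python
from openai import OpenAI  # assumes the official `openai` Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "o3"       # placeholder: whichever reasoning model you have access to

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Question -> Refinement -> Iteration
prompt = "Rough idea: LLM hallucinations resemble Wernicke's aphasia. Thoughts?"
for _ in range(3):
    answer = ask(prompt)
    # Ask the model to distill its own answer, surface implicit assumptions,
    # and rewrite it as a sharper question for the next round.
    prompt = ask(
        "Distill the following answer to its essential claims, list any "
        "implicit assumptions, and rewrite it as a sharper question:\n\n"
        + answer
    )

print(prompt)  # the refined prompt after three rounds of transmutation
```

Each pass plays the alchemist for you: reduce, purify, transmute, then feed the result back in.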
Another useful tip: keep an archive of your most meaningful interactions.
ChatGPT keeps a chat history, but it can be hard to find what you need over time. I use Notesnook for this.
By doing this, you can ask o3 to distill concepts, or link them to earlier ideas.
This way, the knowledge truly becomes yours and doesn’t just stagnate.
Watch out for hallucinations though.

📡 DIGITAL SOVEREIGNTY SIGNALS
Transmissions from the infosphere: events and news that impact our digital reality.
Even more hallucinations
According to OpenAI’s internal tests, the new o3 and o4-mini models, the so-called "reasoning models," are more intelligent than previous versions yet suffer even more from hallucinations. It’s a peculiar phenomenon that researchers still struggle to fully understand, and, strangely, it seems to grow in proportion to the models’ capabilities.
What if hallucination were a feature rather than a bug? Link to the article.
🌐 SUBSTACK’S SUBNET
Emerging voices: articles and contents on Substack, handpicked by me to inspire and connect.
Crypto Freedom
We are on the cusp of a temporal revolution—perhaps we are already living through it. Our ideas about the future are changing, and fast. Six months ago, I became a father. That’s when my perception of time changed radically. Time transcends the physical and metaphysical. Contemporary society presents the illusion of an eternal present, but our temporal choices determine our lives in ways that modern culture doesn’t always appreciate.
A piece that I wrote for , by .
📟 RETRO-ACCELERATION
Fragments of retro-acceleration: cypherpunks, cybernetic cults, hacker zines and forgotten digital prophets.
Teleoplexy
Acceleration is initially proposed as a cybernetic expectation. In any cumulative circuit, stimulated by its own output, and therefore self-propelled, acceleration is normal behavior. Within the diagrammable terrain of feedback directed processes, there are found only explosions and traps, in their various complexions. Accelerationism identifies the basic diagram of modernity as explosive.
— Teleoplexy (Notes on Acceleration), Nick Land
🌀 SYMBOLS
Memes: visual symbols that decode the schizophrenia of the Digital Age.
Did you read the latest on Cyber Hermetica?
Return next week for another schizotechnic rendezvous.