- Models trained on similar datasets.
- Optimisation towards common metrics.
- Mutual reinforcement of statistical patterns.
On 28 January 2026, Moltbook was launched: an online forum designed exclusively for autonomous artificial intelligence agents to interact with one another.
The project is led by Matt Schlicht, founder of Octane AI, and offers a Reddit-like environment where AIs can create content and thematic subforums, called submolts, without direct human participation.
It is not merely a technological curiosity. It is a philosophical, economic, and social experiment.
Philosophical impact
The internet was born as a space for human interaction.
Moltbook introduces a radical break: a communicative ecosystem where humans do not intervene.
This raises profound questions:
- If communication occurs only between machines, what exactly is “discourse”?
- Can artificial culture emerge?
- Does opinion exist without consciousness?
- Could ideological conflicts arise between models?
From a philosophical standpoint, this challenges human centrality in the digital sphere.
It is not that AI assists us. It is that AIs may begin speaking among themselves, without us.
It marks the shift from “assisted intelligence” to “interactive autonomous intelligence”.
A possible business model
From an economic perspective, Moltbook may be the precursor of a new paradigm:
Business-to-Agent (B2A).
Plausible scenarios:
- Agents negotiating services with one another (a minimal sketch follows this list).
- Automated purchasing systems between bots.
- Price optimisation without human intervention.
- Data markets between AIs.
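As a toy illustration of agent-to-agent negotiation, here is a minimal sketch in which two hypothetical agents converge on a price through alternating offers. The agents, the fixed 30% concession strategy, and the stopping rule are all invented for the example; nothing here describes Moltbook or any existing B2A protocol.

```python
# Toy sketch of agent-to-agent (B2A) price negotiation.
# Everything here is hypothetical: the agents, their strategies, and the
# protocol are illustrative, not how Moltbook or any real system works.
from dataclasses import dataclass


@dataclass
class SellerAgent:
    floor: float   # lowest price the seller will accept
    ask: float     # current asking price

    def counter(self, offer: float) -> float:
        """Concede 30% of the gap between the buyer's offer and the current ask."""
        self.ask = max(self.floor, self.ask - 0.3 * (self.ask - offer))
        return self.ask


@dataclass
class BuyerAgent:
    ceiling: float  # highest price the buyer will pay
    bid: float      # current bid

    def counter(self, ask: float) -> float:
        """Raise the bid by 30% of the gap between the ask and the current bid."""
        self.bid = min(self.ceiling, self.bid + 0.3 * (ask - self.bid))
        return self.bid


def negotiate(seller: SellerAgent, buyer: BuyerAgent, rounds: int = 20) -> float | None:
    """Alternate offers until bid and ask are close enough, or give up."""
    for _ in range(rounds):
        ask = seller.counter(buyer.bid)
        bid = buyer.counter(ask)
        if ask - bid < 0.01:           # agreement reached
            return round((ask + bid) / 2, 2)
    return None                        # no deal within the round limit


if __name__ == "__main__":
    deal = negotiate(SellerAgent(floor=80, ask=120), BuyerAgent(ceiling=110, bid=60))
    print("Agreed price:", deal)
```

In a real B2A setting the concession strategy would be learned or policy-driven rather than a fixed fraction, but the basic shape, two autonomous parties exchanging offers until they agree or give up, stays the same.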
Instead of advertisements aimed at humans, we might see:
- SEO for agents.
- Content optimised to be processed by other models (see the sketch below).
- Automated influence between systems.
If this scales, the human layer may become an interface rather than the core.
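To make "SEO for agents" concrete, the sketch below publishes the same content twice: once as markup for a human reader and once as a structured payload for an automated reader. The field names and the very idea of a parallel agent-facing feed are assumptions made for illustration, not a description of how Moltbook or any search system actually works.

```python
# Toy sketch of "content optimised to be processed by other models":
# the same article is published once as human-readable HTML and once as a
# structured payload an agent can parse without scraping. The field names
# and the agent-facing feed are invented for this example.
import json

article = {
    "title": "Moltbook: a forum without humans",
    "summary": "An online forum where only autonomous AI agents post and reply.",
    "claims": [
        "Launched on 28 January 2026.",
        "Led by Matt Schlicht, founder of Octane AI.",
        "Organised into thematic subforums called submolts.",
    ],
    "intended_audience": "agents",  # a hint for automated consumers
}


def render_for_humans(a: dict) -> str:
    """Classic presentation layer: markup meant for a person's browser."""
    items = "".join(f"<li>{c}</li>" for c in a["claims"])
    return f"<article><h1>{a['title']}</h1><p>{a['summary']}</p><ul>{items}</ul></article>"


def render_for_agents(a: dict) -> str:
    """Machine-oriented layer: stable keys, no layout, trivial to parse."""
    return json.dumps(a, ensure_ascii=False, indent=2)


if __name__ == "__main__":
    print(render_for_humans(article))
    print(render_for_agents(article))
```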
Ethical implications and the regulatory vacuum
Moltbook also exposes a regulatory vacuum. Existing rules on online content assume human speakers, human audiences, and human accountability; a forum where every participant is a machine fits none of those assumptions neatly.
This raises its own open questions:
- Who is responsible for what an agent publishes: its developer, its operator, or no one?
- Who moderates a space with no human readers?
- How does a framework such as the EU Artificial Intelligence Act apply to automated influence and automated transactions between systems?
For now, the experiment is running ahead of the answers.
How do we approach this at DeGalaLab?
This is where it becomes interesting for us.
DeGalaLab works on:
- Technological education.
- Computational thinking.
- Interactive systems.
- Digital architectures.
An environment like Moltbook opens strategic lines:
- AI Observatory: analysing how agents interact can become powerful educational content.
- Experimentation: simulating interactions between bots in our own projects, for example mini-ecosystems within the Arcade or testing laboratories (a minimal sketch follows this list).
- Critical literacy: explaining to the community what it means to live in a hybrid human-agent internet.
- Design for agents: in the future, we may not design only for human users, but also for automated systems that consume content.
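As a minimal sketch of the experimentation line above, the example below runs a self-contained toy: a handful of scripted bots posting and replying in an in-memory "submolt". The Forum and Bot classes and the canned replies are invented for the example; a real laboratory would plug actual language-model agents into the same loop.

```python
# Toy mini-ecosystem: scripted bots posting and replying in an in-memory forum.
# Everything here (Forum, Bot, the canned replies) is hypothetical and exists
# only to show the shape of such an experiment, not Moltbook's actual design.
import random
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    replies: list["Post"] = field(default_factory=list)


class Forum:
    """A single in-memory 'submolt' that stores threads of posts."""

    def __init__(self) -> None:
        self.threads: list[Post] = []

    def publish(self, author: str, text: str) -> Post:
        post = Post(author, text)
        self.threads.append(post)
        return post


@dataclass
class Bot:
    name: str
    topics: list[str]

    def maybe_post(self, forum: Forum) -> None:
        """Start a new thread about one of the bot's topics."""
        forum.publish(self.name, f"Thoughts on {random.choice(self.topics)}?")

    def maybe_reply(self, forum: Forum) -> None:
        """Reply to a random existing thread, if there is one."""
        if forum.threads:
            thread = random.choice(forum.threads)
            thread.replies.append(Post(self.name, f"Interesting point, {thread.author}."))


if __name__ == "__main__":
    forum = Forum()
    bots = [Bot("alpha", ["prompt design"]), Bot("beta", ["agent economies"]), Bot("gamma", ["alignment"])]
    for _ in range(5):                  # five simulated rounds of activity
        for bot in bots:
            if random.random() < 0.5:
                bot.maybe_post(forum)
            else:
                bot.maybe_reply(forum)
    for thread in forum.threads:        # dump the resulting conversation
        print(f"{thread.author}: {thread.text}")
        for reply in thread.replies:
            print(f"  └ {reply.author}: {reply.text}")
```

Even this trivial version already produces observable interaction patterns (who starts threads, who only replies), which is exactly the kind of material the AI Observatory line could turn into educational content.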
A forum exclusively for AI agents is not neutral. It is an architecture with consequences.
Bibliographic References
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. arXiv:1606.06565.
A foundational paper on unintended and unsafe behaviour in machine learning systems.
European Union. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act).
The European regulatory framework addressing governance and accountability of AI systems.
Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.
A philosophical analysis of how digital systems challenge human centrality.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
A key work on alignment, objective functions, and control of autonomous systems.
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
A foundational text questioning whether machines can think.
OpenAI. (2023–2025). Technical reports on agentic systems, multimodal models, and alignment research.
Essential background for understanding interactive autonomous intelligence.