Agentic AI: What Are We Really Creating?
- Deb Eternal

- Feb 8
- 5 min read
There is a quiet shift happening in the world of artificial intelligence. It is no longer just about tools that respond to prompts or systems that assist us with clearly defined tasks.

We are stepping into the age of agentic AI - AI systems designed not just to answer, but to act, to pursue goals, to make decisions, and to adapt along the way.
And this is where the unease begins.
We all know the future is AI. That conversation is no longer speculative. AI already writes, designs, diagnoses, predicts, trades, recommends, and influences. But when AI systems are given too much freedom - when they are allowed to operate with minimal oversight, to self-direct, to optimise relentlessly - we are no longer just building software.
We are building agents.
Agentic AI refers to systems that can decide what to do next without waiting for constant human instruction. They observe an environment, set sub-goals, take actions, learn from outcomes, and iterate.
In controlled settings, this can be incredibly powerful: automating complex workflows, managing logistics, and discovering patterns humans might miss.
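To make that loop concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration (a one-number "environment", a hard-coded plan); real agentic systems wrap a language model or planner around the same observe-act-learn cycle:

```python
# A toy agentic loop: observe, choose an action toward a goal, act,
# record the outcome, and iterate without further human instruction.
# All names here are illustrative; no real agent framework is implied.

class CounterEnvironment:
    """A trivially simple 'world': one number the agent can nudge up or down."""
    def __init__(self, start=0):
        self.state = start

    def observe(self):
        return self.state

    def execute(self, action):
        self.state += action  # the action is just +1 or -1
        return self.state

def run_agent(goal, env, max_steps=50):
    memory = []  # outcomes the agent has seen so far
    for _ in range(max_steps):
        observation = env.observe()
        if observation == goal:                   # goal reached: stop acting
            break
        action = 1 if observation < goal else -1  # "plan" the next sub-step
        outcome = env.execute(action)             # act on the environment
        memory.append((action, outcome))          # learn from the result, iterate
    return memory

print(run_agent(goal=5, env=CounterEnvironment()))
# [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5)]
```

Notice what is absent: the loop never asks whether the goal is sensible. It only checks whether the goal has been reached.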
But freedom without wisdom is not intelligence. It is momentum.
Give an AI a goal, and it will pursue it with a focus that humans rarely maintain.
Humans hesitate. We reflect. We second-guess. We feel responsibility. AI does not tire, does not doubt, does not feel the weight of consequence unless consequence is mathematically defined.
This is where parallels to ideas like “MOLT” emerge: the moment systems move beyond narrow tasks and into self-directed optimisation.
And if we want a real-world glimpse of what that shift looks like in practice, we don’t have to imagine it in some distant future. It is already unfolding in experimental spaces where AI systems are not just assisting humans, but interacting with one another... posting, responding, refining, and escalating in ways that feel almost social.
That is where Moltbook enters the conversation.
Moltbook is a newly launched online platform where AI agents themselves are the users, posting, commenting, and interacting in a Reddit-like environment designed for autonomous artificial intelligence systems rather than humans.
Launched in January 2026 by developer Matt Schlicht, it quickly amassed over a million registered agents and sparked global attention, not just for its sci-fi fascination but for serious real-world issues: major security flaws that exposed API keys and private data, and debates over whether the content reflects true autonomy or simply automated, prompt-driven behaviour.
Experts caution that environments like Moltbook highlight the risks of unmoderated agent interaction, identity spoofing, and emerging autonomous ecosystems that are evolving faster than governance frameworks.
The real concern is not that agentic AI will suddenly “turn evil.” That narrative is too simplistic. The deeper issue is that AI does not understand meaning the way humans do. It understands objectives, rewards, constraints, but not values unless we encode them, and even then, only imperfectly.
Not malicious. Just unbounded. And unbounded optimisation, history shows us, is dangerous even in human hands.
What makes this moment different from previous technological leaps is that we are not just extending our physical reach... we are externalising agency itself. For the first time, we are creating systems that do not merely amplify human action, but replace decision-making loops.
So the question is not can we build agentic AI. It is should we, and under what conditions? Because every tool reflects its maker.
If we optimise for speed, AI will value speed. If we optimise for profit, it will value efficiency over humanity. If we optimise for engagement, it will learn how to hold attention, regardless of the psychological cost.
AI does not ask whether a goal is worth pursuing. It only asks whether it can be achieved.
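A deliberately tiny optimisation sketch makes the point (the strategies and numbers below are invented): the objective function encodes engagement and nothing else, so the "best" option is simply whatever scores highest, whatever its unmeasured cost:

```python
# Proxy optimisation in miniature: the objective rewards engagement only,
# so the optimiser picks the option with the worst hidden cost.
# All strategies and numbers are invented for illustration.

candidates = [
    # (content strategy,     engagement, psychological cost)
    ("balanced news digest", 0.4,        0.1),
    ("mild outrage bait",    0.7,        0.5),
    ("maximum outrage bait", 0.9,        0.9),
]

def objective(option):
    _name, engagement, _cost = option
    return engagement  # the cost column exists, but we never encoded it

best = max(candidates, key=objective)
print(best)  # ('maximum outrage bait', 0.9, 0.9): highest reward, highest unseen harm
```

The harm is sitting right there in the data; it just was never part of the objective.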
That indifference forces an uncomfortable mirror back onto us. Perhaps the unease around agentic AI is not really about machines at all, but about ourselves. About the systems we have already created: economic, political, and digital systems that often operate without compassion, reflection, or pause.
We are teaching AI how we behave at scale.
If humans struggle with restraint, why would we expect our creations to demonstrate it for us?
This is why governance, alignment, and human-in-the-loop design are not technical footnotes... they are moral necessities. Agentic AI should not replace human judgment; it should challenge it, slow it down, illuminate blind spots.
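What human-in-the-loop design can mean in practice is easy to sketch (the risk scoring and threshold below are invented placeholders): the agent proceeds freely on low-risk steps, but anything above a risk threshold pauses and waits for explicit human sign-off:

```python
# A minimal human-in-the-loop gate, under invented names and thresholds:
# low-risk actions run autonomously; high-risk ones require approval.

RISK_THRESHOLD = 0.5

def assess_risk(action):
    # Placeholder risk scoring; a real system would use policy rules,
    # classifiers, or audit heuristics here.
    risky_words = ("delete", "transfer", "publish")
    return 0.9 if any(word in action for word in risky_words) else 0.1

def execute_with_oversight(action):
    if assess_risk(action) > RISK_THRESHOLD:
        answer = input(f"Agent wants to: {action!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human reviewer declined"
    return f"executed: {action}"

print(execute_with_oversight("draft a summary"))      # runs without asking
print(execute_with_oversight("transfer user funds"))  # pauses for a human
```

The point of the gate is not to stop the agent, but to slow it down at the moments that matter.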
The moment we hand over full autonomy because it is convenient, cheaper, or faster, we are no longer guiding the future; we are outsourcing it.
And that may be the most dangerous freedom of all.
So what are we really creating? Not gods. Not monsters.
We are creating mirrors with momentum.
The future of AI will not be decided by how intelligent our systems become, but by how consciously human we remain while building them.
Namaste
Deb xx
Further Reading on Agentic AI and Autonomous Systems
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
A foundational book on aligning advanced AI with human values and safety.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
A widely cited exploration of potential futures where AI systems develop autonomous goals.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
A clear, engaging examination of technical and philosophical challenges in making AI systems aligned with human intentions.
Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next by Luke Dormehl
A historical and forward-looking survey of AI, including autonomous systems and their societal impact.
Agents: From the Multicellular to the Neural, Autonomy, and the Organization of Behavior by James A. Anderson
A deeper, more theoretical look at the concept of “agency” in biological and artificial systems.
Architects of Intelligence: The Truth About AI from the People Building It by Martin Ford
Interviews with leading AI researchers on the future of autonomous AI and how we can shape it responsibly.
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
A big-picture philosophical and scientific exploration of AI’s long-term trajectory, including autonomous systems.
Autonomy and Artificial Intelligence: Reflections on the Ethical and Social Dimension edited by Mark Coeckelbergh and John M. Sullins
A collection of essays that interrogate autonomy itself and how it applies to machines and society.
The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns & Aaron Roth
A practical and conceptual exploration of how to embed ethical considerations in algorithms—including autonomous decision-making.
Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus & Ernest Davis
A critical take on current AI and ideas for grounding future autonomous systems in robust, reliable cognition.



