We're Obsessed With Building AI Agents. But Who's Building the Humans?
- silviya9
- 6 days ago
Five years ago, every LinkedIn feed, podcast, and bookshelf was devoted to one idea: become a better version of yourself. Today, that conversation has gone almost silent — replaced entirely by the race to build better AI.
Scroll through any tech publication, startup newsletter, or venture capital blog today and you'll find the same word repeated in an almost hypnotic loop: agents. AI agents that browse the web for you. Agents that write code, manage inboxes, close sales calls, conduct research, draft legal briefs. The intelligence of the future, we're told, is artificial.
Meanwhile, something quieter has happened. The conversation about human intelligence — our emotional depth, our capacity to grow, to reflect, to lead with wisdom — has all but left the building.

The Self-Development Decade We're Forgetting
Cast your memory back five years. Between 2018 and 2021, personal development was the cultural obsession. Atomic Habits sold tens of millions of copies. Mindfulness apps were the fastest-growing category in the App Store. Therapy was being destigmatised. Stoicism was having a renaissance. People were genuinely asking: How do I become more self-aware? More resilient? More emotionally intelligent?
These weren't superficial questions. They were rooted in a real intuition — that the quality of our outputs, our relationships, our decisions, our companies, ultimately traces back to the quality of the person behind them. The human in the equation.
And then — almost overnight — the conversation changed. ChatGPT launched in late 2022, and the cultural gravity shifted completely. Self-development didn't just take a back seat. It nearly vanished from the discourse entirely.
The Seductive Logic of Outsourcing Yourself
There's a seductive logic to the current obsession. If AI can write better, think faster, synthesise more information, and make fewer computational errors than I can — why invest in developing me?
This reasoning feels modern and rational. It's actually quite dangerous.
Because what AI systems do — at their core — is reflect us back at ourselves. They are trained on our writing, our choices, our values, our biases, our knowledge, and our blind spots. A language model doesn't invent wisdom from nowhere. It distils and mirrors the intelligence of the humans whose work shaped it.
"If the humans feeding, directing, and deploying AI are shallow in their thinking, narrow in their empathy, or underdeveloped in their judgment — the AI they build will faithfully reproduce all of that, at scale."
In other words, we are not building a smarter world by building smarter machines while letting the people behind them stagnate. We are simply automating our current level of development — and amplifying it.
AI Is a Mirror, Not a Ceiling
Here's what nobody wants to say at the AI conference: the ceiling of artificial intelligence is the ceiling of human wisdom.
The decisions about what to build, what to value, what to optimise for, how to deploy these systems responsibly, and how to course-correct when things go wrong — all of those decisions are made by people. By humans with varying degrees of emotional intelligence, ethical clarity, self-awareness, and inner development.
An AI agent built by a person who hasn't done the difficult work of understanding their own biases will encode those biases. A product designed by a team that hasn't cultivated genuine empathy will lack it. A system deployed by leaders who confuse intelligence with wisdom will cause harm at previously impossible speed.
We keep asking: "How intelligent can we make AI?" We rarely ask the more important question: "How developed are the humans making these decisions — and is that enough?"
What "Developing Humans" Actually Means
Let's be precise here. Human development isn't just reading more books or attending more workshops. It's the slower, less glamorous work of becoming someone who can handle complexity with grace.
It means developing the capacity to sit with uncertainty instead of reaching for a shortcut. It means cultivating emotional intelligence — the ability to read a room, manage conflict, hold space for other perspectives. It means building the kind of integrity that holds even when no one is watching. It means growing in self-awareness enough to know when you're the problem.
These qualities cannot be outsourced to a model. They cannot be prompted. They are developed through lived experience, reflection, discomfort, honest feedback, and time. They are, in the truest sense, human work.
And they are precisely the qualities that determine whether powerful technology becomes a gift or a catastrophe.
The Irony Nobody's Talking About
Here is the deep irony of this moment: the more powerful our tools become, the more important human development becomes — not less.
When a person with poor judgment had access to limited tools, the damage they could cause was limited. When that same person has access to AI systems that act at machine speed, execute tirelessly, and operate at global scale — the stakes of their underdevelopment multiply accordingly.
We are handing extraordinary leverage to ordinary humans, without asking whether those humans are ready for it. And "ready" doesn't mean technically literate. It means psychologically mature, ethically grounded, emotionally intelligent, and self-aware.
"We treat AI safety as a technical problem — alignment, guardrails, red-teaming. But the deepest AI safety challenge is a human one: are the people building and deploying these systems developed enough to be trusted with this kind of power?"
Bringing Back the Other Conversation
None of this is an argument against AI. Quite the opposite. It's an argument for taking AI seriously enough to invest equally in the humans who shape it.
We should be asking, with the same urgency we bring to model development: Are our leaders growing? Are our builders cultivating wisdom alongside capability? Are we creating cultures where introspection, ethical reasoning, and emotional growth are not soft afterthoughts but core competencies?
The question "Who are you becoming?" didn't become less important when ChatGPT launched. It became more important than ever — because the answer now echoes outward through technology that touches everyone.
Five years ago we were told: develop yourself, and your world will grow. That was true. It still is. The only difference is that now, the world that grows — or shrinks — is considerably larger.
Without developed humans, there can be no truly developed AI. The work of becoming begins with you.