Humans Are A Lot Harder Than Tech
Why the AI Revolution Will Be Won or Lost on Culture, Not Code
After hundreds of conversations with founders about their struggles—fundraising challenges, cofounder disputes, hiring disasters, board dynamics—I’ve noticed something consistent: dig deep enough into any business problem, and you’ll find a people problem at its core.
Can’t raise funding? That’s a people problem—you need to convince investors to believe in your vision. Can’t build the right product? That’s a people problem—you need to understand what customers actually want and hire engineers who can execute. Can’t achieve product-market fit? That’s a people problem—you need to identify the right customers and persuade them your solution is worth their time and money.
The pattern is unmistakable. Every great company in history succeeded not primarily because of superior technology, but because they were exceptionally good at managing the human elements: culture, communication, alignment, trust, and change management. The companies that scale are the ones that figure out how to get people—employees, customers, investors, partners—to enthusiastically move in the same direction.
Yet as I watch the current AI revolution unfold, I’m struck by how completely we’ve forgotten this fundamental truth.
The AI Discourse Is Missing the Point
Open any tech publication, attend any AI conference, or scroll through AI Twitter, and you’ll find obsessive discussions about technical specifications. How many parameters does the model have? What’s the latency? How many GPUs are required for training? What’s the transformer architecture? Are we using RAG or fine-tuning? What’s the token context window?
Don’t get me wrong—these technical considerations matter. But they’ve consumed virtually all the oxygen in the conversation about AI adoption. Meanwhile, the factor that will actually determine whether AI transforms industries or fizzles out is being almost entirely ignored: people.
Here’s what I’m seeing in the field: AI products aren’t failing because the technology doesn’t work. The models are remarkably capable. The infrastructure is increasingly robust. The technical problems are largely solved or solvable.
The AI products that are failing are failing because humans won’t use them.
The Real Barriers to AI Adoption
Let me be more specific. I’m watching promising AI companies struggle to scale, and in almost every case, the bottleneck is human, not technical:
A customer’s engineering team discovers a brilliant AI solution, but the go-to-market team refuses to integrate it because they don’t understand it well enough to pitch confidently, or they’re worried it will cannibalize their existing revenue streams.
The product works beautifully in demos, but employees in the organization refuse to use it in their daily workflows because it requires changing established habits, or because they fear it’s collecting data that will be used against them in performance reviews.
The CTO or CIO loves the technology in principle but won’t approve the purchase because they don’t trust the AI’s decision-making process, can’t explain it to their board, or worry about compliance and liability issues if something goes wrong.
The sales team closes the deal, but implementation stalls for months because middle managers are terrified the AI will expose how little value they actually add, or because front-line workers are convinced—sometimes correctly—that this is the first step toward their jobs being eliminated.
In every single one of these scenarios, the technology works fine. The AI delivers the promised results. The latency is acceptable. The accuracy is high. But the product still fails because the people component wasn’t adequately addressed.
What AI Projections Get Wrong
This disconnect between technical capability and human adoption is catastrophically underrepresented in AI forecasts and strategic plans. Analysts project explosive growth curves based on technical milestones and computational improvements. VCs make investment decisions based on model performance benchmarks. Companies build roadmaps around feature releases and capability expansions.
But none of this matters if you can’t get humans to actually use the technology.
The fear of job loss isn’t some irrational resistance to be bulldozed through with better marketing. It’s a legitimate human response to a real threat, and it will absolutely slow adoption. The trust issues around AI decision-making aren’t Luddite skepticism—they’re reasonable concerns about accountability, explainability, and risk in high-stakes environments. The workflow disruption isn’t mere change resistance—it’s the cognitive load of learning new systems while still being held accountable for old performance standards.
These human factors are not bugs to be fixed in the next release. They’re fundamental constraints that will determine the pace and pattern of AI adoption, regardless of how good the technology gets.
Reframing the Problem
The companies that will win the AI revolution are the ones that recognize this reality and build their strategies accordingly. They understand that their real product isn’t the AI model—it’s a cultural change management process that happens to be enabled by AI.
This starts with how you frame the technology itself. Position AI as a replacement for human workers, and you’ll face fierce resistance from everyone in the organization who sees themselves as potentially replaceable (which is nearly everyone whose knowledge isn’t genuinely irreplaceable). Position it as augmentation—a tool that makes people better at their jobs, more efficient, more capable—and suddenly you have advocates instead of adversaries.
But framing alone isn’t enough. You need a genuine strategy for managing the cultural transition. That means:
Involving end users in the design process so they feel ownership rather than imposition
Creating clear policies about how AI-generated data will and won’t be used in performance evaluation
Providing extensive training and support, not just on how to use the tool, but on how to integrate it into existing workflows
Being transparent about what jobs might change and having honest conversations about upskilling and transition
Building trust gradually, starting with low-stakes applications before moving to mission-critical ones
Celebrating early adopters and creating social proof within the organization
This isn’t sexy. It doesn’t generate compelling headlines about technical breakthroughs. But it’s the actual work that determines whether your AI implementation succeeds or fails.
Your Cultural Change Plan Is Your GTM
Here’s the insight that many AI companies are missing: your plan for cultural change isn’t separate from your technology rollout plan—it IS your rollout plan. The rate at which you can effectively manage human adoption will be the primary constraint on your growth, not your technical capabilities.
You can have the fastest inference times, the most accurate predictions, and the most sophisticated architecture in the world. But if the CTO doesn’t trust your system enough to approve it, if employees sabotage it through non-adoption, if customers are too nervous about the implications to commit, then none of your technical advantages matter.
The limiting factor isn’t transistors or GPUs or training data. It’s the rate at which humans are willing to trust, adopt, and advocate for your technology. It’s whether you can convince salespeople to sell it enthusiastically, managers to champion it internally, and end users to integrate it into their daily practices.
We need to spend dramatically more time, energy, and resources on the human component of AI adoption. That means hiring not just machine learning engineers but organizational change specialists. It means measuring not just model performance but user trust and adoption metrics. It means building not just better algorithms but better transition paths for the humans whose work will be transformed.
The technology is hard, yes. But humans are harder. And until we acknowledge that the AI story is fundamentally a people story, we’ll continue to be surprised when promising technologies fail to achieve their projected impact.
The companies that internalize this truth—that build their strategies around the human constraints rather than just the technical capabilities—will be the ones that actually scale. Because in the end, every problem is a people problem, even in the age of artificial intelligence.