AI Won't Fix What Leadership Broke
- Dr. John Dentico
- Jan 14
- 4 min read

Leaders are making a bet right now: that AI will solve their workforce problems. Replace disengaged workers. Increase productivity. Close the talent gap. Some are even treating it like the ultimate management tool, a way to become puppet masters pulling strings from above, finally achieving perfect control over every workflow, every output, every decision. It's an appealing fantasy, technology as the cure for organizational dysfunction. But isn't this exactly the frustration already fueling toxic workplaces?
Leaders operating from positional authority instead of influence, controlling instead of enabling, treating people like variables to be optimized rather than experts to be empowered? And here's the deeper contradiction: doesn't the puppet-master approach completely contradict the entire concept of an AI-augmented workforce? Augmentation means enhancing human capability, enabling better judgment, and amplifying contribution. Puppet mastering means controlling from above, extracting compliance, and diminishing agency. You can't have both.
Here's what happens when you try: AI on broken foundations creates automated chaos. You can't code your way out of a leadership vacuum. You can't prompt-engineer your way past a lack of strategic clarity. And you can't use technology to fix systems designed for compliance rather than contribution.
The evidence is already playing out, and it's damning. KPMG's global research reveals that more than half of people won't trust AI systems, a trust deficit that carries directly into HR and customer-facing deployments. Organizations are implementing AI tools to write performance reviews that employees view as opaque "black boxes" disconnected from their actual work. They're generating training content at scale without doing the hard diagnostic work to understand real skill gaps, amplifying generic one-size-fits-all programs faster than ever before.
And they're automating customer service interactions even though 53% of consumers actively dislike or hate the AI experience, 77% find chatbots frustrating, and 70% would consider switching brands after a single bad interaction. They're using technology to do the wrong things faster. When you automate a broken process, you don't fix it; you break it at scale. When you use AI to enforce rigid hierarchies and compliance-based workflows, you haven't created an augmented workforce. You've created a more efficient system for crushing exactly the human judgment and creativity that made your people valuable in the first place.
Here's why this keeps happening: leaders reach for technological solutions because the actual work, fixing values misalignment, restoring personal agency, and building real growth trajectories, requires them to confront decades of accumulated dysfunction. It's easier to buy an AI platform than to admit your organizational structure suppresses the very capabilities you're trying to automate. It's faster to implement a chatbot than to rebuild trust with customers your systems have been failing for years. And it's far less uncomfortable to blame "implementation challenges" than to acknowledge that 68% disengagement isn't a training problem or a communication problem: it's a leadership problem. AI doesn't solve that. AI reveals it, amplifies it, and makes it visible to everyone you were hoping wouldn't notice.
There's a different path, but it requires honesty about sequence. Organizations that get AI right don't start with the technology; they start by fixing the foundation. They address values misalignment before automating workflows. They restore personal agency before they deploy productivity tools. They build real growth trajectories before they scale training programs.
Fixing the foundation also means building a structure that enables rather than constrains, a structure designed for a living system, not a machine. A structure able to adapt to the speed and complexity of the change your organization faces. This isn't a one-size-fits-all framework you buy off the shelf. It's the structure each organization creates for itself, aligned with its specific mission and operational philosophy.
Structure that focuses on what drives engagement: values alignment, personal agency, and growth trajectory, the essential elements of a truly engaged workforce. Clear enough for alignment, flexible enough for adaptation, strong enough to scale without calcifying into the rigid hierarchies that created the crisis in the first place. Structure that provides clarity without crushing autonomy, that creates alignment without demanding conformity.
When you do that work first, AI becomes what it was supposed to be: an amplifier of human capability, not a replacement for the leadership you refused to provide. But here's the uncomfortable part: most organizations won't do that work. They'll keep reaching for technological shortcuts, implementing tools on broken foundations, and wondering why their best people are still walking out the door despite the shiny new AI strategy.
The question isn't whether AI will transform work. It will. The question is whether you're building the foundation that makes that transformation productive instead of destructive. Because right now, you're implementing technology on the same broken systems that created 68% disengagement in the first place. You're automating dysfunction, scaling misalignment, and digitizing the very leadership failures that drove your best people out the door.
Next week, I'll show you what happens when organizations see the warning signs and ignore them anyway, when leaders choose technological shortcuts over structural fixes and hope the problem solves itself. Spoiler: it doesn't. It compounds. Are you fixing the foundation, or just automating the collapse?


