Stop saying 95% of AI projects fail. Start looking at HOW AI changed everyone's job.

What AI did to Ted's job

“AI projects fail” is a problem you can outsource. Pick a better model, find a better use case, run another pilot, fire/hire the SI. But having to look inwards and honestly say “we have no idea what AI did to our workforce” is a leadership challenge that compounds every quarter that question goes unanswered.

BCG surveyed 1,000+ C-level executives and found that 70% of AI implementation challenges are people- and process-related. Not technology. Not algorithms. McKinsey found that companies that intentionally redesign workflows around AI are nearly 3x more likely to achieve meaningful business impact. And Deloitte’s 2026 survey of 3,200+ respondents found that 84% of organizations have not redesigned a single role around AI capabilities.

The tools are deployed. The organization is untouched.

This week, HBR published an 8-month ethnographic study of AI’s human-level impact. Researchers embedded at a 200-person tech firm found that AI didn’t reduce workloads. It intensified them. As AI eroded capability boundaries, employees started picking up tasks that were never theirs, and the lines of what was and wasn’t their job began to blur. The buffer between work and rest eroded too: no one knew whether doing more faster meant going home earlier or filling the gap with more work. Multi-tasking became the normal operating mode as people ran multiple AI threads at once, stretching prioritization and focus to the breaking point.

Because leadership hadn’t taken it upon themselves to define what the new AI-led workplace means on a human level, the constraints the roles were designed around were gone. Without that definition, work doesn’t shrink. It compounds.

Look at it practically. There’s Ted. Ted is a technical copywriter with 20 years of experience. Ted now has AI.

What is Ted’s job? Is he writing, or editing what AI drafts? If his core skill just shifted from creation to judgment, are we measuring him on output or on quality of decisions? Does he still need the same reporting structure, or can he run autonomously with AI as his first-pass reviewer? Does his manager become a strategist instead of an editor? Can three Teds do what five did, and if so, what do the other two become?

Every Fortune 500 has ten thousand Teds. Almost none of them are asking these questions. They’re issuing AI subscriptions and counting adoption metrics while the actual work stays frozen in place. The roles, the scope, the reporting lines, the success criteria. All untouched.

The priority question facing leaders shouldn’t be whether their AI pilot worked. It’s whether anyone can tell Ted what his job is on Monday morning.

Start a conversation

cory@haldeman.co