23.02.2026 · 8 min read
Part 5: Redefining Roles
The biggest barrier to successful AI adoption isn't technology; it's that organizations haven't redefined what their people actually do.
Early Release: This is a draft version of the article and will continue to evolve.
Co-authored with AI: This article was written in collaboration with AI tools.
The biggest barrier to successful AI adoption isn't technology and it isn't budget. It's that organizations haven't redefined what their people actually do. This is a leadership problem, not an IT project.
We've spent the previous articles talking about knowledge, agent architectures, and interfaces. All of that is important. But none of it works without the people who operate, evaluate, and improve the agents daily. And the question of what that actually requires from a workforce deserves more attention than it typically gets.
We've Been Here Before
A decade ago, companies invested heavily in dashboards and analytics tools. The promise was that data would transform decision-making at every level. For many organizations, that promise went unfulfilled. Not because the tools were bad, but because the people using them couldn't interpret the data, ask the right questions, or challenge the outputs. Companies had dashboards everywhere and data literacy nowhere.
The same pattern is repeating with AI. Organizations are deploying agents without investing in the skills their people need to evaluate, teach, and maintain those agents. The agent gives a confident answer. Is it right? Is it drawing from the right sources? Is it applying the policy correctly, or just generating something that sounds correct? If your teams can't answer these questions, the system becomes a black box that nobody can improve. And as we established in article 2, a system that nobody improves is a system that degrades.
The data literacy lesson is straightforward: tools without the skills to use them are expensive decorations. The organizations that got value from analytics were the ones that invested as much in their people's ability to work with data as they invested in the data infrastructure itself. The same will be true for AI.
How Roles Actually Change
It would be reassuring to say that roles simply evolve and nobody loses their job. That's the comfortable narrative, and it contains some truth. A support agent who used to answer routine questions now reviews the cases the AI couldn't handle and feeds learnings back into the system. An operations analyst who followed SOPs now designs them and evaluates whether the agent follows them correctly. A compliance specialist translates regulatory requirements into rules that agents can enforce. The work changes character. The domain knowledge remains essential.
But that's only one version of what happens, and probably not the most common one.
Some companies will find that a small team of capable people, supported by well-built agents, can do what previously required a much larger organization. Brex took this approach: rather than integrating agents into every existing team, they built a dedicated AI operations group that could move fast and rethink processes from scratch. The agents didn't augment existing roles. They replaced entire workflows, and the people who remained were the ones who could design, evaluate, and steer the AI systems.
Other companies, especially those with deep domain complexity, will need more specialized roles integrated directly into their teams. A hospital can't centralize its agent management in a single department because the knowledge required to evaluate a clinical decision support agent is fundamentally different from the knowledge required to evaluate a billing agent. In these organizations, agents become tools within existing team structures, and roles evolve rather than consolidate.
And there's a third pattern that will become more common over time. As agents improve and accumulate institutional knowledge through the feedback loops we described in earlier articles, companies filling open positions will increasingly ask whether the role still needs a person or whether it can be handled by an agent operating under human oversight. Not a dramatic round of layoffs, but a gradual thinning: fewer hires for routine work, higher expectations for the people who remain, and a slow rebalancing of headcount toward the roles that require judgment, creativity, and the kind of contextual reasoning that agents still struggle with.
Being honest about this range of outcomes matters. Leaders who promise their teams that "nobody's job is going away" will lose credibility when the org chart changes. Leaders who frame it as "your role will evolve and you'll have the support to evolve with it" are closer to the truth, but still only telling part of it. The most honest framing is probably this: the need for domain expertise isn't going away, but the number of people required to apply that expertise is going to decrease in many functions, and the nature of the work will change for everyone who remains.
A New Skill Profile
Across all of these patterns, one thing is consistent: the people who thrive will be the ones who can bridge domain knowledge and technical capability.
Your compliance person doesn't need to become an engineer. But they need to understand enough about how the agent reasons to teach it effectively. Your support lead doesn't need to write code. But they need to evaluate whether the agent's responses reflect your company's actual values, not just its documentation. Your operations manager doesn't need to build infrastructure. But they need to be able to design a test case that catches the edge case they spotted last Tuesday.
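To make that last example concrete: an agent test case doesn't require engineering sophistication. Here's a minimal sketch in Python of what one might look like; the `run_agent` stub, the refund-policy scenario, and the phrase-matching check are all illustrative assumptions, not a reference to any particular tool.

```python
# A domain expert encodes last Tuesday's edge case as a regression test.
# `run_agent` is a stand-in for whatever agent you operate, and the
# refund-policy scenario is invented for illustration.

def run_agent(question: str) -> str:
    """Placeholder; replace with a call to your actual agent."""
    return "Purchases outside the 30-day window are not eligible for a refund."

EDGE_CASES = [
    {
        # The edge case: a refund requested on day 31 of a 30-day window.
        "question": "Customer bought on Jan 1 and asked for a refund on Feb 1. Eligible?",
        "must_mention": ["30-day", "not eligible"],  # phrases a correct answer should contain
        "must_not_mention": ["full refund"],         # phrases that signal the known failure mode
    },
]

def evaluate(cases):
    for case in cases:
        answer = run_agent(case["question"]).lower()
        missing = [p for p in case["must_mention"] if p.lower() not in answer]
        forbidden = [p for p in case["must_not_mention"] if p.lower() in answer]
        status = "PASS" if not missing and not forbidden else "FAIL"
        print(f"{status}: {case['question']!r} missing={missing} forbidden={forbidden}")

evaluate(EDGE_CASES)
```

The mechanics matter less than the principle: the person who spotted the edge case can turn it into a check that runs from then on.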
This bridging ability is a new skill profile, one that didn't exist five years ago. It sits in the gap between pure domain expertise and pure technical skill. And most organizations haven't started building it deliberately. They're waiting for it to emerge organically, an approach that worked about as well for data literacy in 2015.
The hiring implications are real. When you're evaluating candidates, the ability to work with AI systems, evaluate their output critically, and teach them through structured feedback becomes a meaningful differentiator. The training implications are real too. Your existing people need structured paths to develop these skills: not a lunch-and-learn about prompt engineering, but sustained investment in the capabilities that make someone effective at working alongside agents.
Fluency as Strategy
You can't leave this to individual initiative. Some people will experiment on their own, develop intuitions, and become effective AI collaborators. Most won't, not because they're resistant, but because they have day jobs and nobody gave them a framework for building these skills.
Brex formalized this by defining AI fluency levels across the organization: User, Advocate, Builder, and Native. Each level comes with specific capabilities and expectations. Quarterly assessments track progress. The system isn't punitive. It's a structure that aligns incentives with capability development and makes progress visible.
The underlying goal is worth adopting regardless of the specific framework: the people closest to the domain should be able to develop and maintain their own agents without depending on engineering for every change. When the compliance team can update their agent's knowledge base, create test cases, and refine its behavior directly, improvements happen at the speed of domain insight rather than the speed of engineering sprints. When they can't, every correction sits in a backlog.
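As a sketch of what that self-service loop might look like, assume the knowledge base is just structured entries the domain team edits directly, with each correction carrying its own regression question. The entry format, the vendor-onboarding scenario, and the handbook reference below are all hypothetical.

```python
# Minimal sketch: a domain-owned knowledge entry that carries its own
# regression question, so a correction and its check ship together.
# The format and the scenario are assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    topic: str
    rule: str       # the policy, in the domain team's own words
    source: str     # provenance, so reviewers can audit the rule
    test_questions: list[str] = field(default_factory=list)

# The compliance team updates the rule and records the question that exposed the gap.
entry = KnowledgeEntry(
    topic="vendor-onboarding",
    rule="Vendors in sanctioned jurisdictions require legal sign-off before onboarding.",
    source="Compliance handbook, section 4.2 (hypothetical reference)",
    test_questions=[
        "Can we onboard a vendor registered in a sanctioned jurisdiction without review?",
    ],
)

print(f"Updated '{entry.topic}' with {len(entry.test_questions)} regression question(s).")
```

The design choice worth copying is the pairing: a knowledge change and the test that guards it ship together, so the feedback loop never waits on a separate engineering ticket.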
Leadership, Not IT
If you delegate AI integration to your engineering team alone, you'll end up with technically sophisticated agents that don't reflect how your business actually works. The domain knowledge lives with your people. The technical capability lives with your engineers. Someone has to connect the two, and that someone is leadership.
This means several concrete things. It means investing in training programs that build AI fluency across the organization, not just in the engineering department. It means redefining role descriptions to reflect the new skill profile. It means creating career paths that reward the ability to work effectively with AI systems. And it means being honest with your organization about how roles will change, even when the honest answer is uncomfortable.
The leaders who treat this as a technology deployment will end up with expensive tools that their people can't fully use. The ones who treat it as an organizational evolution, who invest in their people's ability to work with AI as seriously as they invest in the AI itself, will build the kind of compounding learning systems we've described throughout this series.
People Remain Central
Every previous wave of automation changed what people do. This one is no different, except that the change sits closer to what knowledge workers consider "their job." Answering questions, analyzing information, making recommendations, following procedures. These used to be purely human activities. They're becoming collaborative activities, and in some cases, automated ones.
That makes this transition harder to navigate emotionally and organizationally. But the core challenge is the same one that every successful technology adoption has required: investing in people's ability to work with new tools, being honest about what's changing, and building the structures that let an organization learn its way through uncertainty.
The organizations that get the most from AI won't necessarily be the ones with the best models or the cleverest architectures. They'll be the ones whose people learned to work with it, who built fluency, redefined roles honestly, and treated the cultural work as what it is: the hardest and most consequential part of the whole effort.
This article is part of a five-part series exploring AI agents and organizational transformation: