The Undefined Process Gap

Or: Don't just train agents.

I recall seeing discussions online suggesting that the current automation wave will expose a major documentation gap: the difference between information that is properly documented and the informal, institutional knowledge that lives only in the organizational hive mind and in coworkers' personal networks. Many of these processes can't (yet) be automated because their true nature has never been properly documented, if it even can be. Some rest on anecdote rather than data and are never formally captured.

Just as the nuances of a human identity can't be fully captured in a database of reductive attributes, the complexities of a modern organization may never be fully captured in standard operating procedures, and therefore may never be fully automatable.

That raises a question: why is so much effort dedicated to training AI tools, while training for human employees is often neglected? Onboarding often amounts to self-serve links filled with stale content, and “training” rarely goes beyond mandatory compliance modules. Beyond that, training is frequently left to grassroots efforts—employees teaching peers through lunch-and-learns—with little organizational structure, oversight, or support.

I appreciate that organizations are using this moment to codify processes and strengthen their standard operating procedures. We’re seeing teams try to define exactly where AI agents can make decisions independently, and establish criteria for when to require input from a human in the loop.

I’d argue those same criteria and guardrails should have existed for human employees all along. We’ve often assumed we can leave things vague and rely on “common sense” and people’s ability to interpret nuance.
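Codifying such guardrails has a useful side effect: once written down, they become explicit and testable rather than left to "common sense." A minimal sketch of what that could look like, where every name, field, and threshold is an illustrative assumption rather than any real system's policy:

```python
from dataclasses import dataclass

# Hypothetical sketch: escalation criteria written as an explicit,
# testable policy instead of unwritten institutional knowledge.
# The fields and thresholds below are illustrative assumptions.

@dataclass
class Action:
    kind: str          # e.g. "refund", "send_email"
    amount: float      # monetary impact; 0.0 if none
    reversible: bool   # can the action be undone later?

def requires_human_review(action: Action, confidence: float) -> bool:
    """Return True when the actor must escalate instead of deciding alone."""
    if not action.reversible:
        return True    # irreversible actions always get a second pair of eyes
    if action.amount > 500:
        return True    # high monetary impact escalates
    if confidence < 0.9:
        return True    # low confidence in the decision escalates
    return False       # otherwise, proceed independently

# A small, reversible, high-confidence action proceeds without review:
print(requires_human_review(Action("refund", 20.0, True), 0.95))  # False
```

Note that nothing in this sketch is specific to AI agents: the same rules could serve as the decision guide handed to a new human employee, which is the point of writing them down in one place.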

But if we’re going to the trouble of codifying principles and rules, we might as well use them to define desired organizational behavior—whether they’re consumed by agents or employees.