Standards Development in the Age of AI: A Reflection

We are in a period of significant change, which sounds dramatic until you realize we're always in a period of significant change. The timescale shifts, the technologies shift, the players shift - but the fundamentals of good governance don't.
I come into standards development with the mindset that the best governance frameworks are the ones you barely notice. If a standard is written well, it should read as though it simply codifies best practices and the natural order of the industry - not as though someone sat in a room for three years trying to prescribe every possible scenario.
Some technical standards are prescriptive, and rightfully so, but in a management system or program standard that approach does not work well. To be very clear: I am not talking about technical engineering standards here - I am talking about higher-level management and systems standards. In that context, there is little value in writing prescriptive standards (when we should be writing risk-driven, performance-based ones) and then wondering why they are outdated before the ink dries.
My core belief about good governance is this: keep it light, keep it lean, and for the love of everything - if it's expired or no longer serving its purpose, delete it. Standards are living documents, not monuments.
The Slowest Moving Machine in the Room
Policy development moves in cycles of 3, 5, 10, sometimes 20 years. If you come into this work expecting the pace of a software sprint, you will burn out or become insufferable to work with - possibly both. The mindset shift required to do this work well is less about urgency and more about patience with purpose.
What keeps impact alive during those long development cycles? A few things, in my experience. Human connection matters more than most technical people want to admit, and humour is an underrated tool in rooms full of people who disagree.
Foresight matters too - specifically, the ability to write standards that measure outcomes rather than prescribe methods, which is the only real way to stay relevant across a long policy timeline. And honestly? You have to make peace with the fact that you cannot control how this unfolds. I think about this in terms of a Tao mindset: you're not steering the river, you're helping define its banks.
Performance-based standards survive time. Prescriptive ones risk redundancy and a steady churn of revisions and amendments. When you shift your thinking away from "what technology should they use" and toward "what outcome are we trying to achieve, and is it reasonable given real-world resources and costs," the standard you write has a much longer shelf life.
That shift also forces honesty about what is actually achievable - which is the cornerstone of credible regulation.
Where AI Fits In
When ChatGPT launched in late 2022, I read OpenAI's published papers alongside the media coverage. I'm not an AI algorithm engineer, but I understand systems. The gaps in those early papers were real and worth paying attention to - chief among them, the problem of hallucinations.
The technology has improved significantly since then. Anyone who watched the viral post from the creator of Claude Code demonstrating what agentic processes can do with sufficient compute knows the ceiling has risen considerably. But most of the world, including many of the people writing policy around AI, hasn't kept pace with that information - and frankly, most of them don't need to.
Here's the plain version of what I think anyone developing AI standards actually needs to understand: AI hallucinates, and many of the most significant risks can be mitigated by keeping humans in the loop. That second point isn't just about protecting jobs - it's about professional accountability, which is a foundational concept in engineering and industry. An AI can generate a technically correct drawing. That doesn't make it an engineer. An engineer carries responsibility for their professional judgement, and that accountability cannot be delegated to a model. Likewise, just because a language model can write your email doesn't mean it sent it. You did.
I also think there's something useful in recognizing that AI systems, like all systems built by humans, have natural limits. Energy constraints, implementation limits, infrastructure dependencies - these are real. Engineers understand this intuitively because we work with constraints all the time.
We regulate wireless spectrum in part because of the physics - for a given data rate, bandwidth and signal power trade off against each other, and some frequency ranges can harm human biology. The regulation formalizes what the physics already constrains. AI is no different. When we accurately define what AI is within the context of a standard, and when we identify the natural limits of the technology as applicable to that standard's scope, I'd argue we're about 80% of the way to regulating it well. The remaining 20% is continual improvement. That's not a weakness in the framework - that's the point.
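The spectrum analogy rests on a real result. A brief sketch, assuming the Shannon-Hartley theorem is the constraint being gestured at (the post doesn't name it): channel capacity C is bounded by bandwidth B and the signal-to-noise ratio S/N,

```latex
% Shannon-Hartley theorem: the maximum error-free data rate C of a
% channel, given its bandwidth B and signal-to-noise ratio S/N.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
% Hold the target capacity C fixed: shrinking the bandwidth B forces
% the required S/N - and hence the transmit power - upward.
```

This is the sense in which regulation formalizes a constraint the physics already imposes: no rule-making can move a system past the capacity bound.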
For this to actually work, the standards themselves need to be built for change. Public review cycles, amendment processes, sunset clauses - Canada's own CSA Group has recent AI management system standards with a lot to teach here. The infrastructure for iteration needs to be part of the design from day one.
Looking Ahead: 2027-2030
The organizations and standards bodies that will adapt to the changing technology environment well are the ones that resist the urge to over-prescribe in a moment of anxiety, that write for outcomes rather than methods, and that build in the humility to be corrected by what actually happens.
There's a passage I keep coming back to in the Tao Te Ching: the Master doesn't talk, she acts. When her work is done, the people say, "Amazing: we did it, all by ourselves!" That's what good regulatory development looks like when it works - invisible, durable, and attributable to everyone and no one.
Aside: The tricky thing about defining AI
I'll borrow Arvind Narayanan and Sayash Kapoor's example: using the blanket term "AI" to describe everything from chatbots (generative AI) to credit scoring (predictive AI) is as unhelpful as using the word "vehicle" to refer to bicycles, cars, and rockets simultaneously. So take this post with that big grain of salt.