Our Engineering Metrics Are About to Get Even More Misleading in the Age of AI
The more AI increases visible activity, the less visible activity tells us about real engineering performance.
For most of our industry’s history, the people with the most authority were the people closest to the act of building.
For decades, engineering has been shaped around a set of principles that we rarely question. Maintainability, testability, modularity, and reusability have been treated as foundational qualities of good systems. They are deeply embedded in how we design architectures, review code, and evaluate technical decisions.
For decades, we repeated a simple idea: code is read more than it is written.
Execution is no longer scarce. It has been compressed by years of tooling improvements and, more recently, by AI. The cost of producing software continues to fall.
For years, we optimized engineering speed.
Two weeks ago, I built an MVP for StrengthsOS in under 12 days. At the same time, I started rewriting Octolaunch from scratch. That’s not the interesting part.
As organizations scale, governance expands. Reporting structures multiply, compliance requirements mature, alignment rituals increase, and cross-functional touchpoints become more frequent. None of this is inherently problematic. In fact, process often emerges to reduce chaos and increase predictability.
In technology-driven organizations, engineering is not merely a delivery function — it is the execution engine of the business. Strategic ambition, product vision, commercial commitments, regulatory obligations, and operational reliability ultimately depend on engineering capacity.
For almost a year, I didn’t publish anything. Not because I didn’t have opinions. Not because I stopped caring. But because I was struggling.