We need to have a very uncomfortable conversation about the millions of dollars we just poured into Generative AI licenses.
You approved the budget for GitHub Copilot, ChatGPT Enterprise, and the latest “AI-powered” marketing suites based on a simple, seductive promise: Productivity. The sales pitch was flawless. “If a developer can write code 50% faster, we can ship features 50% faster.” “If marketing can write copy in seconds, we can dominate the channel.”
It sounds logical. It is also completely, catastrophically wrong.
I am looking at our latest operational metrics, and I see a different story unfolding. Our “Output”—the sheer volume of lines of code, pages of documents, and number of emails—has exploded by 300%. But our Throughput—the actual value delivered to the customer—has stagnated. In some critical areas, it has actually declined. Our cloud infrastructure bills are rising faster than our user growth. Our senior staff is reporting record levels of burnout.
Here is the brutal reality: We haven’t purchased “productivity.” We have purchased a tool that allows our junior employees to launch a Distributed Denial of Service (DDoS) attack on our own infrastructure.
We are paying our own people to flood our system with noise, and we are paying our most expensive experts to clean it up. This is not innovation. It is an act of financial suicide.
The Physics of the Flood
To understand why “faster coding” is killing us, we must stop thinking like Project Managers and start thinking like Systems Architects. Every business process is essentially a pipe with two distinct ends: an Upstream (Generation) and a Downstream (Consumption/Verification).
Upstream is where we write code, draft emails, and create strategy documents. Downstream is where that work is reviewed, tested, verified for legal compliance, and read for decision-making.
Before the AI boom, there was a natural, biological balance between these two. Humans write slowly. Humans read slowly. The friction of creation acted as a natural filter for quality. You had to think before you wrote because writing was expensive.
Enter GenAI. We have effectively handed a nuclear-powered firehose to the Upstream. A junior developer who used to write 100 lines of buggy code a day can now generate 1,000 lines of buggy code in an hour.
But what about the Downstream? We haven’t upgraded the biological processors of your Senior Architects, your Legal Counsel, or your QA Leads. They are still humans. They still read at 250 words per minute. They still need to sleep.
This imbalance is the definition of a DDoS attack. In cybersecurity, a DDoS happens when a network of compromised machines floods a server with more requests than it can handle, causing it to crash. In our organization, the “compromised machines” are our AI-enabled staff, and the “server” is our management and QA layer.
We are seeing a Queueing Theory nightmare unfold in real time. As the arrival rate of work (AI output) approaches the service rate of review (human verification), wait times don't grow linearly; they blow up, and once arrivals actually exceed reviews, the backlog grows without bound. We are not moving faster. We are in Gridlock.
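If that sounds abstract, here is a back-of-the-envelope sketch in Python. It assumes, purely for illustration, that review behaves like a textbook M/M/1 queue and that our reviewers can clear ten submissions a day; both assumptions are simplifications, but the shape of the curve is the point.

```python
# Back-of-the-envelope M/M/1 sketch (a simplifying assumption, not a model of our
# actual pipeline): mean turnaround is W = 1 / (mu - lambda), which blows up as the
# arrival rate of work approaches the review (service) rate.

def mean_turnaround_days(arrival_rate: float, service_rate: float) -> float:
    """Average days a submission spends waiting for and undergoing review."""
    if arrival_rate >= service_rate:
        return float("inf")  # arrivals outpace reviews: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # hypothetical: reviewers can clear 10 submissions per day
for arrival_rate in (5.0, 8.0, 9.0, 9.5, 9.9):
    load = arrival_rate / service_rate
    print(f"reviewer load {load:.0%}: average turnaround "
          f"{mean_turnaround_days(arrival_rate, service_rate):.1f} days")

# reviewer load 50%: average turnaround 0.2 days
# reviewer load 80%: average turnaround 0.5 days
# reviewer load 90%: average turnaround 1.0 days
# reviewer load 95%: average turnaround 2.0 days
# reviewer load 99%: average turnaround 10.0 days
```

Pushing the flood from 50% to 99% of review capacity does not double the wait; it multiplies it fifty-fold. That is what "more output" is buying us.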
The Economics of Bullshit
This isn’t just an operational theory. The data is already in, and it confirms the worst-case scenario regarding the quality of this new “velocity.”
To understand the economics of this failure, we must look to the Bullshit Asymmetry Principle, formulated by Alberto Brandolini. It states: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.”
AI has reduced the cost of generating plausible-sounding content—hallucinated code, generic strategy, derivative copy—to near zero. The cost of verifying that content, however, remains high. In fact, it has gotten higher: spotting a subtle logic error buried in a 500-line AI-generated script is far harder and more cognitively draining than generating it was. We have inverted the economic equation of work.
If you think “AI code is getting better,” you need to look at the data. GitClear, a developer analytics firm, analyzed over 150 million lines of code changed between 2020 and 2023. Their 2024 report dropped a bombshell: “Code Churn”—the percentage of lines that are thrown out or rewritten less than two weeks after being authored—is on the rise.
The report explicitly links this degradation to the rise of AI assistants. We are seeing more “copy-paste” code that looks correct on the surface but fails under scrutiny. We are building a codebase of disposable garbage.
Compounding this is the illusion of security. A 2023 study from Stanford University found that developers using AI assistants wrote significantly less secure code than those who didn’t. But here is the kicker: They were more likely to believe their code was secure. This is the Dunning-Kruger Effect weaponized by automation. We have empowered our least experienced staff to create our most critical risks, and we have given them the false confidence to bypass scrutiny.
The Financial & Strategic Fallout
When you translate this operational gridlock into the language of the Balance Sheet, the picture becomes grim.
First, look at the OpEx Inversion. We are currently allocating our most expensive resources—Senior Engineers and Architects costing $150k+ a year—to act as janitors for our cheapest ones. Instead of designing the future architecture of the company, your best talent is spending 40 hours a week reviewing AI-generated pull requests from juniors. This is a gross misallocation of capital. We are burning cash to induce burnout.
Second, we are facing the “Wirth’s Law” Tax. Niklaus Wirth stated that “Software gets slower faster than hardware gets faster.” AI generates bloated, unoptimized code. It doesn’t care about memory efficiency or clean architecture. This leads to a direct explosion in our Cloud and Compute bills. We are essentially paying AWS and Azure for the privilege of hosting inefficient code.
Strategically, the risk is even more profound. We are facing a Signal-to-Noise Collapse. When 90% of internal memos and reports are AI-generated fluff, nobody reads anything anymore. Critical strategic warnings are being buried in the noise. We are flying blind because our instrument panel is jammed with spam.
And finally, there is the long-term killer: Skill Atrophy. Junior staff relying on AI are not learning the “First Principles” of their craft. They are becoming “Prompt Operators,” not Engineers. If the AI gets it wrong, they lack the fundamental knowledge to fix it. We are hollowing out our future talent pipeline for short-term gains that don’t even exist.
Building the AI Firewall
We are suffering from the Feature Factory syndrome, accelerated by AI. We have confused Motion with Progress, Output with Outcome, and Efficiency with Effectiveness.
The uncomfortable reality is that our organization cannot handle the speed of AI. Our processes, our governance, and our review structures were designed for a human-speed world. By bolting a Ferrari engine onto a horse-drawn cart, we haven't built a faster vehicle. We have just guaranteed that the wheels will fly off at the first turn.
We must stop the bleeding. I am not suggesting we ban AI. I am suggesting we stop using it like amateurs. Here is the remediation plan to restructure our flow:
1. Shift Investment: From Generation to Consumption
Stop buying tools that help people create more. Start buying tools that help people review faster. We need to equip Senior Reviewers with AI tools that summarize changes, detect anomalies, and auto-run unit tests. The “Filter” must be as powerful as the “Source.”
2. The “Liability Summary” Protocol
We need to re-introduce friction. Friction is necessary for quality. Any submitted code or document that is more than 50% AI-generated must be accompanied by a Liability Summary: a human-written paragraph explaining why the logic is sound and how it was tested. If the employee can't explain it, they can't ship it. This stops the “Copy-Paste-Pray” loop.
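To show what enforcement could look like, here is a hedged sketch. The field names, the 50% threshold check, and the length heuristic are all hypothetical assumptions for illustration, not an existing tool.

```python
# Hypothetical sketch of enforcing the Liability Summary at submission time.
# Fields, threshold, and the length heuristic are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Submission:
    ai_generated_fraction: float  # estimated share of AI-generated content, 0.0 to 1.0
    liability_summary: str        # the author's own explanation of logic and testing

def may_ship(sub: Submission) -> bool:
    """Mostly-AI work without a substantive human explanation is bounced back."""
    if sub.ai_generated_fraction <= 0.5:
        return True  # mostly human-written: normal review path
    # Crude proxy for "the author actually explained it"; a real check would be richer.
    return len(sub.liability_summary.strip()) >= 200

print(may_ship(Submission(0.8, "")))  # False: copy-paste-pray, returned to the author
```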
3. The Automated “Firewall”
We must adopt an Inverted Human-in-the-Loop model. Humans should not be checking basic syntax; machines should check machines. We need to implement strict “Quality Gates” in the CI/CD pipeline. If AI-generated code does not pass 100% of automated tests with high code coverage, it is rejected automatically. It never reaches a human reviewer. We do not waste human brain cycles on garbage.
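As a sketch of what such a gate might look like (the thresholds and field names below are hypothetical, not our current pipeline):

```python
# Hypothetical quality gate: AI-assisted changes never reach a human reviewer
# unless every automated check is green. Thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class CheckResults:
    tests_passed: int
    tests_total: int
    line_coverage: float  # 0.0 to 1.0
    lint_errors: int

def passes_quality_gate(results: CheckResults, min_coverage: float = 0.85) -> bool:
    """Reject automatically unless all tests pass, coverage is high, and lint is clean."""
    return (
        results.tests_passed == results.tests_total
        and results.line_coverage >= min_coverage
        and results.lint_errors == 0
    )

# One failing test: the change is auto-rejected before it costs a human a single minute.
submission = CheckResults(tests_passed=41, tests_total=42, line_coverage=0.91, lint_errors=0)
print("route to human review" if passes_quality_gate(submission) else "auto-reject")
```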
4. Change the Metrics (Goodhart’s Law)
Stop measuring lines of code. Stop measuring number of commits. Goodhart's Law warns that when a measure becomes a target, it ceases to be a good measure, and raw volume is the easiest target of all to game with AI. We must instead measure “First Time Right”—the percentage of work that passes review without rework. If a developer uses AI to ship fast but gets rejected five times, their performance score drops. Incentivize precision, not volume.
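Measuring this is cheap. Here is a hedged sketch, assuming we can count rework rounds per merged change; the sample numbers are made up.

```python
# "First Time Right": the share of changes that pass review with zero rework rounds.
# The sample data is hypothetical; the point is that the metric rewards precision.

def first_time_right(rework_rounds: list[int]) -> float:
    """rework_rounds: rejected review cycles per merged change (0 = clean first pass)."""
    if not rework_rounds:
        return 0.0
    return sum(1 for r in rework_rounds if r == 0) / len(rework_rounds)

quarter = [0, 2, 1, 0, 3, 0, 5, 1, 0, 2]  # ten hypothetical changes from one team
print(f"First Time Right: {first_time_right(quarter):.0%}")  # 40%
```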
The Final Word
AI is a leverage multiplier. It multiplies what you have. If you have a disciplined, high-flow process, AI multiplies value. If you have a chaotic, low-trust process, AI multiplies chaos.
Right now, we are scaling chaos. Let’s build the firewall before the system crashes.