Respect the SDLC
If you skip the process, you inherit the mess
Everyone talks about how AI will change software development. We need to talk about what won’t change.
If anything, AI is forcing a return to first principles, increasing the importance of good hygiene in the SDLC:
If code is generated, specs need to be precise. Bad instructions produce scalable garbage.
If output is probabilistic, reviews need to focus on intent, not syntax.
If no one wrote every line, testing becomes your only source of truth.
AI may turn out to be less a replacement for engineering rigor than a very efficient machine for punishing its absence. The abstraction is rising, but the surface area of complexity is exploding underneath: more services stitched together, more hidden dependencies, and more non-deterministic behavior. Someone still needs to reason about architecture, failure modes, and tradeoffs.
To be clear, this is not a hand-wringing rant about the hype. The technology is excellent and getting better fast. But using it well still requires systems thinking and technical judgment.
Generating code is not the same thing as building software, in roughly the same sense that producing words is not the same thing as having a legal argument. Engineering starts after the code exists. It’s the work of validating it, integrating it, deploying it, and ensuring it doesn’t break something three services downstream at 2 a.m. Today, everyone can build but very few can build systems that last. That distinction is about to get expensive.
Engineering orgs run by non-technical leadership are especially likely to learn this lesson the hard way. Expect a wave of premature cost-cutting followed by expensive rehiring cycles once the Sev 1s start stacking up. If you run an engineering org without understanding the distinction between “more code” and “more reliability,” AI will eventually explain it to you in production.
This dynamic is made worse by a familiar feature of large engineering organizations: internal politics. Scope is claimed by promising more than can realistically be delivered, timelines are pulled forward to win resourcing, and complexity is deferred. AI pours fuel on this. It becomes easier to show rapid early progress, easier to justify bigger surface area, and easier to paper over weak foundations, at least temporarily. The bill still arrives.
On the ground, engineers are already seeing the pattern. AI works best when it is dropped into an existing codebase with established patterns. There, it is genuinely useful. It learns the local style, extends prior decisions, and gives you leverage. But if you ask it to vibe-code a system from scratch, you often get something like a software taxidermy project: all the right shapes are present, but the animating logic is a little unclear. In the absence of strong patterns, the model invents some. They are not always good. You end up with inconsistent abstractions, dubious data flows, awkward interfaces, and code that looks fine until it is asked to scale, at which point it fails in new and time-consuming ways.
So teams have two choices:
Let the system emerge organically through prompting → fast, messy, brittle
Do the upfront work: define architecture, constraints, and guidelines → slower, but durable
Right now, many teams are choosing (1). Not because it’s better, but because it’s incentivized. When output is measured in PRs and velocity, there’s very little reward for thinking a few steps ahead, designing clean systems, and enforcing consistency. Short-term progress wins; long-term coherence is someone else’s problem. So complexity gets deferred and paid back later with interest.
That is why some very old-fashioned building blocks suddenly matter a great deal.
Specs: A surprising number of people seem to think they can prompt, iterate, and ship their way into coherence. That’s the path to misaligned systems. If your specs are fuzzy, your system becomes inconsistent across components, fragile at scale, and hard to debug. Spec quality is now a first-order engineering skill.
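A precise spec does not have to be a long document; even a typed contract that pins down edge cases removes most of the room a code generator has to guess. A minimal sketch (the `apply_discount` function and its rules are hypothetical, chosen only to illustrate the contrast with a fuzzy prompt like “apply a discount to the total”):

```python
def apply_discount(total_cents: int, percent: int) -> int:
    """Return the discounted total, in integer cents.

    Spec decisions made explicit rather than left for the model to invent:
    - amounts are integer cents, never floats (no currency rounding drift)
    - percent outside [0, 100] raises ValueError, it is not silently clamped
    - fractional cents are floored, i.e. rounded in the customer's favor
    """
    if not (0 <= percent <= 100):
        raise ValueError(f"percent out of range: {percent}")
    return total_cents * (100 - percent) // 100
```

Every line of that docstring is a decision someone had to make. Leave them out of the spec and the generated code still makes them, just inconsistently across components.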
Reviews: AI output looks plausible but can hide silent failures: missed edge cases, security vulnerabilities, subtle logic errors that look correct. Reviews now need to focus on big-picture logic: Does this actually do what we intend? What assumptions is it making? Where will it break?
Testing: Cheap iteration does not make failure cheap. It mostly makes failure more frequent. If the operating model is “we can always patch it later,” the result is software that demos well, degrades in production, and regresses constantly. At AI-generated scale, you cannot rely on understanding every line of code. You have to rely on verifying behavior. Unit tests define expected behavior. Integration tests catch system-level failures. Evals matter for model-driven outputs. The less you can rely on direct authorship, the more you have to rely on verification.
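When you cannot vouch for every line, tests have to assert what the code must do, not how it does it. A sketch using only the Python standard library; `dedupe` is a hypothetical stand-in for any generated function treated as a black box:

```python
import random

def dedupe(items):
    # Stand-in for AI-generated code you did not author line by line.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_dedupe_behavior(trials: int = 200) -> None:
    """Verify behavior, not implementation: invariants that must hold
    for any correct dedupe, whoever (or whatever) wrote it."""
    rng = random.Random(42)  # seeded so failures are reproducible
    for _ in range(trials):
        items = [rng.randrange(10) for _ in range(rng.randrange(30))]
        result = dedupe(items)
        assert len(result) == len(set(result))   # no duplicates remain
        assert set(result) == set(items)         # nothing lost or invented
        positions = [items.index(x) for x in result]
        assert positions == sorted(positions)    # first-seen order preserved

check_dedupe_behavior()
```

The point is the shape of the test: it would catch a regression in `dedupe` even if the implementation were regenerated from scratch tomorrow.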
AI is making code generation easier, but it’s not going to make engineers less ‘technical’. In fact, I think eventually everyone is going to learn the core principles of the SDLC: version control, testing discipline, structured iteration. The bottleneck shifts from can you code → can you operate like an engineer.
AI doesn’t just make everyone better; it widens the gap between good and average.
Teams with strong SDLC → move faster and more reliably.
Teams without it → accumulate invisible debt at 10x speed.
AI is a force multiplier for your existing engineering discipline, whatever it may be.


