We Need to Learn How to Write Software
Everything Traces Back to Bugs
Strip away the methodologies, the management frameworks, the process improvements, and software engineering has exactly two categories of problems:
- Bugs - the reliability and testing problem
- Dealing with the unpredictability of bugs - the management and scheduling problem
Every other concern in software development ultimately reduces to one of these.
- Why do we need extensive code reviews? To catch bugs before they reach production.
- Why is estimation so difficult? Because we can't predict how long debugging will take.
- Why do we need elaborate deployment processes? To minimize the impact when bugs make it through.
- Why do teams implement feature flags, canary deployments, and rollback mechanisms? Because bugs are inevitable and we need to contain their damage.
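A feature flag is the simplest of these containment mechanisms: a runtime switch that lets you turn a risky code path off without redeploying. Here is a minimal sketch; all flag names and functions are hypothetical, invented purely for illustration.

```typescript
// Minimal feature-flag sketch: flags are checked at runtime, so a buggy
// code path can be switched off without a redeploy.
// All names here are hypothetical, for illustration only.
type FlagName = "new-checkout" | "fast-search";

const flags: Record<FlagName, boolean> = {
  "new-checkout": false, // dark-launched: deployed but switched off
  "fast-search": true,
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

function checkout(cart: string[]): string {
  if (isEnabled("new-checkout")) {
    return `new checkout for ${cart.length} items`;
  }
  return `legacy checkout for ${cart.length} items`;
}

console.log(checkout(["book", "pen"])); // flag is off, so the legacy path runs
```

Real systems back the flag store with a config service so flags can flip without restarting the process, but the shape is the same: every risky path is wrapped in a check, precisely because we expect it to contain bugs.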
Even concerns that seem unrelated - like team structure, architectural patterns, or technology choices - often trace back to bug management. We adopt microservices partly to isolate failures. We choose statically typed languages to catch errors at compile time. We organize teams around services to limit the scope of changes and thus the scope of potential bugs.
This is revealing. After decades of software engineering as a discipline, with all our advances in languages, tools, and practices, we're still fundamentally in reactive mode. We're not engineering software to be correct. We're engineering systems to survive and recover from incorrectness.
Managing Around the Mess
The industry's response to software's inherent difficulty has been to develop elaborate strategies for managing around it. Since we can't reliably write correct software, we'll manage the process of writing incorrect software and gradually fixing it.
Hence:
- Agile methodologies that embrace changing requirements because we know we won't get it right the first time.
- DevOps practices that make deployment frequent and safe because we know we'll need to deploy fixes constantly.
- Site reliability engineering that treats failures as normal and builds systems to handle them.
- Observability platforms that help us understand what's breaking in production.
These aren't bad responses. They're often quite sophisticated and genuinely helpful. But notice what they all have in common: they accept that software will be buggy and unreliable, and they build processes around that assumption.
We've gotten very good at managing the mess. What we haven't done is figure out how to make less of a mess in the first place.
Consider scheduling. Why is software estimation notoriously inaccurate? The standard explanations are that requirements change, that unknown unknowns emerge, that different programmers work at different speeds. All true. But there's a deeper issue: most of the time in software development is spent debugging, and debugging time is fundamentally unpredictable.
You can estimate how long it will take to write the initial version of a feature. That's relatively straightforward work. What you can't estimate is how long it will take to make it actually work - to find and fix the bugs you introduced, to handle the edge cases you didn't consider, to resolve the conflicts with existing code you didn't know about.
Some bugs take five minutes to fix. Some take five days. Some force you to redesign entire subsystems. There's no way to know in advance. So schedules slip, not because programmers are lazy or management is incompetent, but because the core activity - making buggy software work correctly - is inherently unpredictable.
The industry response? Better estimation techniques. Monte Carlo simulations. Story points. Velocity tracking. All trying to predict the unpredictable by looking at historical data and adding buffers. We're managing around the fundamental problem: that we're bad at writing correct software in the first place.
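A Monte Carlo schedule estimate works by sampling each task's duration from a range, summing the samples many times, and reporting percentiles instead of a single number. The sketch below uses invented task data; note how the wide range on the debugging task, not the writing task, is what stretches the 90th percentile.

```typescript
// Monte Carlo schedule estimate: sample each task's duration from its
// range, sum the samples, repeat, and read off percentiles.
// Task names and ranges are invented for illustration.
type Task = { name: string; minDays: number; maxDays: number };

const tasks: Task[] = [
  { name: "write feature", minDays: 2, maxDays: 4 },
  { name: "debug feature", minDays: 1, maxDays: 10 }, // wide range: debugging is the unpredictable part
  { name: "review + deploy", minDays: 1, maxDays: 3 },
];

// One simulated project: draw a uniform sample per task and sum.
function sampleTotal(tasks: Task[]): number {
  return tasks.reduce(
    (sum, t) => sum + t.minDays + Math.random() * (t.maxDays - t.minDays),
    0
  );
}

function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(p * (sorted.length - 1))];
}

const runs = Array.from({ length: 10_000 }, () => sampleTotal(tasks));
console.log(`50th percentile: ${percentile(runs, 0.5).toFixed(1)} days`);
console.log(`90th percentile: ${percentile(runs, 0.9).toFixed(1)} days`);
```

The technique is sound as far as it goes, but it only quantifies the uncertainty - the gap between the 50th and 90th percentiles is driven almost entirely by the debugging range that nobody knows how to narrow.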
The Checklist Trap
When confronted with the difficulty of writing good software, the natural impulse is to codify what good software looks like. Hence the proliferation of best practices, design principles, and coding standards. If we can just remember all the things we're supposed to do, surely we'll write better software?
So we get lists. Long lists. Clean Code distills its wisdom into dozens of principles. SOLID principles give us five. The Pragmatic Programmer offers what, 70 tips? Or was it 90? Various sources will tell you there are 23 design patterns you must know, or 97, or 177 pieces of advice for writing better code.
Each item on these lists is probably correct. Keep functions small - yes, that helps. Don't repeat yourself - usually good advice. Make illegal states unrepresentable - a powerful idea. But here's the problem: lists don't tell you how to write software. They tell you what properties your software should have after you've written it.
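To make one of those list items concrete: "make illegal states unrepresentable" means modelling state so the contradictory combinations cannot even be constructed. A minimal sketch, with invented names, using a discriminated union for a request's lifecycle:

```typescript
// "Make illegal states unrepresentable": model a request's state as a
// discriminated union, so "loaded but also failed" simply cannot be built.
// Names are illustrative, not from any particular library.
type RequestState =
  | { kind: "loading" }
  | { kind: "loaded"; data: string }
  | { kind: "failed"; error: string };

function describe(state: RequestState): string {
  switch (state.kind) {
    case "loading": return "still waiting";
    case "loaded": return `got: ${state.data}`;
    case "failed": return `error: ${state.error}`;
  }
}

// With separate fields (isLoading, error?, data?) the compiler would accept
// contradictory combinations; here each case carries exactly the fields
// that make sense for it, and the switch is checked for exhaustiveness.
console.log(describe({ kind: "loaded", data: "42" }));
```

Even so, the example only shows the property the principle names; knowing when a union is the right model, and when it is overkill, is exactly the judgment the list cannot give you.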
This is like trying to teach someone to paint by giving them a list of properties good paintings have. "Use complementary colours. Balance composition. Create focal points. Vary brushstroke texture." All true. All useless if you don't know how to hold a brush or how to see what you're painting.
The checklist trap is that it provides the illusion of knowledge without actual capability. You can memorize all 177 pieces of advice and still not know how to design a system. Worse, when you try to apply them all simultaneously, they conflict. Keep functions small vs. keep cohesive logic together. Don't repeat yourself vs. avoid premature abstraction. Make it work vs. make it right vs. make it fast. Which principle wins depends on context, but the checklist doesn't teach you how to reason about context.
What We Actually Need
We don't need more advice. We don't need better ways to manage around the mess. We need to learn how to write software.
Not how to code - most programmers know syntax and algorithms well enough. Not how to use this framework - that's just learning another API. We need to understand how to engineer systems so they're comprehensible, modifiable, and reliable.
This knowledge exists. Experienced programmers have it. You can see it in how they approach problems: they don't mechanically apply patterns or follow checklists. They reason about their code. They think about what can change and what must stay stable. They identify the sources of complexity and isolate them. They make tradeoffs consciously rather than accidentally.
But this knowledge is largely tacit. It lives in individual programmers' heads, accumulated through years of experience. We haven't figured out how to transfer it systematically. Instead, we point junior developers at codebases and say "figure it out", we give them lists of principles to remember, and we hope that through enough trial and error they'll eventually develop good judgment.
What would it look like to actually teach this? Not to give people principles to memorise, but to teach them how to think about software engineering?
It would probably involve showing, repeatedly, how real programs can be improved through the application of a few fundamental principles. Not 177 pieces of advice, but maybe three or four core ideas that you apply over and over in different contexts. Principles that are about causality, boundaries, constraints - the deep structure of software, not the surface features.
It would involve incremental improvement. Not "rewrite everything the right way", but "here's how to make this specific piece of code better, and here's why this improvement helps". Do that enough times and you start to develop intuition for what better means.
It would involve common sense more than cleverness. Most software problems aren't solved by brilliant insights. They're solved by clear thinking about what the code is supposed to do, what it actually does, and how to close that gap.
The Incremental Path
Here's what's frustrating: improving code isn't actually that mysterious. Take any program. Look at it critically. You'll find things that are confusing, things that are fragile, things that are tangled together. Pick one. Make it better. Repeat.
The improvements compound. Each small change makes the next change easier. Code that's clear is easier to modify than code that's obscure. Code that's modular is easier to test than code that's coupled. Code that makes its assumptions explicit is easier to reason about than code that leaves them implicit.
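One of those small improvements - making an assumption explicit - can be shown in a few lines. Both functions below are invented examples: the first silently assumes its input is non-empty, the second states that assumption in its signature.

```typescript
// One incremental improvement: surface a hidden assumption.
// Both functions are invented examples for illustration.

// Before: silently assumes xs is non-empty; returns NaN for [].
function averageImplicit(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// After: the assumption is visible in the return type, so every caller
// is forced to decide what an empty input means.
function averageExplicit(xs: number[]): number | undefined {
  if (xs.length === 0) return undefined; // the assumption, now explicit
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

console.log(averageExplicit([2, 4, 6])); // 4
console.log(averageExplicit([]));        // undefined, not a silent NaN
```

The change is tiny, but it compounds: every caller of the explicit version is now easier to reason about, which makes the next change to those callers easier too.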
This is how experienced programmers work. They don't set out to write perfect code. They write code that works, then they improve it. They refactor as they go. They see something awkward and they fix it before it becomes a problem. They keep their code in a state where the next change won't be terrible.
But somehow we don't teach this as a methodology. We teach it as best practices - isolated pieces of advice about specific situations. We don't teach the underlying habit: continuously improving your code through small, deliberate changes guided by a few core principles.
Maybe because it's hard to teach. It requires judgment, which comes from experience. It requires seeing the same patterns repeatedly until you internalise them. It requires making mistakes and learning from them.
But that's what we're trying to avoid, isn't it? The years of trial and error, the accumulated mistakes, the slow development of intuition. We want a shortcut. We want rules to follow that will produce good code without requiring judgment or experience.
There is no shortcut. But there might be a more efficient path than pure trial and error. One that involves systematic thinking about software engineering rather than memorising practices. One that teaches principles over patterns, understanding over advice.
Beyond Management
The software industry has become expert at managing complexity. We have tools for tracking bugs, processes for deploying safely, methodologies for adapting to change, frameworks for organising teams. All useful. All necessary given the current state of software development.
But we've mistaken managing complexity for solving it. We've built elaborate scaffolding around the core activity - writing code - without improving the core activity itself. We're like a construction industry that's perfected project management, supply chains, and safety protocols, but still hasn't figured out how to make buildings that don't leak.
The consequence is that software development remains expensive, slow, and unpredictable. Not because we lack tools or processes, but because the fundamental activity - creating and modifying code - is still done without systematic engineering methodology. We rely on individual programmer judgment, accumulated through experience, applied inconsistently across the industry.
This is why two programmers with the same years of experience can have vastly different capabilities. They've accumulated different experiences, drawn different lessons, developed different intuitions. There's no standard body of knowledge that makes a programmer competent, just a scattered collection of practices and principles that each person assembles individually.
What we need is to step back from all the management apparatus and address the core question: how do we write software so that it's understandable, modifiable, and reliable?
Not how do we manage the process of writing software or how do we recover when software fails, but how do we write it well in the first place?
This is the problem we've been avoiding. It's harder than creating management processes or adopting new tools. It requires rethinking how we teach programming, how we structure development work, what we consider good practice and why. It requires moving past checklists and methodologies to something more fundamental: a way of thinking about software engineering that produces reliably good results.
What's Missing
We have languages, frameworks, tools, methodologies, and an entire industry of people writing software every day. What we don't have is a systematic approach to engineering software that's teachable, learnable, and reliably produces good results.
We know what good software looks like. We've written down its properties endlessly. We know many practices that help. They're catalogued in dozens of books. What we don't know, collectively as an industry, is how to think about software engineering in a way that leads to those properties and practices emerging naturally from your work rather than being bolted on afterward.
This is the gap. Not between theory and practice - we have plenty of practice. Between reactive and proactive - we're excellent at fixing problems, weak at preventing them. Between advice and methodology - we can list what to do but not teach how to think.
Until we close this gap, software development will remain what it is: expensive, unpredictable, and heavily dependent on the accumulated experience of individual programmers. We'll keep managing around the mess because we haven't learned how to make less of a mess.
The solution isn't another framework or another set of best practices. It's learning, finally, how to write software.
If you want to learn how to write software that works, visit stackshala. We compress the knowledge down to basic principles that apply irrespective of domain, technology, or language.