You should learn to code.
Yes, you! No, I don’t know who you are or what interests you, but I know you can and should learn to code. There are many obvious and not-so-obvious reasons why learning to code is one of the best intellectual investments you can make in yourself.
The most obvious reason is that coding is a high-paying skill, and being good at it pretty much guarantees you can find a good job at any of the million software companies hiring like crazy out there.
Yes, I’m sure you’ve heard many stories of developers struggling with coding interviews and getting rejected dozens of times. And sure, even some of the best coders out there can fail at a given time for a myriad of reasons, including bad luck.
But the thing no one tells you is how few of the people applying for a coding interview today can actually code. And it’s not their fault. Many have learned superficial coding practices from dubious YouTube instructors or cheap online blogs. But learning to code, in the sense I’m using the term, implies an entirely different level of understanding.
To be fair, there are also plenty of cases where the interviewer doesn’t understand what a good programmer looks like. Some coding interviews focus way too much on programming language trivia or algorithm design challenges, which are, at best, imperfect proxies for measuring good coding skills.
In any case, learning to code well is a very marketable skill. If you want a job in the tech industry, being a reasonably good programmer will get you halfway there.
But even if you don’t want a job in the software industry, there is a very good reason why learning to code is one of the best intellectual investments anyone can make today. The reason is that coding teaches you to think in a completely new way, making you extremely good at solving problems in all domains, even without a computer.
But to explain why, let me first tell you what coding is really about.
What is coding about
Coding is part science, part engineering, and part art. It is a science because it requires understanding and applying well-established, formally defined principles and techniques to solve complex problems. It is engineering because it requires the ability to design and build systems with complicated trade-offs. It is art because it requires—and boosts—creativity and imagination to create beautiful solutions to challenging problems.
Beautiful, right? But, in practice, what the hell does this mean? At a surface level, coding seems similar to any other technical skill, like, I don’t know, using advanced industrial machinery and tools. In a sense, coding is about using computers to do stuff, right? Why isn’t this just the same as using, say, a hammer and some nails to build a house?
For starters, yes, the computer is a kind of tool, but it is a completely different kind of tool. Most tools extend our physical capabilities or help us overcome our physical limitations. They make us stronger, faster, more precise.
The computer, on the other hand, is a tool to extend our minds. Computers help our brains do stuff they cannot do easily by themselves. They help us overcome our cognitive limitations.
But programming is far less about computers than you think. The great Edsger Dijkstra is often quoted claiming that computer science is as much about computers as astronomy is about telescopes. Computers are just a tool—an incredibly powerful tool—but the important part is not the tool you use, but what you do with it.
Programmers use computers to solve problems.
This is the deeper reason why learning to code is a different type of skill—a more general skill than it initially seems. Coding is general-purpose problem-solving. In fact, any problem that can be solved by following a well-defined procedure can, in principle, be solved with a computer—this is, roughly, the Church-Turing thesis. Learning to code is, ultimately, learning to solve problems.
Why? Coding relies on one fundamental ability called abstraction, which is the process of taking a problem, situation, or concept in general and stripping it down to its core defining features, forgetting everything that is irrelevant or useless for the concrete purpose you have at hand.
And the thing is, abstraction is the most useful cognitive ability in the modern world. It is at the core of problem-solving. Any sufficiently complex problem requires you to think at several levels of abstraction and switch between them constantly. In fact, the ability to think at the right level of abstraction at any given point in the process of solving a concrete problem is probably the most critical ability to master to become a really good programmer.
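To make this a bit more concrete, here is a minimal sketch in Python (the classes and fields are hypothetical, purely for illustration). The same real-world car gets two completely different abstractions, each keeping only the features that matter for the problem at hand:

```python
from dataclasses import dataclass

# Hypothetical example: one real-world car, two different abstractions.

@dataclass
class CarForNavigation:
    # A route planner only cares about where the car is and how fast it goes.
    position: tuple
    speed_kmh: float

@dataclass
class CarForMechanic:
    # A repair shop ignores position entirely and cares about the machinery.
    engine_model: str
    mileage_km: int
    last_service_year: int
```

Neither class is the “true” car; each strips away everything irrelevant to its purpose, which is exactly what abstraction means.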
Let me draw a cheap analogy here. Consider driving a car from home to work. Many things happen inside your car, including a bunch of physical and chemical processes that are fundamental for the car to work correctly. But thinking about the combustion process going on inside your engine is useless for reaching your destination—if your car is working correctly, at least. Actually, worrying about your engine’s chemical reactions while driving arguably makes you a worse driver, not a better one.
Instead, when driving in the city, you often need to deal with three different levels of abstraction. The lowest level is just making the car move: shifting gears in due time, pressing the pedals, etc. The middle level is about not crashing: avoiding oncoming traffic, staying in your lane, etc. The highest level is about navigation: picking the right path to reach your destination most efficiently.
When people learn to drive, they often learn to think at the lowest level of abstraction first. They struggle to make the car move. But once they master this level, it becomes almost automatic, and they can start to think in terms of what other cars are doing and how to negotiate the road. And finally, they are able to navigate around town seemingly without effort, while casually talking about, I don’t know, programming maybe.
Similarly, in computer programming, you will often deal with three simultaneous levels of abstraction.
First, we have the code level. At this level, all you care about is expressing the solution to a problem in a language that a computer can understand. Computers are very dumb machines, you see. For all their power, you need to talk to them in very, very precise terms using a programming language. This may seem daunting at first, because it requires a very strict use of syntactic rules, but almost everyone can learn to write and read computer code, just like almost everyone can learn to accelerate, brake, and turn. This is the easiest of the three tasks.
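As a tiny illustration (my example, not a prescribed exercise), here is what the code level looks like in Python. The task is trivial; the point is the precision the machine demands: every name, colon, and indentation matters:

```python
def average(grades):
    """Return the arithmetic mean of a list of numbers."""
    total = 0
    for grade in grades:        # visit every grade, one by one
        total = total + grade   # accumulate the running sum
    return total / len(grades)  # divide the sum by the count

print(average([4.0, 3.5, 5.0]))  # prints 4.166666666666667
```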
The second level is the algorithm level. At this level, you care about actually solving a problem by designing what is called an algorithm. In short, an algorithm is a very precise set of instructions that is guaranteed to provide a solution for a given problem. Coming up with the best algorithm for a problem is a pretty advanced skill, one that is almost always left to advanced, college-level programming courses. But fear not: it is a skill that can be mastered with enough practice.
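For a taste of the algorithm level, here is a classic example (binary search, a textbook staple rather than anything specific to this post). It is a precise set of instructions guaranteed to find an item in a sorted list, discarding half of the remaining candidates at every step:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        middle = (low + high) // 2
        if items[middle] == target:
            return middle
        elif items[middle] < target:
            low = middle + 1    # target can only be in the upper half
        else:
            high = middle - 1   # target can only be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # prints 4
```

Notice the difference from the code level: the hard part here is not the syntax, but convincing yourself that the instructions are correct and that the loop always terminates.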
The third level is the system level. At this level, you think about the design of a complex system made out of multiple interacting components, often hundreds of tiny algorithms working together to solve a big problem. Also, at this level, you have to think about the end user of your code, either a human on the other side of the display or a computer on the other side of the planet. Systems thinking is probably the hardest skill in this combo, but it can also be mastered in due time, and it is tremendously useful even outside computer programming.
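Here is a toy sketch of what system-level thinking looks like (all the names are hypothetical). A “big problem”, producing a grade report, is solved by wiring together small components, each with a single job and a clear interface:

```python
def load_grades(raw_lines):
    """Parsing component: turn raw text lines into (name, grade) pairs."""
    records = []
    for line in raw_lines:
        name, grade = line.split(",")
        records.append((name.strip(), float(grade)))
    return records

def compute_stats(records):
    """Analysis component: reduce the records to a few summary numbers."""
    grades = [grade for _, grade in records]
    return {"count": len(grades), "mean": sum(grades) / len(grades)}

def render_report(stats):
    """Presentation component: format the stats for the human end user."""
    return f"{stats['count']} students, average grade {stats['mean']:.2f}"

raw = ["Ada, 5.0", "Grace, 4.5", "Alan, 4.0"]
print(render_report(compute_stats(load_grades(raw))))
# prints: 3 students, average grade 4.50
```

At this level, the interesting decisions are not inside any single function, but in how responsibilities are split and how the pieces fit together.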
In larger organizations, people tend to specialize in roles that focus mostly on one level of abstraction. For example, software architects might spend most of their time thinking about the system as a whole and far less about low-level details. Still, the ability to move through abstraction layers is there.
So, learning to code will teach you to think at several levels of abstraction simultaneously: to decompose problems into parts, solve each in the best possible way, and then make those parts work together to become a solution to the original problem. It will teach you to think rigorously about why some potential solution doesn’t actually work and to detect subtle reasoning flaws in code—and, by extension, in other people’s behaviours as well as your own.
In short, coding makes you a better thinker. And thinking better is arguably the best recipe for success in general life, all other things being equal.
Why learning to code (well) is hard
But if learning to code is such a great cognitive investment and automatically makes everyone better at solving real-life problems, why are so many computer programmers out there not… being great at life?
Just like learning to drive, most people will master the lowest level first. They will learn how to express known solutions to known problems in computer lingo. They will learn to write working code.
But then, most people stay here. They can make the car move, but they can’t navigate to their destination, nor can they plan the best route and avoid the morning traffic. And the reason they don’t advance more is, often, because they don’t even know there is something else to be learned.
You see, most resources to learn how to program out there will actually just teach you how to write code. They will not teach you how to think about problems in the way good programmers do, so you can come up with clever solutions to these problems. There are books on this, too, of course. But most are college textbooks or advanced technical books that aren’t marketed at or even written for the first-time learner.
On the other hand, you will find a lot of books on how to do software engineering, and how to design systems that are reliable, maintainable, and a bunch of other fancy adjectives. But these are most often, I kid you not, written as if for people who already know this stuff. Much like college-level math books—where you often need to already know the math just to be able to read them—the majority of books on software engineering are written by software engineers for software engineers, talking among themselves and repeating the same mantras they already agree on.
Don’t get me wrong, there are good resources out there. But there are vastly, vastly more bad resources that won’t get you very far down the road that really matters: learning to think like a computer programmer. We already said this requires thinking at different levels of abstraction, and most learning resources never touch more than one of these levels.
But beyond the lack of good holistic learning resources, the deeper reason why learning to code is hard is precisely because it requires that you rewire your brain to think in a different, novel, more abstract and rigorous way. And your brain doesn’t want to do that.
Your brain evolved to keep you alive—actually, to transmit your genes—and for that, it needs to focus on mastering only two skills: getting food and getting laid. All else is secondary.
So, you have to trick your brain into believing that this is a critical skill to acquire. And that takes a lot of time and a lot of patience. No one learns to code well in 24 days or two months or even a full college semester. And, more importantly, no one learns to code by watching others code. That is a small part of learning, but you have to do the chores.
Coding is much more an acquired skill than learned knowledge. There is some knowledge involved, for sure. There are algorithms you can memorize. There is syntax and semantics and rules. There are design patterns and reusable ideas. But more than all of that, coding is the process of taking a complex problem, breaking it down into manageable parts, and then explaining, in the most precise language possible, how to solve each of those parts and how to put them back together.
That is, coding is about doing stuff much more than it is about knowing stuff. And there is only one way to learn how to do something: practice, lots of practice. And most people who think they can code out there simply haven’t put enough time into it.
But what about AI?
Yes, the elephant in the room! Will AI make coding skills irrelevant? Aren’t programmers coding themselves out of a job?
Good that you ask. I’ve written many times before about why the current language modelling paradigm is, on the one hand, incapable of true reasoning and, on the other hand, unlikely to replace programmers anytime soon. So let me just repeat a few arguments here for the sake of completeness.
First, code generators are indeed powerful tools that any professional programmer would do well to learn how to integrate effectively into their workflow. They can automate some of the most boring tasks and help you get unstuck from time to time.
However, code generators, at least those based on large language models, are inherently unreliable. So anyone using them to solve a given coding problem without a deep understanding of the problem and the proposed solution is shooting themselves in the foot. AI-generated code can and will have errors, often subtle ones, that you need to be sufficiently experienced to catch.
On the other hand, language models struggle to deal with context. They often have a very limited context size compared to a reasonably large software project. But even if context size wasn’t a limitation, LLMs are known to arbitrarily ignore relevant parts of context and overfocus on irrelevant parts. They fail to grasp the big picture when things become complex enough.
In the end, this means that you can trust LLMs to generate small chunks of code that you can easily verify and understand, and that are relatively atomic and independent of the rest of the system.
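In practice, that means treating AI-generated code like code from an unfamiliar contributor. Here is a hypothetical sketch of that workflow: a small, self-contained helper (imagine an LLM wrote it) plus the quick checks you would run to convince yourself it actually works:

```python
# Suppose an LLM produced this small, atomic helper:
def slugify(title):
    """Turn a title into a URL-friendly slug (lowercase, hyphen-separated)."""
    cleaned = "".join(c if c.isalnum() or c == " " else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

# Because the chunk is small and independent, verifying it takes seconds:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Why   learn to code?  ") == "why-learn-to-code"
assert slugify("C++ in 2024") == "c-in-2024"
```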
They can definitely help at the code level—e.g., so you don’t need to remember or even search for how that specific method or endpoint is invoked. They can help a little bit at the algorithm level, especially for applying relatively well-known strategies that you might have forgotten or never heard of. But they are almost useless at the system level.
The reason is that, the higher the level of abstraction, the more context matters. When you’re thinking about the whole architecture of your application, you need to understand how it all fits together, and even tiny details can be decisive for system-wide design choices.
So far, LLMs are incapable of such high-level reasoning. They can provide abstract, standard advice on software architecture, but they cannot reason that deeply about your specific application context and propose truly useful, novel, and accurate solutions for system-level design. And it seems we need a paradigm shift to overcome this limitation.
Thus, the better you are at coding—the better you are at thinking at the highest levels of abstraction—the more you can get out of an AI code generator. LLMs don’t make bad programmers better. In fact, they make them worse.
How should you learn to code
Finally, let me address the question of how. What follows is a very personal opinion—based on years of experience, but an opinion nonetheless.
There are two schools of thought about how to learn to code. The first one is the foundations-first school. In this school, you start by learning the syntax of a programming language and then learn how to write successively more complex programs in that language. This is the more traditional bottom-up approach, and it is often how college-level programming courses in engineering and computer science majors are designed.
The upside of this approach is that you get very strong foundations. After a full college year learning this way, you’re ready to tackle some reasonably complex coding problems. The downside is that it takes a long time before you can do anything exciting that you can showcase. You spend most of your time solving abstract problems rather than building useful apps.
The second approach is championed by the applications-first school of thought. It is the complete opposite. You are thrown into a fully working application—often a rather simple one, but still far more complex than a single algorithm—and you get a high-level explanation of how the whole thing works, as well as some deep dives into the important parts. This top-down approach is most common in online tutorials, bootcamps, and videos.
The upside is, of course, the pragmatism. You learn something immediately useful that you can replicate and maybe tweak for your own use case. And you get to see a lot of (hopefully) well-written code. However, if you only learn like this, you can end up with lots of disconnected ideas and no sense of the big picture. And more importantly, you don’t get a chance to develop the skill to think on your own and come up with your own ideas.
It probably won’t sound too bold or innovative to claim that I think the right way is a combination of these two. Of course it is! But the devil is in the details.
Good programming pedagogy is more than just doing algorithms one day and applications another. It’s not just a heterogeneous mix. It requires a thoughtful merge of these two approaches so that you are doing both things in unison: thinking at the high level and coding at the low level simultaneously.
In my programming classes, I try to do precisely this: I create a fully working application—often a small demo, but something that has a concrete purpose and is designed for a real, human user.
We first spend some time thinking at a very high level about how such a system could be designed and discussing possible architectures. Then I crack open the code and show the parts that involve new content: maybe it’s a new instruction, a design pattern, or an algorithm.
At this point, I go back and forth between showing actual code and working on abstract ideas on the blackboard. This is the time to think about why the code works, and to generalize these concrete ideas into more abstract patterns that can be learned.
Finally, I challenge the students to make changes to the code. Some changes are superficial and target only the code level, so they get to practice the new syntax. Other exercises involve adding functionality or modifying methods to do extra stuff. These require the ability to think at a higher level of abstraction and understand what’s going on under the hood.
This is the best way I’ve found to get students to think about their code at several levels of abstraction simultaneously. If you’re interested in seeing this process in action, let me know in the comments. If there is enough interest, I may come back with some beginner-level tutorials using this approach.