Redefining Failure
What is failure?
I’ve always been pretty “good” at school. This trait is common among Turing students: we’re used to working hard, and we’ve enjoyed the correlation between hard work and favorable results. That correlation generally holds for most academic pursuits: study like crazy, get an A.
When I started at Turing, I assumed things would be the same. And when I got my first project, I applied the same mentality: I worked really, really hard. The project was also really, really hard. But I worked my butt off.
I failed the first project.
The project assignment was to build an Enigma Machine. My code encrypted messages, decrypted them, and even (mostly) cracked them. It was well-organized and followed SRP (the Single Responsibility Principle). I wasn’t super sure what it meant for the code to be well-tested, so I gave that part my best shot.
The test coverage sucked. And so, I failed the first project.
Later, I would come to understand the vital role that failure plays in learning how to program. I can’t remember much about my thought process during that first project, but I do know how much time I’ve spent thinking about testing since then. On subsequent projects, testing was consistently my highest-scoring category. And now, I honestly can’t imagine implementing a feature in Rails without first writing a really solid integration test to guide me.
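To make that concrete, here’s a minimal sketch of the kind of integration test I’m talking about, written with RSpec and Capybara. The app, the new_post_path route, and the form labels are hypothetical stand-ins; the point is that the test describes the feature from the user’s perspective before any of it exists.

```ruby
# spec/features/user_creates_post_spec.rb
#
# A minimal sketch of an integration (feature) test using RSpec and
# Capybara. The routes, labels, and page content are hypothetical.
require "rails_helper"

RSpec.feature "User creates a post" do
  scenario "with a title and a body" do
    # Drive the app the way a user would: visit the form, fill it in, submit.
    visit new_post_path

    fill_in "Title", with: "Redefining Failure"
    fill_in "Body",  with: "Failure is where the learning happens."
    click_on "Create Post"

    # The feature isn't "done" until the user can actually see the result.
    expect(page).to have_content("Redefining Failure")
  end
end
```

Writing that test first gives me a failing target to work toward, and every step of the implementation exists to turn that failure green.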
Failures are the most important teaching tools in programming.
You’ll learn far more from the failures, and the subsequent struggle to learn, than from randomly stumbling upon the right answer. You’ll dive deeper into the subject matter and build a more solid understanding of how a particular piece works. And most importantly, you’ll learn that there’s no such thing as an “A” grade in programming, or a finish line. Learning to program is an ongoing endeavor, one in which there are bound to be mistakes and hiccups. The process is never-ending, and that’s what makes it so rewarding.