Nearly every time the word ‘metric’ comes up in software development, it is to take another dig at it. This post is not meant to rant about the latest fad for measuring code quality (wtfs/minute), monitoring your developers (ROI), or counting your sushi.
Instead, I would like to share how complicated it is to measure a software developer’s quality, and how I struggle to get better in some areas.
The wall
I struggle because either I or this profession lack a standard ladder of merit to climb. In some ways it looks more like a maze. Surely other developers trying to improve their skills can relate to the feeling of studying some exotic topic until you hit the wall and think: “How is this really making me a better software developer?”
At that point, you either retrace your steps and rethink what you want to learn next, or you break through the wall and connect other paths to it. The decision is up to you, but there are a number of factors that can help you lean towards smashing the wall. I can’t make that call for you, but here are some of the questions I ask myself before continuing:
- Are there plenty of positive reviews of the topic?
- What do my friends say about it?
- Is this something fundamental to Computer Science or some applied technology?
- Will this broaden my knowledge in unexpected ways?
Again, these questions are personal, and more often than not the answers are blurry right after you have hit the wall. However, I do think it’s a good exercise to outline the benefits of devoting a big chunk of your free time to learning something.
These questions help me decide whether the wall is worth breaking. But how do I know how thick it is, and when I will start to see paths cross?
Skill acquisition measurement
There are not many models that try to measure how far along we are with a certain skill. The Dreyfus model of skill acquisition and the four stages of competence are two possible ways of roughly figuring it out. I see at least two problems with the way these models can be applied to software development.
First off, software development is a vast field. It covers everything from splay trees to linear bounded automata to device driver programming. The point is, it’s quite difficult to measure ‘raw software developer quality’; it’s not so difficult to measure individual skills.
Second, measuring a skill requires an expert in that skill. I would venture to say most people exploring a field do not have access to experts who can tell them how much they have learned. Students lose hope and walk away without proper knowledge of the topic, mostly having wasted a lot of time. I acknowledge, and am grateful, that universities and MOOCs help fix this issue, but I’d still love to find a way for self-learners to mimic that experience at home.
According to the Dreyfus model of skill acquisition, experts in a topic make up 1–5% of the population: they write the books, push the research boundaries, and have developed great intuition. I believe it is this intuition for solving problems that makes the rest of the world see them as geniuses.
In the end, no one is either an expert or a novice at all things software. Systematizing human knowledge into an array of well-defined boxes is hard. Hell, even knowing what ‘quantity’ to put in each box is really hard. Is it even worth worrying about this?
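To make the ‘array of boxes’ idea concrete, here is a minimal sketch in Python of what such a self-assessment might look like. The five stage names come from the Dreyfus model; the skills and the stages I assign to them are entirely made up for illustration, which is exactly the problem: without an expert looking over your shoulder, the numbers are guesses.

```python
from enum import IntEnum

# The five stages named by the Dreyfus model of skill acquisition.
class Stage(IntEnum):
    NOVICE = 1
    ADVANCED_BEGINNER = 2
    COMPETENT = 3
    PROFICIENT = 4
    EXPERT = 5

# A hypothetical self-assessment: the skills and stages below are my own
# guesses, not measurements -- the 'quantity' in each box is the hard part.
skills = {
    "scheme / SICP exercises": Stage.ADVANCED_BEGINNER,
    "ruby internals (MRI)": Stage.NOVICE,
    "clojure": Stage.ADVANCED_BEGINNER,
    "data structures": Stage.COMPETENT,
}

for skill, stage in skills.items():
    print(f"{skill:30} {stage.name.replace('_', ' ').title()}")
```

Filling in a table like this takes a minute; trusting it is another matter, which is why the questions below still feel more useful to me than any single grid of numbers.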
To put things in context, I am spending a lot of free time digging through these walls:
- SICP
- Contributions to MRI
- Conway’s game of life
- Clojure
Before you head off to more interesting places, two questions:
- How do you know whether a topic is worth exploring or not?
- How do you know when you should stop digging and move on?
Some useful links:
Pragmatic Thinking and Learning