I often talk to non-programmers who think I must have to be extraordinarily clever to be a software developer, and that they could never do it. In my experience, people often underestimate their own ability. I thought I’d write down some heuristics to help you determine if you have the brains it takes to be a programmer.
Linus Torvalds is infamous for periodically tearing into kernel hackers who submit patches he doesn’t like.
Dave Eisenberg took it upon himself to rewrite one of Linus’ rants. Without the invective it’s a fraction of its original length. It still gets the same information across. Arguably, it conveys more information, by putting the essential message front and center instead of burying it in a lot of speculations about the recipient’s intelligence.
There’s no question that writing a solid rant can be cathartic. But writing doesn’t imply sending. Dale Carnegie tells a story about Abraham Lincoln. Lincoln wrote an infuriated letter to General Meade in 1863, castigating him for not pressing the advantage after the battle of Gettysburg. The letter was discovered in Lincoln’s effects, written but un-sent.
In the classic How to Win Friends and Influence People, Carnegie observes that Lincoln, like most of the renowned leaders he studied, made a conscious choice to avoid giving criticism. Instead, he focused on praising people for what they did well.
Some view Linus’ angry protectiveness of the kernel code as an essential element of his success. But this view ignores the many large, successful open-source projects with leaders who don’t regularly insult and belittle contributors. When was the last time you saw a Hacker News item about an Apache contributor being treated like a dog who piddled on the rug?
And as FreeBSD hacker Randi Harper has repeatedly pointed out, while it’s easy to find stories of people who have exited Open Source development due to a toxic environment, that doesn’t really represent the true extent of the damage. It’s impossible to know how many potential contributors have been silently driven off after observing how other developers are treated.
There’s a line of thinking that says this is just the acid test of whether someone “belongs”. In other words, “if you can’t take the heat, get out of the kitchen”. But when exactly did we decide that Open Source development was supposed to be like an episode of a Gordon Ramsay show? I don’t remember voting on that.
For that matter, even Mr. Ramsay himself is reportedly far more supportive off-camera than most people would imagine. Like many successful people, he has a reputation for hiring talented people and then getting out of their way. His TV fame may be built on abrasiveness, but that’s not the personality trait his restaurant empire rests on.
Being an abusive jackass isn’t an essential piece of the leadership puzzle. In fact, there is research to suggest that selecting for “alpha asshole” behavior is actively damaging organizations.
All of this reminds me of a conversation I used to hear in various forms as I was growing up as a home-schooled kid:
Concerned Adult: But how will the kids learn to deal with normal, real-world challenges like bullying?
Homeschooling Parent: …you consider bullying “normal”?
As an adult, I feel like my homeschooled background equipped me to opt-out of, or route around, bullying behavior wherever I encountered it. I assumed I was worthy of a certain level of basic human respect, and if I didn’t find it, I moved on. This ability has been a kind of super-power in my career.
(Note: this “super-power” was also predicated on the pre-existing privilege of having a lot of options. I’ll probably write more about this another time.)
If you’re just getting started in this community, know this: abuse in open source, or in software generally, is not a given. It’s not a rite of passage. It’s something we can choose to accept, participate in, and enable. Or not.
Today I had reason to verify the exact semantics of ActiveSupport’s Object#try extension for an upcoming RubyTapas episode. #try is usually used to paper over nil values. Unfortunately, #try does more than this, and as a result it can easily hide defects.
Consider: we have a variable with a number in it. We want to round the number down.
num = 23.1
num.floor # => 23
Something that sometimes happens in our applications is that we take in some numeric information in string form, and forget to convert it to a number before using it as one. This is a bug that we need to be aware of, and Ruby is more than happy to let us know with an exception.
num = "23.1"
num.floor
# ~> NoMethodError
# ~> undefined method `floor' for "23.1":String
# ~>
# ~> xmptmp-in49044q82.rb:2:in `<main>'
Now let’s say we know that the value of num is sometimes nil. Rather than removing this nil incursion from our code, we decide to turn the nil case into a harmless no-op with #try:

num = nil
num.try(:floor) # => nil
Remember the previous example, where we got a string and forgot to turn it into a number, and Ruby told us all about it? Guess what happens now:
num = "23.1"
num.try(:floor) # => nil
Congratulations: we now have a silent defect. Good luck finding it in the absence of a helpful exception.

#try doesn’t actually care about nils. It’s only concerned with whether an object responds to the given message. This is consistent with the method’s name. In my experience, though, it is not consistent with what most programmers actually want when they use it.
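To make that concrete, here is a rough sketch of the respond_to?-based behavior described above. This is a hypothetical illustration, not ActiveSupport’s actual source, and try_sketch is a made-up name:

```ruby
# Hypothetical sketch of #try's respond_to?-based semantics --
# NOT ActiveSupport's real implementation.
class Object
  def try_sketch(method_name, *args, &block)
    # Returns nil for ANY receiver that doesn't respond to the message,
    # not just for nil -- which is exactly what hides type errors.
    return nil unless respond_to?(method_name)
    public_send(method_name, *args, &block)
  end
end

23.1.try_sketch(:floor)   # => 23
nil.try_sketch(:floor)    # => nil
"23.1".try_sketch(:floor) # => nil (the silent defect again)
```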
Incidentally, Ruby’s recently-added “safe navigation” operator does not share this drawback. It is strictly nil-sensitive.
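The difference is easy to demonstrate in plain Ruby, no Rails required: &. short-circuits only when the receiver is literally nil, so the string bug from earlier still surfaces as an exception.

```ruby
num = nil
num&.floor # => nil -- a nil receiver short-circuits

num = "23.1"
begin
  num&.floor # &. is nil-sensitive, not respond_to?-sensitive...
rescue NoMethodError => e
  puts e.message # ...so the type error is still loudly reported
end
```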
(I’ll be covering this issue and a whole array of related techniques in an upcoming RubyTapas miniseries.)
UPDATE: A few people have pointed out that at least as of Rails 4.0, there is now a #try! variant with the safer semantics. At the time of writing, this version has yet to make an appearance in the official ActiveSupport Rails Guide.
If you’re using Rails 4 or later, this new version should probably be your default choice.
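Outside of Rails, or just to make the contrast concrete, the stricter nil-only semantics are easy to approximate. (try_strict here is a hypothetical stand-in for illustration, not the real #try! implementation.)

```ruby
# Hypothetical nil-only variant: only nil is papered over;
# every other receiver dispatches normally and can still raise.
class Object
  def try_strict(method_name, *args, &block)
    public_send(method_name, *args, &block)
  end
end

class NilClass
  def try_strict(*)
    nil
  end
end

23.1.try_strict(:floor) # => 23
nil.try_strict(:floor)  # => nil
begin
  "23.1".try_strict(:floor)
rescue NoMethodError => e
  puts e.message # the bug stays visible
end
```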
This is my 3-year-old daughter Ylva, trying to make one of our computers do what she wants.
Don’t get me wrong: Ylva is perfectly adept at using computers. This one, however, is frustratingly unresponsive. Nothing happens when you try to interact directly with the pictures on the screen. Instead, you have to translate your desires into indirect manipulation of a mouse or touchpad and keyboard. It’s not intuitive, and it’s taking her some time to adjust.
It’s as if I’d told her to draw a picture, but insisted she do it with oven mitts on.
…And so, as we can see, with sufficiently powerful type constraints, the implementation practically writes itself!
I know, right??
…although didn’t you just write the whole function inside the type declaration?
THE IMPLEMENTATION PRACTICALLY WRITES ITSELF!!