Ruby is defined by terrible tools

Look, let’s face it: Ruby tools are terrible.

If you’ve worked in any Lisp you know what I’m talking about. If you’ve worked in Java or C# anytime recently you know what I’m talking about. If you’ve worked in Haskell you know what I’m talking about.

“But Avdi! Lisp is homoiconic, and those other languages are statically typed! That’s an unfair comparison!”

Well, if you’ve worked in Smalltalk you definitely know what I’m talking about.

Recently Sarah Mei tweeted:

To which Steve Klabnik responded with that wonderful old Smalltalk saw: “In Smalltalk, everything happens somewhere else.”

I’ve only spent a tiny amount of time with Smalltalk, but the fact that “everything happens somewhere else” hasn’t really bothered me there. It’s ludicrously easy to navigate code in a Smalltalk environment. Half the time you develop by tracing execution until you see where the code is doing the wrong thing, rewriting it in place to do the right thing, and then hitting “continue”. There is no “OK, now let’s open the file where that was defined” step.

(“ZOMG that’s not TDD!” – relax, the “wrong thing” might well be “making the tests go red”).

All this, and so much more, is possible because Smalltalk was evolved hand-in-hand with its tools; more, it is its own tools. From that point of view, Ruby is Smalltalk but with one entire lobe of its brain lobotomized.

Ruby has terrible tooling. We can write new code at runtime, but we can’t hit a button to see what the generated code looks like, the way we can with Lisp macro expansions. Tracing and debugging are primitive and unreliable; I’ve learned to avoid Ruby debuggers because it’s too easy for funky code to produce misleading debugger output, or just crash the debugger outright. We have duck-typing, but no reliable way to get a comprehensive list of every object that might be able to quack. We have rudimentary warnings, but no way to granularly control which ones we see. Most of our editor-integrated code-navigation tooling boils down to glorified grepping. We wait for our CI servers to tell us about potential code smells instead of our editors pointing them out to us as we type. We automatically reach for Google when we want to look up library documentation, rather than use any of the local tools built for that purpose. And don’t even talk to me about debugging concurrent code.
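
To make the runtime-code-generation point concrete, here’s a minimal sketch (the class and attribute names are invented for illustration). After it runs, the reader methods exist, but the source that produced them was a string that has already been evaluated; there is no “macroexpand” view of it, and the only breadcrumb back is whatever file/line info we remembered to pass by hand.

```ruby
# A common Ruby metaprogramming idiom: generating methods at runtime
# from strings. Nothing here is special-cased by any tool; it's just
# eval with extra steps.
class Model
  def self.attribute(*names)
    names.each do |name|
      class_eval <<~RUBY, __FILE__, __LINE__ + 1
        def #{name}
          @attributes[:#{name}]
        end
      RUBY
    end
  end

  def initialize(attributes = {})
    @attributes = attributes
  end
end

class User < Model
  attribute :name, :email
end

user = User.new(name: "Avdi", email: "avdi@example.com")
puts user.name  # => Avdi

p User.instance_method(:name).source_location
# Points back at the heredoc above -- and only because we carefully
# passed __FILE__ and __LINE__ ourselves. There is no command that
# shows the expanded definitions the way a Lisp macroexpand does.
```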

Strictly speaking, the information available to us at runtime should always be a true superset of the information statically available at compile time. And yet there is no way to “dry-run” a method to get a list of all possible objects it might communicate with, let alone the kind of detailed picture that any modern statically-typed language can give us. And anything we do discover through dynamic runtime analysis then has to be mapped back to static code on disk, which has to be loaded again and configured again and run again before we can see if our changes were the right ones.
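
As a sketch of what dynamic analysis actually buys us (class and method names invented for illustration): the nearest thing to a “dry run” is tracing one real run with TracePoint, which records only the Ruby-level calls along the single path this particular execution happens to take.

```ruby
# A toy "who does this method talk to?" tracer. It sees only what
# actually executes, and only Ruby-defined methods (TracePoint's :call
# event doesn't fire for C-implemented methods like `each` or `puts`).
class Line
  def initialize(description)
    @description = description
  end

  def description
    @description
  end
end

def report(lines)
  lines.each { |line| puts line.description }
end

calls = []
trace = TracePoint.new(:call) do |tp|
  calls << [tp.defined_class, tp.method_id]
end
trace.enable { report([Line.new("widget")]) }

p calls.uniq
# e.g. [[Line, :initialize], [Object, :report], [Line, :description]]
# -- the collaborators on THIS run, not every possible duck.
```

And even then, everything this discovers still has to be mapped back, by hand, onto the code on disk.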

And yet.

And yet, in the early 2000s, long before Rails drove throngs of people to learn Ruby, very smart people who knew about Lisp and Haskell and good tooling fell in love with Ruby. People like Dave Thomas and Martin Fowler and Jim Weirich.

And I did too, for reasons that still aren’t wholly clear to me.

Why do we love this language which is, in so many ways, the worst of many worlds? A Smalltalk without an image or browser; a Lisp without code-as-data; a Java without the static comprehensibility that automates refactorings; a Perl without… no, never mind, it’s still a better Perl than Perl.

It’s not because of corporate hype, because there was none. There’s some kind of weird worse-is-better thing going on here.

It reminds me of UNIX vs. Windows. Windows had COM and ActiveX objects which, whatever else they were, were wonderfully self-describing. You ever develop in VBA? You could poke around through the APIs and object models exposed by every single DLL on the system, core or third-party alike, with full method signatures and hyperlinks to documentation. You could learn how to use Excel programmatically without ever leaving the IDE, just by using the API explorer.

And then there was UNIX, with its horrible mess of different config file formats, and inconsistent command-line interfaces with dodgy documentation—if you knew where to find it. Or maybe you just needed to write the correct magic number to a magic file. And yet somehow, it still felt better. And we glued things together and we got shit done.

There’s something subtle going on here, and I still don’t fully grok what it is. A sweet spot that has no business being as sweet as it is. But I think we ignore it at our peril.

That said: it’s also a mistake to think that the way we do things in Ruby, or the way to do things well in Ruby, is a universal best practice. An enormous amount of what we consider “best practices” or “lessons learned” or whatever is really just a set of coping patterns for a language with horrendously bad tools.

Or, flip it on its head: the way people do things in other languages is only possible because they have the crutches of tooling to lean on. You can look at it either way.

Bottom line: the Ruby ecosystem was shaped and molded by awful tooling. It still is. You can reject the language, embrace it anyway, or try to fix it. But to ignore this truth is to lose a vital element of context.

EDIT: Because I’ve been too close to the problem for too long, I left something implicit which I should have made explicit. Ruby’s lack of decent tooling has nothing to do with laziness or distaste for tools on the part of its community; the language itself makes writing advanced tooling extraordinarily hard. The choice to be inspired by Lisp but to eschew homoiconicity makes it difficult to edit code semantically instead of syntactically. The choice to be entirely dynamic makes it impossible to do more than basic and/or heuristic static analysis. The choice to be like Smalltalk, except with keyword control flow, makes it difficult to write certain dynamic-analysis tools (e.g. you can’t write a probe object which is treated as “falsey”). The choice to drop the “image” aspect of Smalltalk means that there is always a gulf between the dynamic behavior of a program and the static code that creates that behavior, a gulf that is difficult to bridge in a reliable, automated way. The choice to let anything be [re]defined anywhere means that, unlike in Java, tools can’t trivially find the on-disk definition of a given class or method. Writing nice tools for Ruby sucks, simply because of the trade-offs that went into its design.
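
To see the probe-object limitation in miniature, here’s a small sketch (the Probe class is invented for illustration). A ghost object can intercept almost any message via method_missing, but `if` is keyword syntax, not a message send, so nothing we write can make the probe take the false branch:

```ruby
# A probe that swallows and logs every message sent to it.
class Probe < BasicObject
  def method_missing(name, *args, &block)
    ::Kernel.puts "probed: #{name}"
    self
  end
end

probe = Probe.new

probe.valid?        # intercepted: prints "probed: valid?"

if probe.valid?     # but `if` sends no message; every object except
  puts "truthy"     # nil and false is truthy, so this branch ALWAYS runs
end
```

In Smalltalk, ifTrue:/ifFalse: are themselves messages the probe could intercept; in Ruby there is simply no hook.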

EDIT 2: And yes, if you throw money and humanpower at the problem it might improve slightly. But again, you’re working against the language. See the RubyMine IDE, which I am increasingly using for Ruby work: after many years of effort, it has analysis engines and “smart” refactoring tools unavailable anywhere else. But for anyone familiar with tools for C#, Java, Lisp, or Smalltalk, after all that time and effort it still feels like stepping back a couple of decades.

EDIT 3: If you were watching carefully, you may have noticed that I’ve taken two positions without saying one is “right”. I hesitate to tie this up with a neat red ribbon, because the whole point is to make you think about the forces that shape language cultures.

But if you want a suggestion of what to think on further: In the year 2000, nobody programmed in “C++”. They programmed in “Visual C++”. Tools were synonymous with languages. If you told a group they were going to be doing a project in Perl, they’d ask you where to find the install disks for Visual Perl 1998.

And then The Pragmatic Programmer came along, and said: Stop! No! Wake up! Your tools are not your language! Know your language, don’t rely on your tools! In a way, it was a survivalist’s approach to programming: the power may go out someday, so program like it has already gone out!

And it was good. Because we were being burned by bad tools. Our thoughts were constrained by tooling. Our mental models were atrophied because the tools thought about all that stuff for us.

Perhaps most importantly, The Pragmatic Programmer freed our minds to consider new languages, even if there wasn’t a corporate-approved IDE yet. And so we started using languages like Ruby, which is in many ways a survivalist’s language. If the power went out and your intellisense went away, which language would you rather type character-by-character: Ruby, or Java?

But the absence of tools shapes an ecosystem and a culture just as surely as the presence of them. Today I can watch myself hesitate to create a new class, and when I dig down deep I realize that my resistance comes from years of conditioning about the opportunity cost: pulling out the code, making a new file, making a new test file, disentangling the tests from their old home; plus the hindbrain knowledge that I’ll have very little tooling help in seeing the code’s old home and the new class as a communicating family rather than as isolated, unrelated classes. Today I can watch Ruby programmers talk about how essential it is to unit-test all of the things, without realizing just how much we do that to make up for uncertainty that doesn’t exist in other language ecosystems.

The point of all this, you ask? Programming languages cannot be considered separately from their ecosystems. Our sense of goodness is shaped by what is easy, and what is hard, and what seemed easy but burned us over and over until we made rules about not doing it. I’m not here to tell you to use Ruby, or Smalltalk, or Lisp. But if this article has made you think a little more about how much of your judgements about what makes for “good code” are influenced by the ecosystem you work in, then I’ll call it a success :-)

(Featured image by Derek Finch, made available under the Creative Commons Attribution 2.0 license)