
Hacker terminology: “Golden”

When I was starting out as a young programmer, I worked on (among other things) an air-traffic control system. A full set of hardware for the system (including multiple rack cabinets containing maybe a couple dozen computers and specialized boards, user consoles, testing apparatus, and the cabling to connect it all) was called a “channel”. A full installation consisted of the radar itself, plus two redundant channels.

We had one channel that lived in our lab. It was known as the “golden channel”. As a newbie, I was puzzled as to why it was called “golden”. No part of it was made of gold, nor was it gold in color. But I was junior and I didn’t want to ask silly questions, so I just accepted it. It was the golden channel, and that was that.

There’s another usage of the term “gold” or “golden” that you’re more likely to be familiar with. It used to be, when a software shop decided that their project was “done” and ready to ship, they would create a “golden master” copy. This was known as “going gold”.

In the days when CD-ROMs were ubiquitous, this often meant burning an official “golden master” CD, and putting it somewhere safe. Subsequent copies of the software would be copied from the golden master, and then distributed.

The writeable CD media of the time was often golden in color, but I think this was a (potentially confusing) coincidence. As far as I can tell, the use of the term “gold master” dates back much farther than the use of CDs to distribute software.

A third usage of the term “gold” or “golden” in the software field is in the context of testing. “Gold master testing” refers to a technique wherein you produce some output with the software, manually verify its correctness, and then save that output as a “gold master”. Later, after making changes to the code, you can compare the new output to the master output to ensure that the output only changed in ways that were intended and expected. Once verified, the new output replaces the old gold master.
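
By way of illustration, here is a minimal sketch of gold-master testing in Ruby. All of the names here (the file name, the generate_output stand-in) are hypothetical, not part of any standard tool:

MASTER_FILE = "golden_master.txt"

# Stand-in for whatever process produces the output under test.
def generate_output
  "report: 42 widgets\n"
end

output = generate_output

if !File.exist?(MASTER_FILE)
  # First run: record the output, then verify it by hand before trusting it.
  File.write(MASTER_FILE, output)
  puts "Gold master recorded; verify it manually"
elsif output == File.read(MASTER_FILE)
  puts "PASS: output matches the gold master"
else
  # If the difference turns out to be intended, the new output gets
  # verified and then replaces the old gold master.
  puts "FAIL: output differs from the gold master"
end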

I’ve seen speculation that the use of terms like “golden master” stems from the recording industry’s custom of producing a gold CD when an album sells 500,000 copies. But I believe this is another coincidence. For one thing, as I said, I think the term predates CD technology. For another, “going gold” in software terms means “ready to ship”, whereas in the recording industry it means “shipped and very successful”.

The most likely source of the term that I’ve run across also comes from the recording industry, but from a much earlier era. In 1902, Edison developed a new “Gold moulded cylinder” process for mastering wax phonograph cylinders. The cylinders produced by this process had much better consistency and fidelity to the original recordings than the ones which preceded them.

Whatever the etymology, this is a term which I’ve heard throughout my software career, but which was never explicitly explained to me. In hopes that it will save a few younger programmers some head-scratching, here’s my definition of “golden” as it pertains to software development:

In the context of programming, to call some artifact “gold” or “golden” means that it is the reference copy or version. It is the version from which other copies should be made, and/or to which other systems or artifacts should be compared for correctness.

About the Ruby squiggly heredoc syntax

To my list of Ruby achievements, I can now add the dubious distinction of having come up with the syntax for the new “squiggly heredoc” introduced in version 2.3. Thus playing my part in making the Ruby grammar just a little bit more chaotic.

For reference, the syntax looks like this:

module Wonderland
  JABBERWOCKY = <<~EOF
      'Twas brillig, and the slithy toves
      Did gyre and gimble in the wabe;
      All mimsy were the borogoves,
      And the mome raths outgrabe.

    -- From "jabberwocky", by Lewis Carroll
  EOF
end

This syntax intelligently strips the leading indentation from the quoted string, as explained in this blog post by Damir Svrtan.
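
A quick before-and-after makes the stripping visible (a minimal sketch; requires Ruby 2.3 or later):

dashed = <<-EOF
    some indented text
  EOF

squiggly = <<~EOF
    some indented text
  EOF

dashed    # => "    some indented text\n"
squiggly  # => "some indented text\n"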

As I recall, the syntax I suggested for that feature was more or less the first thing that popped into my head. That said, it wasn’t a completely arbitrary choice. There’s a mnemonic reason for it, which I figure I probably ought to share now that it’s an official Ruby feature.

The mnemonic is this: the text quoted by a squiggly heredoc is squished, or more accurately the leading whitespace is squished, over to the left margin. Thus, the syntax is that of a “dashed heredoc”, except that the dash has been squished as well, leaving it “squiggly”.

I guess I should have called it the “squished heredoc” when I suggested it.

The real 10x developer

Ask me for a 10x developer, and I’ll show you a hacker who can take a list of 10 tickets, and reject 9 of them because they don’t appreciably advance the project goals.

Like some kind of “project manager”, or something.

The Inescapable Pragmatism of Procedures

Dr. Ben Maughan writes:

At the moment I am rewriting some LaTeX notes into org mode to use in lecture slides. This involves several repetitive tasks, like converting a section heading like this

\subsection{Object on vertical spring}

into this

** Object on vertical spring

Whenever I come across a problem like this, my first inclination is always to write a regular expression replacement for it.

A regular expression solution would likely be concise. It would be elegant. It would neatly state the abstract transform that needs to be performed, rather than getting bogged down in details of transformation. A clean, beautiful, stateless function from input to output.

And by definition, a regular expression replacement solution would have a well-defined model of valid input data. Only lines that match the pattern would be touched.
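
For this particular heading transform, the substitution really is a one-liner; here it is in Ruby, purely for illustration (Maughan is working in Emacs, not Ruby):

line = '\subsection{Object on vertical spring}'
line.sub(/\A\\subsection\{(.*)\}\s*\z/, '** \1')
# => "** Object on vertical spring"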

Like I said, a regular expression is always my first thought. But then I’ll work on the regex for a while, and start to get frustrated. There’s always some aspect that’s just a little bit tricky to get right. Maybe I’ll get the transform to work right on one line, but then fail on the next, because of a slight difference I hadn’t taken into account.

Minutes will tick by, and eventually I’ll decide I’m wasting time, throw it away, and just do the editing manually.

Or, on a good day, when I’ve had just the right amount of coffee, I will instead remember that macros exist. Macros are the subject of Maughan’s article.

The trick to making a good macro is to make it as general as possible, like searching to move to a character instead of just moving the cursor. In this case I did the following:

  1. Start with the cursor somewhere on the line containing the subsection and hit C-x C-( to start the macro recording
  2. C-a to go to the start of the line
  3. C-SPC to set the mark
  4. C-s { to search forward to the “{” character
  5. RET to exit the search
  6. C-d to delete the region
  7. Type “** ” to add my org style heading
  8. C-e to move to the end of the line
  9. BACKSPACE to get rid of the last “}”
  10. C-x ) to end the recording

Now I can replay my macro with C-x e but I know I’ll need this again many times in the future so I use M-x name-last-kbd-macro and enter a name for the macro (e.g. bjm/sec-to-star).

If I ask Emacs to show me an editable version of Maughan’s macro, I see this:

C-a			;; move-beginning-of-line
C-SPC			;; set-mark-command
C-s			;; isearch-forward
{			;; self-insert-command
RET			;; newline
C-d			;; delete-char
**			;; self-insert-command * 2
SPC			;; self-insert-command
C-e			;; move-end-of-line
DEL			;; backward-delete-char-untabify
This is the antithesis of a pattern-matching, functional-style solution. This is imperative code. It’s a procedure.
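
To make the contrast concrete, here is a rough Ruby transliteration of the same procedure (my own hypothetical rendering, not code from either post), with each step shadowing a keystroke:

def sec_to_star(line)
  brace = line.index("{")       # C-s { : search forward to the "{"
  return line if brace.nil?     # no brace: leave the line untouched
  line = line[(brace + 1)..-1]  # C-SPC ... C-d : delete up through the "{"
  line = "** " + line           # type "** " to add the org-style heading
  line.sub(/\}\s*\z/, "")       # C-e, DEL : drop the trailing "}"
end

sec_to_star('\subsection{Object on vertical spring}')
# => "** Object on vertical spring"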

Let’s list some of the negatives of the procedural style:

  • It reveals nothing about the high-level transformation being performed. You can’t look at that procedure definition and get any sense of what it’s for.
  • It’s almost certainly longer than a pattern-replacement solution.
  • It implies state: the “point” and “mark” variables that Emacs uses to track cursor and selection position, as well as the mutable data of the buffer itself.
  • It has no clear statement of the acceptable inputs. It might start working on a line and then break halfway through.

Now let’s talk about some of the strengths of the procedural approach:

  • It is extraordinarily easy to arrive at using hands-on trial and error.
  • The hands-on manipulation becomes the definition, rather than forcing the writer to first identify the transforms, then mentally convert them into a transformation language.
  • It has a fair amount of robustness built-in: by using actions like “go to the next open bracket”, it’s likely to work on a variety of inputs without any specific effort on the part of the programmer.
  • It can get part of the work done and then fail and ask for help, instead of rejecting input that fails to match the pattern.
  • It lends itself to a compelling human-oriented visualization: a cursor, moving around text and adding and deleting characters. In other words, it can tell its own story.
  • You can edit it without thinking too hard. You don’t have to hold a whole pattern in your head. You can just advance through the story until you get to the point where something different needs to happen, and add, delete, or edit lines at that point.
  • As the transforms become more elaborate, a regex-transformational approach will eventually hit a wall where regex is no longer a sufficiently powerful model, and the whole thing has to be rewritten. There’s no such inflection point with procedural code.

Time after time, the pattern-matching, functional, transformational approach is the first that appeals to me. And time after time it becomes a frustrating time sink of formalizing the problem. And time after time, I then turn to the macro approach and just get shit done.

The procedural solution strikes me as being at the “novice” level on the Dreyfus Model of Skill Acquisition. We tell the computer: do this sequence of steps. If something goes wrong, call me.

By contrast, more “formal” solutions strike me as an attempt to jump straight to the “competent” or even “proficient” level: here is an abstract model of the problem. Get it done.

One problem with this, at least looking at it from an anthropomorphic point of view, is that this isn’t how knowledge transfer normally works. People work up to the point of advanced beginner, then competent, then proficient by doing the steps, and gradually intuiting the relations between them, understanding which parts are constant and which parts vary, and then gaining a holistic model of the problem.

Of course, we make it work with computers. We do all the hard steps of modeling the problem, of gaining that level-three comprehension, and then freeze-dry that model and give it to the computer.

But this imposes an artificially high “first step”: witness me trying, and failing, to get a regex solution working in a short period of time, before reverting to the “dumb” solution of writing a procedural macro through trial and error.

And I worry about the scalability of this approach, as we have to do the hard work of modeling the problem for every last little piece of an application. And then re-modeling when our understanding turns out to be flawed.

This is one reason I’m not convinced that fleeing the procedural paradigm as fast as possible is the best approach for programming languages. I fear that by assuming that a problem must always be modeled before being addressed, we’re setting ourselves up for the exhausting assumption that we have to be the ones doing the modeling.

(And I think there might be a tiny bit of elitism there, as well: so long as someone has to model the problem before telling the computer how to solve it, we’ll always have jobs.)

This is also why I worry a little about a movement toward static systems. The interactive process described above works because Emacs is a dynamic lisp machine. A machine which can both do a thing and reflect on and record the fact that it is doing a thing, and then explain what it did, and then take a modified version of that explanation and do that instead.

I’ve recently realized that I’m one of those nutjobs who wants to democratize programming. And I think in order for that to happen, we need computing systems which are dynamic, but which moreover are comfortable sitting down at level 1 of the Dreyfus model. Systems that can watch us do things, and then repeat back what we told them. Systems that can go through the rote steps, and ask for help when something doesn’t go as expected.

Systems that have a gradual, even gradient from manual to automated.

And then, gradually, systems that can start to come up with their own models of the problem based on those procedures. And revise those models when the procedures change. Maybe they can come up with internal, efficient, elegant transformational solutions which accomplish the same task. But always with the procedure to fall back on when the model falls apart. And the users to fall back on when the procedure falls apart.

Now, there are some false dichotomies that come up when we talk about procedural/functional, formal/informal. For instance: there’s no reason that stateful, destructive procedures can’t be built on top of persistent immutable data structures. The bugbear of statefulness needn’t haunt every discussion of imperative coding.
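
A tiny, entirely hypothetical Ruby sketch of that idea: an editor object with an imperative-feeling interface, where every “mutation” just appends a frozen snapshot to a history, so destructive-looking steps never actually destroy anything:

class Editor
  def initialize(text)
    @history = [text.freeze]  # every version is kept, immutable
  end

  def text
    @history.last
  end

  def replace(pattern, replacement)
    @history << text.sub(pattern, replacement).freeze
    self
  end

  def undo
    @history.pop if @history.size > 1
    self
  end
end

e = Editor.new('\subsection{Object on vertical spring}')
e.replace(/\A\\subsection\{(.*)\}\z/, '** \1')
e.text  # => "** Object on vertical spring"
e.undo
e.text  # back to the original LaTeX line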

But anyway, getting back to the point at hand: there is an inescapable pragmatism to imperative, procedural code that mutates data (at least locally). There is a powerful convenience to it. And I think that convenience is a signal of a deeper dichotomy between how we show things to other people, vs. how we [think we should] explain things to computers. And for that reason, I’m nervous about discarding the procedural model.

P.S.: I’m going to flagrantly exploit the popularity of this post to say: if you like software thinky-thoughts such as this one, you might also enjoy my newsletter!

Contempt as a sign of organizational incompetence

A few months back I wrote on my personal journal about how incompetently written firmware in a VTech child’s camera led to my 5-year-old daughter losing cherished memories. I also recorded their dismissive response to a flaw that would be considered recall-worthy in any camera made for adults.

Sadly, there is a deeply ingrained seam of reflexive apologetics for negligent software among hackers, as several of my peers tried to tell me it was my own fault.

Here’s the thing about negligence, though: it’s rarely found in isolation. This week it came to light that VTech had been hacked. Turns out, it was far worse than first thought: the attackers were able to access not only home addresses and passwords, but 190 gigabytes of children’s photos.

A couple weeks ago, hackers successfully broke into the servers of connected toy maker Vtech and stole the personal information of nearly 5 million parents and over 200,000 kids. What we didn’t know until now: The hackers stole pictures of kids, too.

This is very bad. The hacker’s identity is still unknown, but he’s been updating Motherboard with details about the hack. When the story broke a couple days ago, the site reported that the hacker broke into Vtech’s servers and stole the names, emails, passwords, download histories, and home addresses of 4,833,678 parents who bought the company’s devices. The massive batch of data also contained the first names, genders, and birthdays of over 200,000 children.

Source: The Horrifying Vtech Hack Let Someone Download Thousands of Photos of Children

Just in case there was any doubt as to whether this was a case of negligence:

For example, there is no SSL anywhere. All communications are over unencrypted connections including when passwords, parent’s details and sensitive information about kids is transmitted. These days, we’re well beyond the point of arguing this is ok – it’s not. Those passwords will match many of the parent’s other accounts and they deserve to be properly protected in transit.

Obviously, VTech should never be trusted again. In an ideal world they would face criminal prosecution, be dropped from store shelves, and/or be driven into bankruptcy by civil suits.

But this experience also serves to reinforce a larger lesson: outward contempt for users is always a sign of deeper organizational flaws. And the more data we entrust to corporations, the more their flaws become our problems.

I think this also suggests that an old rule for evaluating people is just as true for organizations. That rule being: how you treat children reveals a lot about your values and overall integrity.
