Lost in a World of Data

Lately I’ve been using the Erlang “Hackney” library to interact with HTTP APIs from Elixir. The principal entry point to that library is a function called request. At its simplest, a call looks like this:

request(:get, "http://example.com")

But calls to request can get a lot more complex. Here’s a rough approximation of a call I used recently.

:hackney.request(:post,
                 "http://example.org",
                 [],
                 {:multipart, [{"api_password",
                                api_password, 
                                {"form-data", []}, []},
                               {"project_id",
                                project_id, 
                                {"form-data", []}, []},
                               {:file, 
                                filename, 
                                {"form-data", [{"name", "file"}]}, 
                                [{"Content-Transfer-Encoding", "binary"}]}]},
                 {:proxy, proxy_host, proxy_port})

Of course, I didn’t actually call it like that all at once; I assigned smaller parts to intermediate variables and used those variables to build up the overall structure piece by piece. But I still had to understand the overall shape of the data that was expected, and know where to slot in each individual part of it.

Note in particular that even where I didn’t care about the value of a particular piece of the data, and was fine with the defaults, I still needed to supply a blank list ([]) to hold its place. Looking back at that call, I’m not sure what some of those instances of [] mean; I just know they need to be there.
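
The contrast is easy to reproduce in any language with positional data structures. Here’s a Ruby analogy (not Hackney’s actual format, just an illustration) comparing positional placeholders with named keys:

```ruby
# Positional, tuple-style encoding, loosely modeled on the Hackney
# multipart field above (a Ruby analogy, not Hackney's actual format).
# Every slot must be filled, even when we only want the defaults,
# so empty placeholder arrays pile up.
field = ["api_password", "s3cret", ["form-data", []], []]

# The same information as named key/value pairs: unused options
# simply stay absent, and every remaining piece is labeled.
field_named = { name:        "api_password",
                value:       "s3cret",
                disposition: "form-data" }
```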

Now, I want to make it clear that I don’t think Hackney has a bad API at all. I think Benoit is doing terrific work on it. As far as I can tell its design is consistent with FP and Erlang best practices. This is a good library.

On top of that, Benoit has done a great job documenting this API. It’s fair to say that I probably never would have succeeded in using it without his documentation.

But there is no denying that the function call above is a monstrosity. It took me a long time, building it piece by piece, looking at my code, then at the docs, then at the code again, over and over, to get it right. And when I got it wrong, the failures were less-than-obvious pattern-matching failures deep down inside the Hackney source code, wherever the offending bit of data was actually being used.

Because, as in any well-decomposed functional program, each part of that mass of data passed to request is handled by its own dedicated function. And those functions delegate to other functions, and so on. See that section that starts with {:multipart, …}? Everything inside there is handled by a family of functions named stream_multipart. These functions process the many variations on multipart/form-data that Hackney can handle. One of them handles the form {:file, …}. It, in turn, delegates to other functions, which eventually delegate to functions in a completely different project, called hackney_lib.

At each level, functions break apart the data and operate on the fragment that they care about and understand. Individually, these functions are small and easy to understand. Individually, these functions know when the data is in the wrong format, and can complain about it.

From the top level, those functions are all implementation details. And intentionally so; I’m not supposed to know about how the Hackney library is decomposed. As a client programmer, I am cheerfully oblivious to how the sausage is made.

Except I’m not, really. Because despite not knowing about all those little functions, I’m responsible for coming up with arguments that can satisfy them: tuples with the precise right number of elements; empty placeholder arrays; atoms with the right name, and so forth.

Of course, Benoit can document all these little data “shapes”. Elixir (and Erlang) even have a special language for documenting them, called typespecs. But if I don’t know which functions are called for which part of the request arguments, this doesn’t do me a lot of good. And Elixir doesn’t have any way of automatically seeing how the request function delegates its data to subsidiary functions, and taking all those little typespecs and assembling them into a great composite typespec for request.

Another option would be to write a great big mongo-typespec for the request function, either duplicating the information on the individual smaller functions or simply skipping the smaller typespecs entirely. But this means keeping that typespec, which is far away from the individual functions that define it, in sync with all those little functions scattered around. What’s worse, that great big typespec would likely turn out to be nearly unreadable, and not terribly useful as usage documentation.

Still another possibility would be to use Records for each of these little bits of an HTTP request. For instance, there might be a ProxyOptions record, with :host and :port fields. This might make it easier to understand how to build up a complex HTTP request.

But this goes against one of the principal philosophies of programming in dynamic functional languages like Elixir or Clojure: that it is better to use “plain old” data structures like lists, tuples, and maps rather than specialized ones whenever possible.

Instead, Benoit has reasonably opted to document all the different usage possibilities, in English and markup, on the main request function. As a result, he has to manually keep this documentation up-to-date every time he makes a change to any of dozens of different functions, in two different libraries, which will ultimately process the request options.

Consider now what a typical Object-Oriented API for the same task might look like. Here’s some pseudocode, in no particular language.

request = HTTP::Request.new();
request.url = "http://example.org";
request.method = "post";
request.body.multipart = true;
request.body.add_multipart_field("api_password", api_password);
request.body.add_multipart_field("project_id", project_id);
file = request.body.add_multipart_file(filename);
file.content_disposition = "form-data";
file.content_disposition_params["name"] = "file";
file.add_header("Content-Transfer-Encoding", "binary");
request.proxy_options.host = proxy_host;
request.proxy_options.port = proxy_port;
request.send();

I don’t want to get too bogged down in whether this is an optimal OO API, or in details like why I would need to manually set the file’s content-disposition to “form-data”. Let’s instead talk about some of the ways this differs from the Elixir code.
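
Fleshed out into runnable Ruby, the sketch above might look like the following. Every class and method name here is invented for illustration; this is not a real HTTP library, and the final send() is omitted since no actual network I/O is intended:

```ruby
# A runnable sketch of the hypothetical OO request API. All names
# are invented; no real HTTP library is being modeled exactly.
module HTTP
  # Proxy configuration with named, discoverable fields.
  ProxyOptions = Struct.new(:host, :port, :user, :password)

  # One part of a multipart body, carrying its own headers and
  # content-disposition parameters.
  class MultipartPart
    attr_accessor :name, :value, :content_disposition
    attr_reader :content_disposition_params, :headers

    def initialize(name, value)
      @name = name
      @value = value
      @content_disposition = "form-data"
      @content_disposition_params = {}
      @headers = {}
    end

    def add_header(key, value)
      @headers[key] = value
    end
  end

  # The request body: parts are added by name, never by position.
  class Body
    attr_accessor :multipart
    attr_reader :parts

    def initialize
      @multipart = false
      @parts = []
    end

    def add_multipart_field(name, value)
      part = MultipartPart.new(name, value)
      @parts << part
      part
    end

    def add_multipart_file(filename)
      add_multipart_field("file", filename)
    end
  end

  class Request
    attr_accessor :url, :method
    attr_reader :body, :proxy_options

    def initialize
      @body = Body.new
      @proxy_options = ProxyOptions.new
    end
  end
end

request = HTTP::Request.new
request.url = "http://example.org"
request.method = "post"
request.body.multipart = true
request.body.add_multipart_field("api_password", "s3cret")
request.body.add_multipart_field("project_id", "42")
file = request.body.add_multipart_file("report.csv")
file.content_disposition_params["name"] = "file"
file.add_header("Content-Transfer-Encoding", "binary")
request.proxy_options.host = "proxy.example.com"
request.proxy_options.port = 8080
```

The point isn’t that this particular design is ideal; it’s that every piece of configuration has a named home, with no positional placeholders.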

First, obviously, it’s more verbose.

Second: let us assume that all we know about this API is that we need to start with an HTTP::Request object. We have no other documentation. Assuming we have either a REPL with some introspection capabilities or an IDE with code completion, we can immediately discover what methods our new request responds to. We can see, for instance, that it has a url=(url) setter method. It’s easy to surmise that we need to set this to determine what host the request will be submitted to. In Ruby, this might look something like this:

irb> request.methods
=> [:body, :url, ...]

Likewise, once we discover the body attribute, we can try it out and discover that it returns an HTTP::Request::Body, which we can then probe for methods.

irb> request.body
=> #<HTTP::Request::Body>
irb> request.body.methods
=> [:add_multipart_field, :multipart, ...]

This kind of discoverability is particularly powerful when we come across existing code we need to modify. Let’s say we need to add proxy authentication to our code. If we see that part of the request is {:proxy, {proxy_host, proxy_port}}, there is no way to “ask” that data structure how to add a login and password. Nor is there a way to find this information out from the request function, without painstakingly tracing down through the code until we find where the {:proxy, …} data structure is actually used.

On the other hand, in our OO version, once we see:

request.proxy_options.host = proxy_host;
request.proxy_options.port = proxy_port;

it is trivial to discover that request.proxy_options is an HTTP::ProxyOptions object, and then to introspect on that class and see that it has proxy_user and proxy_password attributes.
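
For instance, sticking with Ruby and using a hypothetical stand-in class so the snippet is self-contained, the discovery session might look like:

```ruby
# A hypothetical stand-in for the library's HTTP::ProxyOptions class,
# defined inline so the snippet runs; the attribute names are invented
# for illustration.
class ProxyOptions
  attr_accessor :host, :port, :proxy_user, :proxy_password
end

opts = ProxyOptions.new

# Ask the object what it responds to. The authentication attributes
# surface without consulting any documentation.
auth_methods = opts.methods.grep(/^proxy_/).sort
# => [:proxy_password, :proxy_password=, :proxy_user, :proxy_user=]
```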

Third: remember those empty “placeholder” arrays in the call to :hackney.request? We have nothing of the sort in our OO version. Configuration that isn’t needed simply isn’t specified, and doesn’t clutter up the code.

Fourth: most of the lines of code in the OO version could be re-ordered without changing the outcome. Having method names for every part of the request configuration means that the meaning of any given line is explicit, rather than implicit based on position.
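
A tiny Ruby sketch, with an invented config object, makes the contrast concrete: named setters can run in any order, while positional data cannot be shuffled:

```ruby
# Hypothetical config object; the names are for illustration only.
ProxyConfig = Struct.new(:host, :port)

a = ProxyConfig.new
a.host = "proxy.example.com"
a.port = 8080

b = ProxyConfig.new
b.port = 8080                  # same settings, opposite order
b.host = "proxy.example.com"

a == b  # each line names its target, so ordering doesn't matter

# Positional data has no such freedom: swap the elements and you
# have a different value entirely.
["proxy.example.com", 8080] == [8080, "proxy.example.com"]
```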

Fifth, and finally: every part of this code is unambiguous. We don’t have to refer back to a pages-long description of request arguments to remember what each piece of it is about. Nor do we need to break it down into a series of local variables just to remind ourselves of the significance of each bit of data.

You’re probably expecting me to say something like “…and this is why OO is better!” at this point. But that’s not what I’m trying to get across. For one thing, this is really only a critique of dynamic functional languages like Clojure and Elixir/Erlang; programs in statically-typed languages like Haskell tend to spend a lot more time on defining (hopefully self-documenting) types for things. And given data of a particular type, it’s easier to ask the system “what can operate on this type?”

Also, it is clear that functional programming in general does have a number of compelling advantages. And a lot of good arguments are made, by people like Rich Hickey and others, for the value of functions that act on simple data structures rather than opaque and specialized objects. I’m not going to rehash all the arguments here; I just want to emphasize that they are real and shouldn’t be casually dismissed.

But it seems to me that there is an approachability gap here between the dynamic functional approach and the OO approach shown above. There may be a comprehensibility gap as well. The Elixir code may be semantically equivalent to the OO code. It might break down into a similar number of nicely orthogonal pieces internally, albeit organized very differently. But it’s difficult to make a case that it has the same level of explorability as the OO version, or that the finished product is as easy to comprehend and modify without lengthy consultation of documentation.

It seems to me that programmers in languages like Erlang, Elixir and Clojure are going to have to find strategies to contend with that gap in order to build programs that are accessible, either to developers who are new to the project, or to people who are new to programming in general. I don’t know if this will take the form of additions to the languages, better tools, or best practices for building libraries. I’m curious what people with more dynamic functional programming experience have to say about this, and what strategies they have for making functional APIs welcoming and approachable.

This entry was posted in Rants.
  • knewter

    I really enjoyed this. I would probably handle this with records, which I think goes against orthodoxy but also just makes it easier for me to reason about and test meaningfully, I believe. In that case, you would have records that you could inspect to see things like the availability of proxy_options.password, etc.

    But yeah, there’s tons of work that needs to go into ‘explorability’ of these types of interfaces – or at the very least, into documenting how people who *truly get* this stuff deal with these sorts of difficulties. I have had similar experiences trying to dig into how to use other libraries, and when they aren’t as well documented as hackney or maintained by someone as accessible as benoitc, it can basically feel insurmountable without feeling like you just took up mental maintainership of some library you just wanted to use…

    • http://avdi.org Avdi Grimm

      “Mental maintainership” is an apt phrase.

  • Hector

    This is a very interesting post, and for somebody like me that has been trying to learn a new language every year (functional programming and Haskell being the one for 2014) I can relate to the feelings that you describe.

    However, I have mixed feelings about the need for better discoverability of the options available to the developer, as you described. Keep in mind that this is coming from somebody with C# experience, where we have a strong and statically typed system, and yet I see JavaScript being incredibly successful without one. I even consider TypeScript (Microsoft’s approach to giving a type system to JavaScript) superfluous.

    I think what you are describing is just a learning process, though, and learning is usually a slow process. The methods that you mention to discover functionality in a Ruby class are something that I would not have thought to try if I were new to the language. They are obvious to you given your experience with the language.

    Anders Hejlsberg, the father of C# and TypeScript, uses the example that there are 15+ possible overloads for the jQuery constructor function, and that it’s insane to expect a developer to know them all without a type system like the one he suggests for TypeScript…and yet, millions of JavaScript developers thrive every day without a type system or the powerful “IntelliSense” feature provided by Microsoft Visual Studio.

    I am always amazed to find examples on the web for most libraries that I need, and the documentation provided in those examples tends to be enough to get one going. Once you get past the “hello world” example, digging into the code of the library in question seems to be the next step. Not always easy or the most efficient way, but I only need to do that once I am committed to really using a particular tool/module/library, am past the “hello world” example, and nobody else on the web has had the same issues/needs.

    It seems like an ad-hoc process but I think most learning feels like that at the beginning.

    • michaelxavier

      I don’t think you can ascribe JavaScript’s success to language design or the ease-of-use of dynamically typed languages. JavaScript is thriving because it has to: it is more or less the only universally supported client-side programming language available. If the choice is to use JavaScript or not do any client-side programming, of course JS will thrive.

      I’m primarily a backend developer, but I’ve been working more with JS lately, and I can say that on moderately sized JS projects, the dynamic typing becomes a living hell. jQuery has a bit of an infinite-protocol problem where every API call has four ways to call it. Even when using jQuery every day, I have to look up the API docs every single time I use them.

      I’ve been writing recent client-side libraries in TypeScript and have found it quite helpful, even though the type system is relatively loose compared to the type system of Haskell, for example. One of the main benefits is that it allows projects like [DefinitelyTyped](https://github.com/borisyankov/DefinitelyTyped) to exist. You get the best of both worlds there, where someone has (carefully) put a typed interface over a popular library like jQuery, Angular, etc. The advantage static typing has over dynamic is tight, statically checked protocols and weeding out sneaky errors. If you pass an invalid option to something, it will refuse to compile. It won’t give you a mostly-working page that you have to poke and prod at for an hour to find that you’ve fat-fingered an option. I feel like we’re at the point with software where devs should no longer tolerate wasting time like that.

  • Николай Рыжиков

    Hello Avdi,
    I think the possible answer is in this post
    http://michaelfeathers.typepad.com/michael_feathers_blog/2012/03/tell-above-and-ask-below-hybridizing-oo-and-functional-design.html

    Functional style is great “below”: the code is clear, composable, reusable, etc.
    There is nothing more single-responsibility than a pure function :)

    But OO shines at the organizational (system/subsystem) level.

    Nothing stops you in Clojure from making an object-style API (records, protocols, multimethods) for your library.

  • Andy

    I don’t see the harm in creating a spec for a top-level function that takes many complex arguments. I do understand that in creating such a spec, the author is committing him-/herself to updating the spec when one of the deeper, internal functions changes, but the documentation will have to be updated anyway, so I don’t see it as much more work. (Though a way to auto-generate specs from internal functions would be pretty darn cool.)

    That being said, I’d probably use a record, but I’m still pretty new to the erlang/elixir world. Actually that brings up a question…since a record is just syntactic sugar over tagged tuples, would it really be so unorthodox to use one?

  • pminten

    A big pile of configuration in tuples of tuples is really an Erlang idiom; in Elixir this style is less preferred. For example, the Elixir Supervisor module has a helper function for creating child specs.

    A perfectly reasonable Elixir approach would be to have a module Request.Config with functions like `post(url) :: config`, `add_multipart(config, name, value, whatever_that_third_field_is, headers // []) :: config`, etc. Then using the `|>` convention it becomes easy to create a configuration: `config = post("http://example.com") |> add_multipart("api_password", api_password, …); Request.run(config)`.

    This does not hide the underlying value, it just provides a simple set of functions to create a configuration incrementally, much like how the OO solution works.

  • donaldball

    I am by no means a Clojure expert, but it’s my primary job language now, so I’ll take a stab at it.

    I would suggest that passing in a deep data structure as the argument to hackney, as you do, is not the prevalent pattern in Clojure. I think it’s more typical to build that kind of data structure iteratively from a series of pure API helper functions, maybe something like:

    (-> "http://example.com/" build-request (add-proxy "host" "password") (add-multipart …) … submit!)

    That’s probably the primary mechanism by which collaborators would integrate with the library. Applications can skip the helper functions and construct the request data structure directly, but by doing so, they’re trading the convenience, documentation, and support guarantees of the public API for brevity or power or whatever. (I’ve done this when using Korma, for example, to change on the fly the database that queries will run against.)

    As for discoverability without referring to the API docs, I get a lot of mileage out of calling doc on the library namespace(s) and by using tab completion on the namespace(s) in the REPL to get a list of public symbols. I find this about as useful as exploring objects in the Ruby console.

  • Colin Jones

    I agree with a lot of what you’re saying. Though I mostly do Clojure & Ruby, I’m of course speaking only for myself, not all of Clojure/FP-land.

    It’s tough – there’s a lot here to think about, and there are competing goals for sure. Some of these issues feel inherent, and some feel specific to this FP example. #2 is the most interesting to me, but I’ve tried to address all the points below:

    1) Sure.

    2) Right, for a given map/dictionary/hash, you don’t necessarily know what data to put in without knowing what the lower-level functions expect. And so as data flows from a high level of a system to a low level, it can be tricky to figure out what shape the data is allowed to have, when you’re looking at the high level. You need to look at the low level functions to find out. If there are lots of levels, it seems increasingly useful to have some abstraction in place to make things easier to follow. So yeah, I agree there’s that tradeoff with the FP separation of data & functions: going from an arbitrary piece of data to find the functions that can operate on it isn’t easy. I tend to accept that tradeoff in the default case (mostly in exchange for simplicity) and create abstractions when I need them, which turns out to be not often for the projects I work on. In Clojure you can actually make these abstractions via `defrecord` and still use all the usual map functions on your data (with some language-specific caveats).

    On the other hand, in retrospect it seems like a truism that when all we know is what kind of data we have, an OO REPL will let us find out more about what operations are legal than an FP one will. And since with FP, the data isn’t used for organizing/introspecting on code (that’s a job for namespaces/modules), the place you’d look for *functions* wouldn’t be the class of the data, it’d be the namespace of the top-level function, and then other namespaces it depends on. I definitely work in the REPL differently in FP than I do in OO: I auto-complete functions under namespaces, not values. Docstrings and source code in the REPL help, as does stuff like vim-fireplace where you can jump from your editor to namespaces and function definitions. So I feel that some of the discoverability issue is a tooling concern, as I think you’re alluding to with the REPL discoverability stuff. And part of it is knowing where to start / what to look for. Seems like the docs for the high level could also refer clients to the lower-level functions, too, instead of including all the details at the high level. I do generally see the problems here, but I think either I haven’t encountered them often in Clojure-land, or my tooling has just made it easy to deal with.

    I think the really inherent thing here is that when you use objects, you get a free schema + box for functions. In Clojure, anyway, you need to opt into those things. This means when you want it and don’t have it, it sucks. But it also means you’ve got more flexibility and [warning: rampant speculation here] you may be less inclined to create god classes/namespaces/modules since your data is separated from the functions by default.

    3) I must admit a bit of skepticism about needing to pass default values into this top-level function. This doesn’t seem like something inherent about functional programming, dynamic or otherwise. I’m sure the library author has a good reason for doing it this way, but in general, lower-level functions can certainly supply their own defaults, in the same way that the lower-level objects can in the Ruby-ish example.

    4) It seems like the re-orderability issue could easily be solved in the data-oriented/functional version by using maps to replace arrays and positional arguments, right?

    5) The naming of the objects feels like the real win to me in making the OO version easier to understand. And giving names to the bits of data is easy to do by giving them their own key/value pairs in maps.

    Overall) I wonder to what extent these specific problems are a natural consequence of putting a facade in front of a bunch of internal functions/objects/etc. This feels like a leaky abstraction where you need to know what the internals do and expect, whether you’re in FP or OO. Maybe the discoverability-in-the-REPL bit just makes it so easy in Ruby that it makes figuring out the internals a smoother process?

    • http://avdi.org Avdi Grimm

      Re 3: this is partly a result of Erlang (and thus Elixir) being strongly tuple-oriented.

      Re 4: maybe a little, but I’ve had problems with APIs that expected me to assemble big maps/hashes as well.

      Re conclusion: The thing is, in the FP view it’s a facade over a bunch of internal functions. In the OO view, as library author you’re building an object model of HTTP requests. The client isn’t so much discovering “internals”, as they are incrementally discovering the object model you built. The model, when put to work, effectively becomes a DSL for the domain without any extra effort.

  • Zach Kessin

    I’m not sure it’s a functional-vs-OO thing that is going on here. I think you could have a good functional interface that is discoverable in the way that the Ruby code you showed is. I think what you would want to do is have a bunch of pure functions, each of which builds part of the input structure, which is then passed on to the final request function.

  • michaelxavier

    I’d like to read more about where this wisdom about using primitive types for everything in FP comes from. I have a suspicion that it may be linked to an over-reliance on pattern matching and using functions that are overly specialized (or coupled) to specific types. I don’t think this is a good approach a lot of the time.

    I don’t have much choice but to set up a few straw men, because I don’t know the other side of the argument, so please forgive me for that and for any ignorance I have about Erlang; I didn’t dive that deep into it for production code. The only reason I can think of to focus so much on primitives is that the core functions you want to use (maps, folds, etc.) are either destructuring their arguments with pattern matching or using functions in the standard library which operate on lists and *only* lists, for example. If that is the case, make no mistake: this is forcing a highly coupled design. Because you want to use a function from the standard library that takes a list and only a list, for example, you are forcing your datatypes to be made completely from primitives to make that easier, or in this case forcing your users to build these big, poorly structured primitive structures that do not mirror the domain language at all.

    Pattern matching is great and all but in most languages I’m aware of, it does impose quite a bit of opinion on how the data is shaped. That is fine for functions that are at the same level of abstraction, but for truly generic operations like maps, folds, etc, it seems like a mismatch to me. Haskell attempts to solve this with typeclasses, which are similar to interfaces in other languages, where they define a protocol for a typeclass like [Foldable](http://hackage.haskell.org/package/base-4.6.0.1/docs/Data-Foldable.html), so the user is free to use much more expressive types on the end user’s side but still take advantage of loose coupling when doing lower level operations. For generic, lower-level operations in FP, I think this approach is much more powerful than pattern matching.

  • kisai

    I use Clojure nowadays; I have used C, C++, Java, VB.NET, Ruby, Python, Common Lisp, Racket, Haskell, and Erlang.

    As I understand it, the two APIs do two different things: the Hackney request function just sends the request and expects it already built, while the OO API builds and sends the request. This is more declarative vs. imperative. You could build the same kind of API in any other language: procedural, OO, functional, logical, etc. It’s just easier in languages with data-structure literals; in Ruby you could do the same.

    I agree with @donaldball:disqus that the threading macro and helper functions to build the request data structure are the way in Clojure, and can be the way in Erlang.

    Discoverability with auto-completion can come from putting the helper functions inside a module. But writing these helper functions will be kind of repetitive on the pattern matching. You could also assemble the request in a process and have a chain of messages.