HATEOAS, REST, and the quest for a general hypermedia client

What does that even mean?
Web browsers navigate HTML through link following. The links, in the form of <a href="foo.html">, <img src="content.png">, <iframe src="nested.html"> etc., form a graph. Googlebot crawls and indexes that graph. Web browsers browse the graph. What you see on your screen right now is a projection of that graph.
If the graph changes -- if the HTML links change -- the web browser doesn't stop working. Web browsers don't have C++ code in them specific to geocities.com, or aol.com, or yahoo.com. Web browsers are general. They understand web pages and links between web pages, defined as HTML.
This idea of link-following is why the world wide web is so good. Links are always changing, but web sites don't break:
You know who does care when links change? Anyone who's written a screen-scraping Amazon client -- someone who has likely painstakingly sniffed web traffic, read HTML pages, etc. to find which links to call, when, and with what payloads. As soon as Amazon changed their internal processes and URL structure, those hard-coded clients failed -- because the links broke. Yet the casual web surfers were able to shop all day long with hardly a hitch. That's REST in action; it's just augmented by a human being who can interpret and intuit the text-based interface, recognize a small graphic with a shopping cart, and suss out what that actually means.
Link-following means the web doesn't break when the links change. This design pattern is called HATEOAS -- Hypermedia As The Engine Of Application State. The "web browser" is a Hypermedia Client. Making the general Hypermedia Client possible is the whole point of the REST architecture, upon which the world wide web rests.
HATEOAS ... The principle is that a client interacts with a network application entirely through hypermedia provided dynamically by application servers. A REST client needs no prior knowledge about how to interact with any particular application or server beyond a generic understanding of hypermedia.
Unlike the web, our applications break all the time! We change an API, and now we need to update all the application code in lock-step to account for that change.
How might we apply the pattern of link-following to make our applications more robust to API changes?
Restcookbook.com has a great example of what that might look like.
HATEOAS is the design pattern of link-following. They should have just called it that.
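A minimal sketch of the pattern, in the spirit of that kind of example (the account resource and the link rel names here are hypothetical, not a real API): the client hard-codes no URLs and no business rules; it only follows the links the server chooses to advertise.

```javascript
// A hypermedia response embeds the legal next steps as links.
// Resource shape and rel names are illustrative, not a real API.
const accountResponse = {
  balance: -25,
  links: [
    // An overdrawn account advertises only "deposit"; a healthy one
    // might also advertise "withdraw", "transfer", "close".
    { rel: "deposit", href: "/accounts/12345/deposit" },
  ],
};

// A generic hypermedia client: it understands links, not banking.
function follow(response, rel) {
  const link = response.links.find((l) => l.rel === rel);
  return link ? link.href : null; // null => that transition isn't legal right now
}

console.log(follow(accountResponse, "deposit"));  // "/accounts/12345/deposit"
console.log(follow(accountResponse, "withdraw")); // null -- the server said no
```

If the server later moves the deposit URL, this client keeps working, because it never knew the URL in the first place.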
Alas, we stopped using the web for its hypermedia aspects and instead started using it as a delivery mechanism: a way to ship application updates to the user, in an era shaped by desktop software, where you'd have to support every version of your software ever released because the users never updated.

Enter Javascript

So we started building javascript applications for the web: we use the web to distribute our code, and we stopped using web browsers as browsers, using them instead as dumb application-delivery pipes. The application is not HATEOAS, it doesn't use link following, and there is no general client -- no generic API browser for applications.
But HATEOAS was so good, it's why the web even worked in the first place! People have been trying to apply HATEOAS design to applications and data ever since. We would love a general hypermedia client for APIs! If all APIs speak the same hypermedia language, we can actually build higher-level abstractions over them. Swagger is perhaps the most widely known tool for making REST APIs.
The promise of Swagger is that of a general hypermedia client: a browser for APIs. Make all your APIs with Swagger, and the general Swagger client will work with all of them. You can auto-generate documentation, you can effect state transitions in your application with the Swagger client, and you don't need to write any javascript. The Swagger client never breaks.
  • Swagger lets you make REST APIs with tools like this
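The shape of that promise, sketched: given a machine-readable API description (a stripped-down, hypothetical fragment in the style of a Swagger/OpenAPI document), a generic client can enumerate the available operations without any hand-written, API-specific glue code.

```javascript
// A tiny, hypothetical fragment of a Swagger/OpenAPI-style description.
const apiSpec = {
  paths: {
    "/pets":      { get:  { summary: "List pets" },
                    post: { summary: "Create a pet" } },
    "/pets/{id}": { get:  { summary: "Get one pet" } },
  },
};

// A generic "API browser": it reads the spec, not hand-written glue.
function listOperations(spec) {
  const ops = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const [verb, op] of Object.entries(methods)) {
      ops.push(`${verb.toUpperCase()} ${path} -- ${op.summary}`);
    }
  }
  return ops;
}

console.log(listOperations(apiSpec));
// e.g. "GET /pets -- List pets", "POST /pets -- Create a pet", ...
```

The same `listOperations` works against any spec-described API; that is the generality Swagger is reaching for.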
So this means we're modeling our API as a graph of links. Some of the links might be embedded like an <iframe>. For example, here is facebook.com with various embeds highlighted. Wouldn't it be great if we could think at the link/graph granularity of abstraction?
  • facebook.com visualized as HATEOAS links, some embedded
Spoiler alert, Hyperfiddle works like this too (and, doesn't break the web!):
  • A hyperfiddle with embedded links highlighted
Hyperfiddle is HATEOAS. But Hyperfiddle is not REST. We'll come back to that.
If you'd like to understand HATEOAS as it applies to APIs, I suggest Mike Amundsen's book, Restful Web Apis (2013), co-authored by Richardson of the Richardson Maturity Model. I read it three times and should read it again. AFAIK it is still the best book on REST APIs ever written.
  • Mike Amundsen's book, Restful Web Apis (2013) -- Buy this book

But -- REST SUCKS!!!

Because link-following is SLOOWWWWWW! Each link followed is a network round trip to a server!
And how do we make our app faster? We start hand-coding optimizations. We craft a special-purpose backend server coupled to a UI, so it knows exactly what the UI's needs are, can join it all together into one huge database query, and can send it all down in one go. Fast.
Fast, but not general. We have to run custom code in the server now, to make the UI fast. Server coupled to UI. No general client.
(This is essentially the same problem as the Object/Relational Impedance Mismatch, btw. ORMs have an N+1 query problem just like HATEOAS does; what they have in common is that both are graph traversals, and navigating graph edges has latency. See: Datomic vs the Object Relational Impedance Mismatch)
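To make the N+1 shape concrete, here's a sketch (hypothetical data; each "fetch" stands in for a network round trip): fetching one order and then following its item links costs one round trip per link.

```javascript
// Hypothetical in-memory "server"; each lookup stands in for a round trip.
const db = {
  "/orders/1": { items: ["/items/a", "/items/b", "/items/c"] },
  "/items/a":  { name: "widget" },
  "/items/b":  { name: "gadget" },
  "/items/c":  { name: "gizmo" },
};

let roundTrips = 0;
function fetchResource(url) {
  roundTrips++; // in real life: tens of milliseconds of latency, per call
  return db[url];
}

// Naive link-following: 1 request for the order + N for its items.
const order = fetchResource("/orders/1");
const items = order.items.map(fetchResource);

console.log(roundTrips); // 4 round trips for one screen of data: N+1
```

At, say, 50ms per round trip, one screen already costs 200ms of pure latency -- and real screens follow far more than three links.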

And that is why REST is a failure:

"REST does a shitty job of efficiently expressing relational data. I mean REST has its place. For example, it has very predictable performance and well-known cache characteristics. The problem is when you want to fetch data in repeated rounds, or when you want to fetch data that isn't expressed well as a hierarchy (think a graph with cycles -- not uncommon). That's where it breaks down. I think you can get pretty far with batched REST, but I'd like to see some way to query graphs in an easier way."
  • Pete Hunt, Facebook, 2014 April

REST is slow at graph traversals, due to the latency between client, server, and database.

I like to call this REST's "subresource problem."
So Swagger turns out to not even be all that useful! HATEOAS REST with link traversal is too slow for anything but a toy. So wtf are we even using REST for in the first place? Because abstractions -- but our abstractions cost too much performance, and we make the devil's tradeoff: the Backend for Frontend pattern (anti-pattern). It is so horrible that I can't even say the word "pattern" without following it with "anti-pattern", lest someone be confused that I endorse such a monstrosity.

Backend for Frontend pattern (anti-pattern)

The backend-for-frontend anti-pattern means O(n) growth in the backend code we need to maintain. Facebook.com has hundreds of UIs, each running on hundreds of different devices. You need thousands of backends. Thousands of engineers. And the organizational overhead of coordinating all those humans and teams causes your org to slow to a crawl. And we have no choice but to do this: the cost of network I/O breaks our abstractions. This is why all valuable companies eventually get big, and all big companies eventually grind to a halt.
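In code terms, the anti-pattern looks something like this (a hypothetical endpoint, hand-coupled to one screen of one app): every screen on every device tends to grow its own such endpoint, and the backend grows with the number of UIs.

```javascript
// Hypothetical data; stands in for database tables.
const users  = { 42: { name: "Alice" } };
const orders = { 42: [{ id: 1, total: 30 }, { id: 2, total: 12 }] };

// A backend-for-frontend endpoint: one hand-written aggregate,
// shaped for exactly one screen of one UI. Fast, but not general --
// change the screen and you must change this server code too.
function mobileHomeScreen(userId) {
  const user = users[userId];
  const recent = orders[userId].slice(0, 1); // the mobile screen shows only 1 order
  return { greeting: `Hi ${user.name}`, recentOrders: recent };
}

console.log(mobileHomeScreen(42));
// { greeting: 'Hi Alice', recentOrders: [ { id: 1, total: 30 } ] }
```

One round trip instead of N+1 -- at the price of a backend that knows everything about one particular frontend.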

This is what Hyperfiddle solves.

In 2013 I was talking with Mike Amundsen, the author of that book, on the book's mailing list. And here, on 11/27/2013, you can witness the birth of Hyperfiddle: an idea for solving the inline/subresource problem.
And here is the core of Hyperfiddle. Underneath Hyperfiddle is Hypercrud Browser, the general hypermedia client.
To solve the subresource problem efficiently, Hyperfiddle makes some very unusual architecture choices. Hyperfiddle apps are coded in javascript, but the javascript runs in both the browser and the server. The subresource problem (known as the N+1 query problem in database land) is solved by moving this javascript inside the database query engine itself. If queries have zero latency, it doesn't matter how many queries we make. Of course, we need an ACID database with distributed reads, so that our application javascript can execute on the same machine as the database query engine; anything SQL is out, but Datomic is suitable.
So that's how it works. That's how Hyperfiddle solves the problems with REST.