The whole point of REST architecture is to enable a general hypermedia client. But REST is slow! General-purpose graph navigation by link following means a client/server round trip for each link followed. I like to call this REST's "subresource problem." To solve the subresource problem efficiently, Hyperfiddle makes some very unusual architecture choices.
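The subresource problem can be sketched in a few lines. This is a minimal simulation, not real HTTP: the `SERVER` dict and `fetch` function are hypothetical stand-ins for a REST API, and the counter makes the cost of naive link following visible.

```python
# Fake "server": each resource lists links to its subresources, as a
# hypermedia REST API would. All names here are illustrative.
SERVER = {
    "/blog": {"title": "Blog", "links": ["/post/1", "/post/2", "/post/3"]},
    "/post/1": {"title": "Post 1", "links": []},
    "/post/2": {"title": "Post 2", "links": []},
    "/post/3": {"title": "Post 3", "links": []},
}

round_trips = 0

def fetch(url):
    """One simulated client/server round trip."""
    global round_trips
    round_trips += 1
    return SERVER[url]

def render(url):
    """General hypermedia client: recursively fetch every linked subresource."""
    doc = fetch(url)
    return {"title": doc["title"],
            "children": [render(u) for u in doc["links"]]}

page = render("/blog")
# Rendering one page of 4 resources cost 4 round trips: 1 + N for N links.
```

A batching or query-oriented server could answer the same page in one round trip; that gap is what Hyperfiddle's architecture choices are aimed at.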
Comparison of Hyperfiddle with Walmart's Lacinia, a GraphQL implementation for Clojure.
The most elegant hack is when somebody says, "These 2000 lines of code end up doing the same thing as those 2 lines of code would do. I know it seems complicated, but arithmetically it's really the same."
This is a collection of discussion about network effects.
Datomic's key insight is that the Object/Relational Impedance Mismatch is the source of non-essential complexity in CRUD apps, and that immutability in the database permits a solution.
Around 2015/16, state-of-the-art UI thinkers began to explore backing UIs with graph-like state rather than immutable tree-like state. A very rough sketch of this shift and how it relates to the design of Hypercrud.
Brain dump about the state of the art in UI rendering performance in 2016.
"The problem [with REST] is when you want to fetch data in repeated rounds, or when you want to fetch data that isn't expressed well as a hierarchy. Think a graph with cycles -- not uncommon"
Hyperfiddle changes the balance of power between offense and defense.
A 10-minute talk about why Datomic is a basis for solving the problems of REST, which are essentially the complex data-fetching costs of displaying read-only data on a page. Getting rid of read complexity is the whole point of Datomic.
Datomic Pro's code/data locality model assumes perfect caching which works out "because immutability" (major handwaving). To make this work out, Datomic Pro makes some very specific tradeoffs wrt indexes and storage implementation, and at scale this abstraction does leak, which is why Datomic makes us give hints in the form of storage partitions.
Datomic is strongly consistent and linearizable, like git. In researching this I learned that CAP is no longer an effective razor.
MICHAEL GAARE: My observation is that there's a fundamental, philosophical problem. Semantic web has essentially Platonic underpinnings. There's some perfect description of the world, "universal truths" as one guy I worked with would put it, and we "merely" have to describe our data in those terms. That flies in the face of everything we as engineers have come to understand about how people, processes, and language work.
I get asked this a lot, so I will start collecting quotes and stuff here.
Our async abstractions have ways to propagate the error through our asynchronous pipeline, and like, that’s okay, but what if we could make it not fail in the first place? What if we could make failure impossible?
This post compares Datomic (today, 2017, post-Datomic Cloud announcement) to Facebook's graph datastore as described in 2013. They are almost the same, except TAO is a triple store and eventually consistent; Datomic is a 5-tuple store and strongly consistent. Contrary to popular belief, Datomic's single-writer-per-database ACID is exactly the same as Facebook's single-writer-per-shard, so I think Datomic Cloud can absolutely scale up to Facebook-sized write loads, and then beyond due to immutability.
Short questions answered from Slack and Reddit.
I think there is a deep reason for the success of Mongo, which is no-friction reference traversal. Reference traversal is the killer feature that programmers instinctively reach for in the database, because references are deeply baked into modern programming languages; references are how we think.
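The friction gap can be made concrete with a toy contrast (the data here is hypothetical): in a document store, a reference is just nesting and traversal reads like ordinary attribute access, while relational rows split the same fact across tables and make you join them back together by hand.

```python
# Document-style: the reference is nesting; traversal is plain lookup,
# exactly how references work in the host programming language.
user_doc = {"name": "Alice", "address": {"city": "Oakland", "zip": "94601"}}
city_doc = user_doc["address"]["city"]  # => "Oakland"

# Relational-style: the same fact normalized into two "tables"; recovering
# the path requires an explicit join through a foreign key.
users = [{"id": 1, "name": "Alice", "address_id": 10}]
addresses = {10: {"city": "Oakland", "zip": "94601"}}
city_rel = addresses[users[0]["address_id"]]["city"]  # same answer, more friction
```

Both lookups yield the same value; the difference is that the document form matches the reference-following instinct programmers already have.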
HTTP caching is annoying; here is a collection of useful links and notes, specifically about Datomic and Hyperfiddle's immutable, data-oriented architecture, which has some interesting opportunities for "correct" caching.
"Among other reasons, I think they failed because they suck at writes. Like relational databases, they're "place" oriented."
I think this is possible to do; here are some notes.
Code/data locality means "you would just build an in-memory specialized data structure prior to querying, then pass it as a parameter of your Datalog query and call it using regular function calls."
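A rough sketch of that idea under stated assumptions: instead of issuing a remote lookup per fact, you build a specialized in-memory index up front, then pass it into the query as an ordinary parameter so every lookup is a local function call. The datoms, index shape, and `friend_names` query below are illustrative, not the Datomic API.

```python
from collections import defaultdict

# Hypothetical facts as [entity, attribute, value] triples, Datalog-style.
datoms = [
    [1, "person/name", "Alice"],
    [1, "person/friend", 2],
    [2, "person/name", "Bob"],
    [2, "person/friend", 3],
    [3, "person/name", "Carol"],
]

# Step 1: build the specialized in-memory structure prior to querying.
by_ea = defaultdict(list)
for e, a, v in datoms:
    by_ea[(e, a)].append(v)

# Step 2: the "query" receives the index as a parameter; every lookup is a
# regular local function call, not a round trip to storage.
def friend_names(index, eid):
    return [index[(f, "person/name")][0]
            for f in index[(eid, "person/friend")]]

friend_names(by_ea, 1)  # => ["Bob"]
```

The point is that once the data is local and immutable, "querying" degenerates into plain data-structure access, which is what the code/data locality claim is about.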