Reflections of knowledge
Designing Web APIs for sustainable interactions within decentralized knowledge graph ecosystems.
Web services emerged in the late 1990s as a way to access specific pieces of remote functionality, building on the standards-driven stability of HTTP, which was rapidly becoming a universal protocol. Interestingly, the Web itself has drastically changed since. During an era of unprecedented centralization, almost all of our data relocated to remote systems, which appointed Web APIs as the exclusive gateways to our digital assets. While the legal and socio-economic limitations of such Big Data systems began painfully revealing themselves, the window of opportunity for decentralized data ecosystems opened up wider than ever before. The knowledge graphs of the future are already emerging today, and they’ll be so massively large and elusive that they can never be captured by any single system—
Web APIs: a short history of remote functionality
As its various names indicate, the concept of a Web Application Programming Interface—Web API or Web service for short—revolves around exposing functionality over the Web. In contrast to browser APIs, which provide functionality locally, a Web API offers access to remote functionality for browsers, servers, and all other kinds of clients.
Action-oriented APIs
The initial thinking was action-oriented, mimicking Remote Procedure Call (RPC) behavior from well-known programming languages such as Java and C++. Whereas it could be tricky to get RPC with specialized protocols working across the public Internet, the universality of the Web’s protocol HTTP made it an ideal candidate for exposing remote functionality. After all, the Web was already exposing documents remotely, so system administrators were unlikely to block the TCP port 80 that channels WWW traffic. The RPC approach treats operations as first-class citizens. For instance, placing an order would be achieved by sending a <createOrder> XML body over HTTP to a fixed endpoint URL such as http://shop.example/order-service. These initial Web service implementations had little in common with the Web’s philosophy, only considering it a convenient tunnel to smuggle RPC across firewalls.
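To make this concrete, here is a minimal sketch of what such an action-oriented call could look like from a client, assuming a hypothetical order-service endpoint and <createOrder> message format:

```ts
// Action-oriented (RPC-style) call: the operation name lives inside the message body,
// and every request goes to the same fixed endpoint URL (all names are hypothetical).
async function createOrder(productId: string, quantity: number): Promise<string> {
  const body = `
    <createOrder>
      <product>${productId}</product>
      <quantity>${quantity}</quantity>
    </createOrder>`;
  const response = await fetch("http://shop.example/order-service", {
    method: "POST",
    headers: { "Content-Type": "text/xml" },
    body,
  });
  return response.text(); // for example, an XML acknowledgement of the new order
}
```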
Document-oriented APIs
The universality of the Web is actually not based on taking a custom function approach, but rather on a resource-oriented design that models information and interactions as hypermedia documents. In Web APIs adhering to this approach, data documents are first-class citizens corresponding to real (or virtual) things in the domain of discourse, using the Web as an interface to inspect and manipulate the state of that world. A transition to such document-oriented Web APIs—REST APIs—aims to embrace the Web’s underlying nature by exposing functionality as a set of resources. For example, products can be ordered by modifying an order resource via a POST request to https://shop.example/orders/5787 with a JSON document describing the desired product and quantity.
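In client code, such a document-oriented interaction might look roughly as follows; the JSON field names are hypothetical, since they depend on how the API models its resources:

```ts
// Document-oriented (REST-style) call: the order is a resource with its own URL,
// and the HTTP method expresses the kind of manipulation.
async function addProductToOrder(orderId: number, productId: string, quantity: number) {
  const response = await fetch(`https://shop.example/orders/${orderId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ product: productId, quantity }),
  });
  return response.json(); // the updated representation of the order resource
}
```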
Graph-oriented APIs
A growing focus on data-driven platforms meant that an increasing proportion of interactions were reading and changing data, as opposed to affecting real-world processes. Client-side applications evolved into elaborate data viewers with a high degree of personalization. Web APIs with pre-determined document boundaries were seen as too coarse-grained, because these apps would typically have to combine parts of different documents. Graph-oriented Web APIs emerged as a subclass of resource-oriented APIs, in which the first-class citizens are the results of granular data questions. For example, a client would send an HTTP GET request to a URL such as https://shop.example/graphql?query={order(id:5787){product}}, containing a GraphQL query as a parameter.
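A corresponding client-side call could look like the sketch below; only the query string changes between requests, while the endpoint URL stays fixed (the field names are again hypothetical):

```ts
// Graph-oriented (GraphQL-style) call: the client states exactly which fields it wants.
async function getOrderedProduct(orderId: number): Promise<string> {
  const query = `{ order(id: ${orderId}) { product } }`;
  const response = await fetch(
    `https://shop.example/graphql?query=${encodeURIComponent(query)}`);
  const { data } = await response.json(); // GraphQL responses wrap results in a data field
  return data.order.product;
}
```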
This API subclass only subtly differs from other resource-oriented APIs, in that the client has a larger degree of control over the representations returned by the API. In most other resource-oriented APIs, the server is typically the main driver of what documents are exposed, often providing a limited or even finite list of available URLs or URL patterns. In contrast, clients of a graph-oriented Web API play a prominent role in identifying its resources, because there is a large or even infinite list of possible data documents that can be requested, each one corresponding to a query.
Addressing common Web API misconceptions
The above analysis shows that the current thinking underpinning Web APIs was strongly influenced by the idea of offering remote functionality, which then morphed into remote access to large-scale data graphs residing at a single location. However, interactions with such APIs do not translate well to environments with different characteristics.
Furthermore, many discussions incorrectly distinguish between these API approaches, and hence between the benefits or drawbacks assumed to arise from them, some of which also stem from vague or incorrect labeling. For example, hundreds of blog posts claim to compare GraphQL and REST, while failing to understand that GraphQL APIs are just a particular kind of resource-oriented API, in the same way that we cannot meaningfully compare orange juice to juice (but we can compare apples to oranges).
In particular, let us examine 3 corrections to logical flaws that affect how Web APIs and their clients are developed today.
1. Clients and servers are not limited to the abstractions of their APIs
Quite a bit of the argumentation touting the benefits of graph-oriented APIs such as GraphQL and SPARQL fails to distinguish between the query language and the API. GraphQL and SPARQL are actually each two distinct concepts identified by the same name:
- a query language to express data selections and updates
- a Web API for remotely processing queries in that language
When this distinction is ignored, supposed benefits are regularly and incorrectly attributed to the entire concept as opposed to one of its two distinct components. Server-side support of a given query API is too often incorrectly assumed to be a prerequisite for the client-side usage of the query language.
In reality, it is the other way round: it is perfectly possible to develop client-side apps with GraphQL or any other query language, without requiring a GraphQL API on the server side. After all, a client-side library could translate that query into concrete HTTP requests and query the resulting data locally. In contrast, a GraphQL API can only process GraphQL queries. So it’s the API that is limited, not the client. This not only applies to queries, but also to any kind of abstraction clients might want to use.
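As a sketch of this idea, the snippet below uses the reference graphql-js library to execute a GraphQL query entirely on the client, with resolvers that translate it into plain resource-oriented HTTP requests; the /orders resource and its fields are hypothetical, and a production library would generate the schema and resolvers rather than hard-coding them:

```ts
import { buildSchema, graphql } from "graphql";

// A client-side schema describing the shape of the data we want to query.
const schema = buildSchema(`
  type Order {
    id: ID!
    product: String
  }
  type Query {
    order(id: ID!): Order
  }
`);

// Resolvers that fetch from ordinary resource-oriented URLs; no GraphQL server involved.
const rootValue = {
  order: async ({ id }: { id: string }) =>
    (await fetch(`https://shop.example/orders?id=${id}`)).json(),
};

// The query is evaluated locally; only plain HTTP requests leave the client.
const result = await graphql({
  schema,
  source: "{ order(id: 5787) { product } }",
  rootValue,
});
console.log(result.data);
```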
We can dismantle a similar but less common fallacy on the server side: a server’s storage does not need to mimic the API through which it is exposed. For example, just because the API organizes JSON data in certain ways across certain resources does not mean this structure is reflected in a similar on-disk document organization on the server. While there can of course be performance or other reasons to establish such an equivalence, storage and API can each go their own way.
2. Every API is a query API by definition
While the idea is hardly new, GraphQL’s claim to fame was that it self-identified as a query API. What is meant is that every request involves the client sending a structured query, to which the server responds with matching results. However, it is key to understand that literally any Web API satisfies that definition, the only difference being the expressivity (and perhaps explicitness) of the query language used. To see this, let’s compare a generic resource-oriented request to a GraphQL request:
https://shop.example/orders?id=5787
https://shop.example/graphql?query={order(id:5787){product}}
Although the generic resource-oriented request might look less like a query language than the GraphQL request, both can equally be labeled and studied as query languages in their own right. Valid expressions in the first language include all positive integers; valid expressions in the second include all GraphQL queries.
To make their equivalent status as query languages easier to see, consider these slight syntactical variations on the above two requests:
https://shop.example/resources?filter={orders_by_id:5787}
https://shop.example/graphql?queryId=3567967574
Note how the query language of the generic resource-oriented API now has a slightly more complex syntax, while the GraphQL interface’s syntax has been reduced to integers. After all, because of the properties of the GraphQL language, it is perfectly possible to create a piece of code that assigns a number to every GraphQL query imaginable, thereby introducing an equivalent syntax using only integers. The mapping I happen to have chosen here assigns the ID 3,567,967,574 to the order query above.
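A tiny sketch of what the integer syntax could look like on the server side; real GraphQL deployments use a similar technique under the name persisted queries, although the finite lookup table below (rather than an enumeration of all possible queries) is used purely for illustration:

```ts
// Hypothetical mapping from integer IDs to full GraphQL queries.
const queriesById = new Map<number, string>([
  [3567967574, "{ order(id: 5787) { product } }"],
  // ...an entry for every query the interface is willing to answer
]);

// The "query language" of this interface is now just integers,
// yet it answers exactly the same questions as before.
function resolveQueryId(id: number): string | undefined {
  return queriesById.get(id);
}
```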
As such, whether any of these options constitutes a query language or not is purely a matter of subjective interpretation; if you wouldn’t call the /orders?id=5787 API a query API because its query syntax is limited to numbers, then by the same logic we cannot call a GraphQL API a query API, because there exists an equivalent syntax consisting of only numbers. And while some languages indeed allow for more detailed selections, there are always certain selections you cannot express, so we cannot draw a principled line between query and non-query. Hence every API is a query API, and the distinction is not fundamentally meaningful.
Let’s instead look at what both APIs have in common: in one way or another, they select a subset of the data on the server and thus return the result of some query. What exactly the language looks like is irrelevant in this determination. The only thing that matters at Web scale is the variety of requests that clients create as a consequence of the chosen resource granularity, because this affects functional requirements such as authorization and scalability aspects such as caching.
3. No universal API exists to satisfy data selection needs of all clients
Another flaw in Web API thinking is the assumption that the interface or query language can always be made sufficiently expressive for the API to accommodate any kind of client-side data selection need. The GraphQL API indeed started from a mismatch between what clients need and what traditional resource-oriented APIs offer.
Web APIs whose partitioning of data across resources is primarily server-driven can cause superfluous data transfer and an elevated number of requests. This occurs when clients are only interested in part of the data returned by the server (such that the rest is sent and parsed unnecessarily) or when the data they need is spread across multiple resources (leading to more resources being fetched). Because GraphQL responses are more client-driven, owing to the higher expressivity of individual requests, the client gains an increased degree of control over what data is returned by any single request. The resulting finer granularity can lower both the number of needed resources and the amount of data inside their representations.
While such solutions were born out of a certain necessity, especially with regard to the limited processing power and bandwidth of mobile devices, the resulting solutions are not always measured as a whole. As we have demonstrated in previous research, higher granularity leads to lower cache effectiveness, so the decrease in bandwidth is not a net gain but rather a trade-off between bandwidth and server versus client processing time. That is fine if it’s what you’re after, since multiple scenarios favor server over client processing time. However, the benefits are not universal and should be reconsidered, based on measurable evidence, when circumstances are different.
But even if the impact of API design decisions is measured, such experiments often also compare different client-side development styles because of the aforementioned assumption that client and server need to use the same abstractions. So the choice of API trade-offs is thereby considered jointly with the choice of programming abstraction, unnecessarily balancing objective measurements with subjective preference. Moreover, we have discovered that measurements and assumptions of a single API do not necessarily extend to the same client accessing multiple such APIs. This is of course exactly what happens when building apps on top of distributed data sources.
The challenges of decentralization
Imagine a large and complex knowledge graph that cannot be stored in a single place, for a variety of reasons (practical, societal, legal, or other). Instead, this knowledge graph is spread across multiple sources; perhaps to the extent that every person has their own space where they store their own data. And there is no coordination of which data is stored where, or of how data is exposed.
This is what we call a decentralized knowledge graph. An additional complexity is that each piece of data in this graph can have different permissions associated with it: not all data is public, and some of it can only be read or written by certain people.
For example, I can store my data in my own data vault, you can store your data in yours, and Dani and Luka store their data in theirs. Interestingly, even though they are stored separately, decentralized knowledge graphs can be connected. For example, you and Luka could like the same movie, and perhaps Dani has commented on a blog post that I wrote, which you are now reading. So if we were to fetch those 4 individual knowledge graphs, each through its own API, and put them together, an app would be able to show a social feed relating the four of us.
While the sum of many individual knowledge graphs is a bigger knowledge graph, the sum of many individual APIs is not a bigger API, but rather a mess of interactions in which neither the server nor the client can improve the situation much within the traditional framework of thinking about APIs. For instance, even if every single source offered the most expressive API imaginable, requests to each of the sources are still needed, some of which might depend on data from each other, further increasing the number of requests. Additionally, clients might not always know exactly what to ask for, so the increased expressivity also imposes the burden of identifying the right queries.
This explains why decentralized knowledge graphs on the Web require a fundamental reframing of how we think about exposing and consuming Web APIs. The key trick we are trying to pull off is giving applications the impression that they work with all of the knowledge, whereas they can only ever access a very small subset of it at any given point in time. As in Plato’s allegory of the cave, our apps have to reconstruct the world based on shadows projected on a wall. We will explore an ecosystem of Web APIs, in which apps access and manipulate reflections of Web-scale knowledge graphs, while maintaining the illusion of entirety.
Exposing decentralized knowledge via Web APIs
Abstract interface to abstract knowledge
Taking into account our insights from earlier, let us take a step back and consider what it is fundamentally that we are aiming to do. Our core mission is to make browser apps act upon decentralized knowledge spread across the Web. We need to figure out how to get the relevant knowledge to the app. At this point, we’re not committing to any specific data format, neither on the server nor on the client. So, very schematically, our problem statement looks like this:
More specifically, our app needs to read from and write to specific parts of this knowledge graph. These might be the parts the user has access to, or parts of interest to the app at some particular point in time. In any case, the app is not able to load the entire knowledge graph because it is too large and too intangible. The app will appear to be operating on the entire graph regardless, and conceptually it looks like this:
Note how the image formed by the app is incomplete, so it will have to maintain the illusion of completeness toward the user.
A Web API to a single data source
Of course, abstract knowledge cannot be transferred across the wire, so we need to digitize that knowledge into concrete representations of resources that can be sent from a server to a client over HTTP. If the knowledge is located on a single server, we could place a JSON-based Web API in front of that server, through which the client can request specific pieces of knowledge. For now, we will assume nothing about how the knowledge is structured inside the server itself.
Conceptually, the client thus sends HTTP requests, in response to each of which the Web API finds one or more pieces of knowledge, and returns them to the client.
Note how the knowledge graph is now contained within a single server:
By using a concrete API, we materialize the knowledge in some concrete serialization format, in this case JSON. We also commit to a certain structural organization of content across multiple resources.
In general, we don’t need to know how the knowledge is structured internally on the server side. In some cases, however, the storage structure can resemble the resource-oriented structure of the API. The API itself remains necessary for concerns such as authorization, and to minimize the impact of changes to the storage for the client. In this example, incoming requests loosely correspond to documents on the server:
The above two diagrams visualize some of the design motivations behind GraphQL: the app is making multiple requests to the API, and might not need all of the data returned in each JSON response.
A GraphQL API, in essence, thus aims to correct for the mismatch between client needs and the way the API models the underlying knowledge by providing more granular resources that correspond to multiple model instances on the server. This is how a GraphQL API conceptually accesses data from multiple underlying resources:
So when people refer to an interface as a query API, what they seem to mean is “an API that can collect data from multiple things that are each modeled as an individual unit within the server’s singular model of its domain of discourse”. For instance, if the server’s domain consists of orders and products, then an API exposing single orders and single products as resources would colloquially not be called a query API, whereas an API that can combine data from multiple orders and products would receive that label.
Clearly, these informal definitions strongly depend on the specific way an API chooses to concretely model abstract knowledge, and—
Web APIs to multiple data sources
Because decentralized knowledge graphs are stored across multiple locations, the app needs to construct its image of that knowledge using data residing in multiple servers. So let us update the earlier conceptual image of clients accessing a knowledge graph to take this location aspect into account:
As an aside, not pictured in this diagram is data discovery: in addition to data being distributed, decentralization means that this distribution does not happen in a centrally coordinated way. Therefore, the app needs to have or acquire insights into how data is spread in order to read or write data. Such insights could come from additional APIs within the network. We will not further pursue this topic here.
In order to translate the abstract decentralized knowledge graph into concrete Web APIs offered by each of the data sources, we need to commit to a data paradigm that allows for partial and distributed data. This is not supported out-of-the-box by the default interpretation of common formats such as JSON.
Rather than reinventing such a format from scratch, we will use Linked Data in the RDF format for the remainder of the examples; RDF is also available in a JSON syntax. In addition to supporting decentralized knowledge, this format also allows us to maintain a consistent interpretation of data, regardless of where it is stored. To exchange Linked Data, we need to materialize our abstract knowledge into concrete vocabularies, similar to how a concrete JSON structure must be chosen for JSON-based Web APIs. For example, our products could be modeled using the Schema.org or Wikidata vocabularies, which gives the API concrete data to expose.
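As an illustration of such a commitment, a single product could be described with Schema.org terms in the JSON-LD syntax of RDF (shown here as a TypeScript literal; the identifiers and values are hypothetical):

```ts
// A hypothetical product description using the Schema.org vocabulary in JSON-LD.
// Because every identifier is a URL, the description keeps its meaning
// regardless of which server stores or serves it.
const product = {
  "@context": "https://schema.org/",
  "@id": "https://shop.example/products/42",
  "@type": "Product",
  "name": "Fair-trade coffee beans",
  "offers": {
    "@type": "Offer",
    "price": "8.50",
    "priceCurrency": "EUR",
  },
};
```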
Apps then collect Linked Data from remote sources via some Web API that exposes RDF representations, and integrate those by combining their individual RDF graphs into a partial local knowledge graph:
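A rough sketch of that collection step, assuming the sources expose Turtle representations and using the N3.js library for parsing and local storage; the vault URLs are hypothetical and error handling is omitted:

```ts
import { Parser, Store } from "n3";

// The partial local knowledge graph that the app will operate on.
const localGraph = new Store();

// Fetch RDF from an independent source and merge its triples into the local graph.
async function addSource(url: string): Promise<void> {
  const response = await fetch(url, { headers: { accept: "text/turtle" } });
  const quads = new Parser({ baseIRI: url }).parse(await response.text());
  localGraph.addQuads(quads);
}

await Promise.all([
  addSource("https://vault.example/dani/comments"),
  addSource("https://vault.example/luka/movies"),
  addSource("https://blog.example/posts/reflections-of-knowledge"),
]);
// localGraph now holds a partial, merged reflection of the decentralized knowledge graph.
```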
Just like GraphQL aims to be a standardized Web API for JSON-based data, a couple of specified or standardized Web APIs exist for Linked Data, such as:
- a Linked Data Platform API (LDP) that uses a domain-specific document organization
- a SPARQL API that allows data access based on SPARQL queries
- a Triple Pattern Fragments API (TPF) that allows data access with simpler queries
- a TREE API (TREE) that facilitates browsing and traversing collections in multiple ways
- …
We can reuse those and other Web APIs within the knowledge ecosystem. As with JSON APIs, the underlying implementation might or might not use the same structure as the API to organize resources in the server storage:
An LDP API resembles traditional JSON APIs most closely, in the sense that it is up to the server to decide how it structures its application domain. For example, every product and order could have its own document. SPARQL and TPF APIs are more akin to GraphQL, in the sense that they provide data from across multiple resources in the application domain. For example, they could allow access to data from multiple products or orders.
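To make the difference in granularity tangible, the sketch below shows the rough shape of a request to each kind of API for the same piece of knowledge; all URLs are hypothetical, and the exact parameter names of a TPF interface are in reality discovered through its hypermedia controls:

```ts
// The product inside order 5787, requested through three different API styles.

// LDP: fetch the document that the server chose to contain the order.
const ldpRequest = "https://shop.example/data/orders/5787";

// SPARQL: send a query that may span many documents on the server.
const sparqlRequest = "https://shop.example/sparql?query=" + encodeURIComponent(
  `SELECT ?product WHERE {
     <https://shop.example/data/orders/5787> <https://schema.org/orderedItem> ?product.
   }`);

// Triple Pattern Fragments: ask for all triples matching a single pattern.
const tpfRequest = "https://shop.example/fragments"
  + "?subject="   + encodeURIComponent("https://shop.example/data/orders/5787")
  + "&predicate=" + encodeURIComponent("https://schema.org/orderedItem");
```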
Different Web APIs have different characteristics. For instance, the resource partitioning of an LDP API might be chosen to coincide with the authorization structure, such that any user can see a document either fully or not at all. This same example can be more tricky for a SPARQL API, which combines data from multiple resources that might have different permissions attached to them.
Furthermore, the same knowledge graph can be exposed through multiple APIs—even multiple APIs of the same kind. Indeed, given the flexible structures of knowledge graphs, the same data could be available through multiple LDP document structures depending on the purpose. As a simple example, we could organize products by department, by vendor, or by occasion; and all of these organizations can coexist simultaneously. And of course, different kinds of APIs can also be mixed:
In decentralized data ecosystems, it’s not just the APIs that can be heterogeneous. Independently of the choice of API (LDP, SPARQL, TPF, TREE…) and the choices within an API (different resource structures in LDP, different fragmentations in TREE…), the same knowledge can be materialized in different data models, leading to another dimension of variation.
Abstracting away Web APIs in the app
The previous diagram leads us essentially back to square one: the application code is now making multiple requests to resources of different granularity. So there’s the temptation to either live with this situation and accept it as an inherent complexity of a decentralized API ecosystem, or to summon a GraphQL-like solution that can reduce the complexity of the app code to expressing a single data selection.
A quick attempt at inserting a GraphQL API reveals it is not straightforward:
There indeed is no single server to which we can attach the GraphQL API, because each server only exposes a part of the knowledge graph. (And before we start moving data around, let’s recall there are reasons why data needs to be in multiple locations. We aim to address the complex problem, not to artificially simplify the problem space.)
So if we cannot attach GraphQL to a server, can we somehow attach it to the client? Here it’s crucial to remember that client, server, and API abstractions can differ, so we can indeed provide the app with a GraphQL or SPARQL interface without needing an actual GraphQL or SPARQL endpoint into the data ecosystem. In fact, we can write the app code using any interface or abstraction we want. To enable this, we provide a client-side interface with the desired abstraction, such that app code is still written using GraphQL or whichever developer experience is preferred:
A reusable implementation of this client-side library then translates the abstraction into concrete HTTP requests to the relevant Web APIs of the correct servers, thereby relieving the app and its developer from interfacing directly with an ever-evolving myriad of servers and APIs. If this implementation is sufficiently flexible and robust, it can take signals from multiple sources, such as indexes and caches, to deliver results in more optimal ways than any individual developer could envisage.
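One existing engine that follows this pattern is Comunica, which evaluates SPARQL queries over a list of heterogeneous sources by issuing the necessary HTTP requests itself. The sketch below assumes its JavaScript query interface (exact API details vary across versions) and hypothetical vault URLs and vocabulary:

```ts
import { QueryEngine } from "@comunica/query-sparql";

// The app states *what* data it wants; the engine decides *how* to fetch it
// from whichever Web APIs the listed sources happen to expose.
const engine = new QueryEngine();
const bindingsStream = await engine.queryBindings(`
  SELECT ?movie WHERE {
    ?person <https://example.org/vocabulary#likes> ?movie.  # hypothetical predicate
  }`, {
  sources: [
    "https://vault.example/luka/movies",
    "https://vault.example/you/movies",
  ],
});
for (const bindings of await bindingsStream.toArray())
  console.log(bindings.get("movie")?.value);
```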
From API integration to data integration
The Web API ecosystem as we know it today has been designed for a very different Web than the Web we want for the future, as our analysis has revealed. Data will still be remote, but also decentralized: spread across multiple servers and different APIs.
Let us reflect on how our revised way of thinking about Web APIs addresses the 3 misconceptions introduced at the start of this blog post.
At the heart of our solution is the notion that client and API abstractions can differ. There is a need for a seamless developer experience in which developers don’t need to express how to fetch data, but only what data they want to work with. GraphQL as a language can still fulfill that need, even though GraphQL as a single remote Web API is impossible because of the spread of data across the network. Furthermore, when developers skip the step from the what to the how of data fetching, the libraries they interface with are at liberty to perform context-dependent optimizations that can improve efficiency beyond what individual developers and apps can anticipate. In other words, whereas developers can outsmart single APIs, a network of knowledge is so complex that it requires a systematic and reusable approach.
This brings us to the insight that every Web API is a query API or, more precisely, that the distinction is not useful. From this claim it follows that every Web client is a query client, a statement which several people have rejected in the past. The argument I often hear is that “my app doesn’t query; it just performs a series of requests to a data API and combines the results”. My answer is usually: congratulations, you’ve built a purpose-specific and hard-wired query engine. Which is fine; just know that it only accepts a single kind of query and will break when the data layout of the network changes. Today, you’re right to build apps this way, because you can in many cases easily predict an optimal series of requests that provides you with the data. In a fully decentralized data ecosystem, many of the assumptions you have hard-wired into the app will start changing or failing, and so, inevitably, will your app. This is why I propose a reusable client-side abstraction library that is shared across multiple apps, such that any update to this library propagates via a simple dependency upgrade. No app code changes, because the abstraction remains the same while its implementation improves.
Finally, we’ve done away with the myth of the universal API that could once and for all liberate us from the ever-increasing plurality. No single Web API is the final answer, because a decentralized landscape has many sources that each have their own constraints. We can, however, make agreements about data models across such APIs, knowing that a complex problem space demands solutions that withstand its challenges. While surely several existing tools could somehow be retrofitted, Linked Data-driven solutions are currently the only real candidates purposely designed for such environments. Fortunately, because of the decoupling of client and server needs, apps can pretend the world looks like JSON while underlying libraries handle the heavy lifting in RDF.
Our approach fundamentally shifts the role of an API from a goal to a means to an end. That is: the actual goal is for the client to form a partial knowledge graph based on reflections of a much larger decentralized knowledge graph. From that vantage point, APIs should be invisible and exchangeable tunnels for knowledge, whereas up to now, APIs have positioned themselves prominently in the foreground of app building.
This implies that we are evolving from a tradition of API integration to a new realm of data integration. Concretely, multi-API apps such as the ones needed in a decentralized landscape would nowadays be built by manually writing API requests and combining their responses. In contrast, the abstraction-based integration paradigm is one where not APIs, but data form the main building blocks. The underlying assumption is that data integration problems are easier to solve than API integration problems, and that APIs are just vessels for data. We see evidence for this in a long history of data integration research and development, in contrast to the relatively limited automated results we have achieved with cross-API integration in the absence of shared data models.
This abstraction needs to work for reads as well as writes, such that applications can use the same interface for both. Any UPDATE queries then need to be translated into PUT, POST, or PATCH requests to the right places, for which again some notion of the structure of the decentralized network will be needed.
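For instance, many Solid- and LDP-style servers accept PATCH requests carrying a SPARQL Update payload, so a client-side library could translate a write roughly as follows; the resource URL and vocabulary are hypothetical, and support for this media type varies per server:

```ts
// Sketch: translating a data-level update into an HTTP PATCH against the resource
// that, according to the library's knowledge of the network, should hold the data.
async function addLike(vaultResource: string, movie: string): Promise<void> {
  const update = `INSERT DATA {
    <#me> <https://example.org/vocabulary#likes> <${movie}>.
  }`;
  const response = await fetch(vaultResource, {
    method: "PATCH",
    headers: { "Content-Type": "application/sparql-update" },
    body: update,
  });
  if (!response.ok)
    throw new Error(`Updating ${vaultResource} failed with status ${response.status}`);
}
```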
Yet the proof of the pudding is in the eating. On the one hand, years of investment in Web API tooling have resulted in significant symptomatic relief, making it quite easy for developers to balance a small handful of APIs. On the other hand, data integration tooling for clients still needs to become much more practical to use. Techniques that are currently in a research phase, notably link-traversal-based query processing, which allows executing complex queries over arbitrary APIs, need to be made robust for real-world environments. Such technologies are crucial to evolve reusable interface abstractions from a proof of concept into accessible development tooling. Who knows, one day, querying the decentralized Web might be available as an actual browser API.
Clearly, we need to build an ecosystem of Web APIs and client-side interfaces in order to harness a decentralized ecosystem of data. As usual, the answer we seek lies in the kinds of questions we ask—