Personal data is being centralized at an unprecedented scale, with widely known and far-reaching consequences, as recent data scandals at companies such as Equifax and Facebook have shown. Decentralizing personal data storage allows people to take back control of their data, and Semantic Web technologies can facilitate data integration at runtime. However, processing decentralized data requires far more expensive algorithms, while at the same time, individual stores have far less processing power than large-scale data centers. This article presents a vision in which nodes in decentralized networks are incentivized to collaborate on data processing using a distributed ledger. By leveraging the collective processing capacity of all nodes, we can provide a sustainable alternative to the current generation of centralized solutions, and thereby put people back in control without compromising on functionality.
Over the past couple of years, we have witnessed an unprecedented centralization of personal data on the Web. Large-scale social media networks collect our information, with or without our conscious approval, and store and process it centrally in powerful data warehouses. People are asked to hand over control of their personal data in order to receive the services they want. For instance, on many social platforms, creating a photo album to share with family members involves uploading your photos to those platforms. Serious data scandals at companies such as Equifax and Facebook point to the inherent dangers of bringing such large amounts of data together in one place. Unsurprisingly, taking back control of our own data and obtaining trusted information are two of the three major challenges formulated by Web inventor Tim Berners-Lee in 2017.
Putting people back in control of their data means offering them the choice of storing that data wherever they want, independently of the applications they want to use. This is a core idea behind initiatives such as Solid: data is decentralized in the sense that everyone can store their data in their own space, and applications are decoupled from data because resources created with one application can be read and modified by another. An example can be seen in the figure below, where a social feed displays pictures and events created by other applications. Moreover, the social feed is constructed by querying data from multiple storage locations, without prior centralization. This way, people are free to choose their storage provider and their application provider independently, and can move their data away at will. They can give applications, other people, or companies access to specific parts of their data as they see fit, and revoke or restrict that permission at any given point in time. This results in true control over data.
Such wide cross-application interoperability without strong prior agreements can be achieved by encoding semantics along with data and queries, as Semantic Web technologies like RDF and SPARQL make possible. Data can be represented through a choice of widely used and custom ontologies. Every person is free to pick their own ontologies and, because of the explicit semantics, reasoning can bridge ontological differences. In other words, the decentralized aspects of Linked Data and the uncoordinated nature of RDFS and OWL ontologies are a good fit for such scenarios.
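As a minimal sketch of how reasoning can bridge such differences, the following Python fragment treats one store's property as a sub-property of another's, in the spirit of rdfs:subPropertyOf. The stores, triples, and mapping are all illustrative.

```python
# Minimal sketch: bridging two photo vocabularies at query time.
# One store uses foaf:img, another schema:image; a subPropertyOf-style
# mapping lets a single query see both. All data here is illustrative.

FOAF_IMG = "http://xmlns.com/foaf/0.1/img"
SCHEMA_IMAGE = "http://schema.org/image"

# Triples from two independent data stores, each with its own vocabulary.
store_a = [("alice", FOAF_IMG, "photo1.jpg")]
store_b = [("bob", SCHEMA_IMAGE, "photo2.jpg")]

# Reasoning rule: foaf:img is treated as a sub-property of schema:image.
SUBPROPERTY_OF = {FOAF_IMG: SCHEMA_IMAGE}

def saturate(triples):
    """Add inferred triples by rewriting sub-properties to their parents."""
    inferred = [(s, SUBPROPERTY_OF[p], o) for s, p, o in triples
                if p in SUBPROPERTY_OF]
    return triples + inferred

def query_images(triples):
    """Ask for all schema:image values, regardless of source vocabulary."""
    return sorted(o for s, p, o in triples if p == SCHEMA_IMAGE)

merged = saturate(store_a + store_b)
print(query_images(merged))  # both photos, despite different ontologies
```

A full reasoner would handle transitive sub-property chains and many more entailments; the point is only that the mapping lives in data, not in application code.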
Compared to centralized systems, decentralized systems face a double disadvantage: individual nodes are not only solving a harder problem, they are doing so with far fewer resources. On the one hand, algorithms for decentralized data processing require significantly more processing power and network bandwidth than their centralized counterparts, because of heterogeneity and distribution. On the other hand, each individual node in the network, such as a personal data store or a client device, possesses only a fraction of the processing power, storage, and bandwidth of a large-scale data center.
Furthermore, many of our data processing algorithms are not prepared for the scale of decentralization that personal data control entails. As a simple but realistic example, building the social media feed of a person with 500 friends requires, in the worst case, executing a query over 500 different data sources, where each of those friends stores their data at a different location. State-of-the-art federated SPARQL query engines target use cases of a dozen large datasets with entirely different data shapes. In contrast, decentralized data storage will require federated queries over hundreds of small datasets with highly similar shapes. Current summarization and source selection strategies, crucial to federated performance, are not designed to function under such conditions.
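To make the cost concrete, here is a minimal Python sketch of naive federation over stubbed in-memory pods. The pod contents and counts are illustrative, and in practice each call would be an HTTP round trip rather than a function call.

```python
# Sketch of the data-gathering problem: a social feed query must contact
# every friend's pod separately. Pods are stubbed in memory here; in a
# real deployment each fetch is a network request. Data is illustrative.

def make_pod(owner, posts):
    """A stub personal data store answering a single triple-pattern lookup."""
    triples = [(owner, "posted", p) for p in posts]
    def fetch(pattern):
        s, p, o = pattern
        return [t for t in triples
                if (s is None or t[0] == s) and (p is None or t[1] == p)]
    return fetch

pods = [make_pod(f"friend{i}", [f"post{i}"]) for i in range(500)]

def federated_feed(pods):
    """Naive federation: one request per pod, results merged client-side."""
    feed, requests = [], 0
    for fetch in pods:
        requests += 1
        feed.extend(fetch((None, "posted", None)))
    return feed, requests

feed, requests = federated_feed(pods)
print(len(feed), requests)  # 500 posts gathered via 500 separate requests
```

With multiple triple patterns per query, the request count multiplies further, which is exactly where summarization and source selection would need to step in.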
Finally, exposing personal data storage through query endpoints comes with challenges of its own. Federated SPARQL query engines are usually benchmarked in private networks. On the public Web, SPARQL endpoints have long suffered from availability problems, and regardless of whether the causes are technological or managerial, there is a non-negligible risk that such problems would manifest themselves with at least a part of personal data stores. While less expressive query interfaces have shown promise on public networks, as data becomes spread across an increasing number of nodes, we can expect to run into severe bandwidth usage and associated query slowdowns.
Decentralized networks have a particular asset: even though individual nodes have limited resources compared to large-scale server clusters, collectively, these nodes possess a far larger amount of computational power and bandwidth. Every single personal data store, as well as every client (computers, smartphones, tablets, …), brings its own CPU and network connection. The challenge is to harness that collective capacity.
Let us apply this insight to the data gathering phase of applications, which in a decentralized network amounts to federated query evaluation. A straightforward query to collect the recent activity of one’s contacts would involve the application sending subqueries to each of those contacts’ data stores. However, social media networks typically contain overlapping clusters of people, so any person on a contact list is likely to have a subset of that list as contacts too. Therefore, we can set up agreements along the lines of
“I will help you execute your query if you help me execute mine.” Then, instead of sending subqueries to, for instance, 500 contact nodes, we can delegate larger subqueries to 10 or 20 hubs in parallel. Instead of executing data gathering entirely at the server or the client, we thus dynamically redistribute query execution across the network.
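A minimal sketch of this redistribution, with illustrative contact and hub counts:

```python
# Sketch of query delegation: instead of contacting 500 contacts directly,
# the client splits the contact list among a handful of hubs, and each hub
# queries its own share. Hub count and contact names are illustrative.

def delegate(contacts, hubs):
    """Partition contacts round-robin over hubs; each hub gets one subquery."""
    shares = {hub: [] for hub in hubs}
    for i, contact in enumerate(contacts):
        shares[hubs[i % len(hubs)]].append(contact)
    return shares

contacts = [f"friend{i}" for i in range(500)]
hubs = [f"hub{i}" for i in range(20)]

shares = delegate(contacts, hubs)
print(len(shares))                           # client sends 20 subqueries...
print(max(len(s) for s in shares.values()))  # ...each covering 25 contacts
```

In practice a hub would be chosen because it already has many of those contacts in its own network, so its share of the work overlaps with queries it runs for itself.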
In order to reach sustainable collaborations, nodes need to be incentivized to act as contributors to the network. Otherwise, a node cannot be sure that, if it helps other nodes while idle, the others will return the favor when needed. However, when incentives are created, nodes also gain a reason for dishonest behavior, so we will need a trust mechanism to verify whether the work was completed correctly. In the absence of a centralized entity in the network, such incentives and trust need to be established through decentralized consensus. This is possible through distributed ledgers, which can keep track of the work performed and hence the right to receive help from others.
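As a rough illustration of the bookkeeping such a ledger would perform, the following sketch tracks work credits between nodes. A real network would reach consensus on these balances through the distributed ledger rather than a single in-memory structure, and the names and amounts are illustrative.

```python
# Sketch of incentive bookkeeping: helping others earns credit that can
# later be spent to request help. Consensus on these balances would come
# from a distributed ledger, not one trusted dict as shown here.

class WorkLedger:
    def __init__(self):
        self.balances = {}

    def record_work(self, worker, requester, cost):
        """The requester pays the worker for a completed subquery."""
        if self.balances.get(requester, 0) < cost:
            raise ValueError("requester lacks credit for this work")
        self.balances[requester] -= cost
        self.balances[worker] = self.balances.get(worker, 0) + cost

ledger = WorkLedger()
ledger.balances["alice"] = 10  # illustrative starting credit
ledger.record_work(worker="bob", requester="alice", cost=4)
print(ledger.balances)  # alice: 6, bob: 4
```

The credit check is what turns helping while idle into a rational strategy: a node with no recorded contributions cannot call on the network later.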
One category of distributed ledgers is blockchains, which require a proof in order to add something to the ledger. Whereas the popular Bitcoin ledger is known for an essentially meaningless computation as its proof-of-work, newer types of ledgers such as Filecoin introduce more meaningful purposes for this proof. With Filecoin, people can pay others to securely store and retrieve their data, and a proof-of-replication confirms that the data is there at all times. We would similarly need to develop a proof-of-query-results that captures both the work performed and the correctness of the results.
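The full design of such a proof is an open problem. As one speculative ingredient, a worker could commit to a hash digest of its answers together with their provenance, letting the requester spot-check a sampled answer against its original source. A hypothetical sketch, with illustrative answers and pod addresses:

```python
import hashlib
import json
import random

# Speculative ingredient of a proof-of-query-results: the worker commits
# to a digest over (answer, source) pairs, and the requester verifies the
# commitment and re-checks a random sample. All data is illustrative.

def result_digest(rows):
    """Order-independent hash commitment over (answer, source) pairs."""
    canonical = json.dumps(sorted(rows), separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Worker's claimed answers, each tagged with the pod it came from.
claimed = [("post1", "https://alice.example/pod"),
           ("post2", "https://bob.example/pod")]
digest = result_digest(claimed)

# Requester: check the commitment, then sample one answer to re-execute.
assert result_digest(claimed) == digest  # commitment matches
answer, source = random.choice(claimed)
print(f"re-execute at {source} to confirm {answer!r}")
```

Spot-checking only catches fabricated answers probabilistically, and proving that no answers were omitted is harder still, which is why this remains a research question rather than a solved mechanism.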
The figure below shows the architectural components of an individual node in the network. When a query arrives, the node determines what incentive it is willing to accept, and what incentives it is willing to pay others for subquery delegation. After possibly delegating some parts, and performing the remaining work itself, it maintains provenance of the data and generates a correctness proof of the results. Transactions are registered on the blockchain, such that all participants receive their reward. Some nodes might start performing preparatory work, such as precomputing partial results of common queries in the network, or locally caching other stores’ data to speed up querying.
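One way such a node could make its delegation decision, sketched with illustrative subqueries, costs, and thresholds:

```python
# Sketch of a node's delegation decision; subquery names, cost estimates,
# and thresholds are all illustrative. Cheap subqueries are kept locally
# up to a work budget; the rest are delegated if the price is acceptable.

def plan(subqueries, local_budget, max_payment):
    """Split (name, cost) subqueries into local work and paid delegations."""
    local, delegated = [], []
    spent = 0
    for name, cost in sorted(subqueries, key=lambda sq: sq[1]):
        if spent + cost <= local_budget:
            local.append(name)       # cheap enough to run ourselves
            spent += cost
        elif cost <= max_payment:
            delegated.append(name)   # worth paying another node for
        # otherwise this part of the query is declined
    return local, delegated

local, delegated = plan([("contacts", 2), ("photos", 5), ("events", 9)],
                        local_budget=4, max_payment=6)
print(local, delegated)  # ['contacts'] ['photos']
```

A real node would base these thresholds on current load, energy costs, and its ledger balance, and would renegotiate as market prices for subqueries shift.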
This idea goes beyond data marketplaces by, in essence, proposing a service marketplace between nodes in a decentralized semantic data network. While the example applies this to query execution over personal data, other kinds of services can be auctioned as well, such as reasoning to convert data between ontologies. All such applications rely on the principle that client CPUs are idle most of the time: by letting others use ours when we do not need them, we can rely on theirs at the moment we need them ourselves.
This proposal can have a strong impact on the scale at which we apply Semantic Web technologies, especially in the absence of clear business models. It opens up new directions in decentralized algorithms, and creates a connection between the Semantic Web and agent theory, as well as economic models for incentives. We must also pay attention to challenges such as privacy, which could perhaps be addressed through encryption. Most importantly, this vision sketches a Web-oriented future path to a Semantic Web for large and small players alike.
- Berners-Lee, T. (2017), “Three challenges for the Web, according to its inventor”, World Wide Web Foundation, March, available at: https://webfoundation.org/2017/03/web-turns-28-letter/
- Mansour, E., Sambra, A.V., Hawke, S., Zereba, M., Capadisli, S., Ghanem, A., Aboulnaga, A., et al. (2016), “A Demonstration of the Solid Platform for Social Web Applications”, in Companion Proceedings of the 25th International Conference on World Wide Web, pp. 223–226, available at: http://crosscloud.org/
- Buil-Aranda, C., Hogan, A., Umbrich, J. and Vandenbussche, P.-Y. (2013), “SPARQL Web-Querying Infrastructure: Ready for Action?”, in Proceedings of the 12th International Semantic Web Conference, available at: https://aran.library.nuigalway.ie/handle/10379/4545
- Verborgh, R., Vander Sande, M., Hartig, O., Van Herwegen, J., De Vocht, L., De Meester, B., Haesendonck, G., et al. (2016), “Triple Pattern Fragments: a Low-cost Knowledge Graph Interface for the Web”, Journal of Web Semantics, Vol. 37–38, pp. 184–206, available at: http://linkeddatafragments.org/
- Nakamoto, S. (2008), “Bitcoin: A Peer-to-Peer Electronic Cash System”, available at: https://bitcoin.org/
- Protocol Labs (2017), “Filecoin: A Decentralized Storage Network”, whitepaper, available at: https://filecoin.io/
- Grubenmann, T., Dell’Aglio, D., Bernstein, A., Moor, D. and Seuken, S. (2017), “Decentralizing the Semantic Web: Who will pay to realize it?”, in Proceedings of the Workshop on Decentralizing the Semantic Web, available at: http://ceur-ws.org/