For a domain with a strong focus on unambiguous identifiers and meaning, the Semantic Web research field itself has a surprisingly ill-defined sense of identity. Started at the end of the 1990s at the intersection of databases, logic, and the Web, and influenced along the way by every major tech hype from Big Data to machine learning, our research community needs to look in the mirror to understand who we really are. The key question amid all possible directions is pinpointing the important challenges we are uniquely positioned to tackle. In this article, we highlight the community’s unconscious bias toward addressing the Paretonian 80% of problems through research, while dismissing the remaining 20% as trivial engineering.
Re-reading the original Semantic Web vision from 2001, we immediately notice where the predictions went wrong. Far less obvious are those that came true; they have become givens in today’s world, part of the new normal that now forms our everyday reality. We have forgotten the era ruled by the Nokia 3310, whose monochrome screen’s resolution covers only a fraction of a modern app icon, years before many people had Internet access at home.
The roots of the Semantic Web can be traced back to the initial Web proposal, whose opening diagram presents what we now refer to as a knowledge graph, an early glimpse into subject–predicate–object triples rather than the URL–HTTP–HTML triad that would ultimately become the Web. That same Web is currently facing severe threats, having rapidly gone from a utopian harbor of permissionless innovation to a potentially dystopian environment controlled by only a handful of dominant actors. The Semantic Web seems strangely unaffected by most of this, until we realize that the Web and the Semantic Web silently split ways shortly after the first RDF specifications appeared.
Nonetheless, semantic technologies are regularly put forward as a means of tackling some of the Web’s most pressing challenges, such as combating disinformation or fueling its re-decentralization movement. Meanwhile, the Semantic Web research community is facing its own battles with the latest technological hypes, doubting between defending its own relevance next to Big Data, machine learning, and blockchain, or absorbing these technologies into its own narrative.
Granted, cracks have started creeping into these other technologies, too. Maybe Big Data is not limitless in practice if technical capabilities scale faster than the human and legal processes for ethical data management, and we do need to link data across distributed sources instead of unconditionally aggregating them. Perhaps there are problems that machine learning can never solve reliably, and the safety provided by first-order logic proofs is irreplaceable for crucial decisions. And possibly it will turn out that decentralized consensus only touches a small part of all use cases, and that disagreement under the “anyone can say anything about anything” flag provides a more workable model of the virtual world.
So when we are not riding others’ waves, what is it that unites the Semantic Web research community? What makes us truly “us”? What are the semantics we can attach to our own identity? Having emerged at the intersection of the Web, databases, and logic, we have since become disconnected from these domains, and our awareness of them sometimes appears frozen in time. We tend to disregard that the Web from which we spun off is no longer the same as it was, and that different approaches are required today. We have held on to XML and RPC longer than most, confusing the ends with the means that were supposed to achieve them.
The main danger within an existential crisis is the risk of losing our connection to the reality from which we originate. The philosophy of our community seems to align with Alan Kay’s quote that “the best way to predict the future is to invent it”. We build and we investigate, expecting the future to wrap its arms around the creations we are spawning. In this vision article, we rather embrace John Perry Barlow’s inversion of the quote: “the best way to invent the future is to predict it”. Looking back at the dreams of the past and recombining them with the aspirations of the present, what are the crucial missing pieces that require our unique dedication as Semantic Web scholars? As in the original Semantic Web article, the topics that have long been considered trivial might very well be the hardest ones in practice.
The term Semantic Web evidently suggests adding semantics to Web content to improve comprehension by machines. However, after two decades of debate, we still seem uncertain about exactly how much semantics is in fact useful. The writing on the wall is the disconnect between the data that are published and the applications that should consume them: the call for Linked Data has brought us the eggs, but the chickens that were supposed to hatch them are still missing.
To intertwine data with meaning, we largely rely on RDF for exchange and interoperability. But what is really there is only factual knowledge in a (hyper)graph structure, with URIs to uniquely identify terms. The intended meaning of the data is captured in ontologies expressed with knowledge representation languages such as RDFS or OWL, and can be discovered, for example, through dereferencing. In that sense, data in RDF actually refer to their semantics rather than contain them. And distributing those semantics has turned out to be significantly harder than distributing data.
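The fact that RDF data refer to their semantics rather than contain them can be sketched in a few lines of plain Python. The FOAF and RDFS URIs below are real, but the `web` dictionary is a hypothetical stand-in for HTTP dereferencing:

```python
# The data itself is just graph structure plus identifiers:
# plain (subject, predicate, object) tuples of URIs.
data = [
    ("http://example.org/alice",
     "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/bob"),
]

# Hypothetical stand-in for the Web: dereferencing a vocabulary term's URI
# yields the ontology statements that carry its intended meaning.
web = {
    "http://xmlns.com/foaf/0.1/knows": [
        ("http://xmlns.com/foaf/0.1/knows",
         "http://www.w3.org/2000/01/rdf-schema#domain",
         "http://xmlns.com/foaf/0.1/Person"),
    ],
}

def dereference(uri):
    """Simulate an HTTP GET on a term's URI to discover its semantics."""
    return web.get(uri, [])

subject, predicate, obj = data[0]
semantics = dereference(predicate)  # the meaning lives elsewhere;
print(semantics)                    # the data merely linked to it
```

Note how nothing in `data` states what `foaf:knows` means; consumers only obtain that meaning if the lookup succeeds, which is precisely the distribution problem.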
Early efforts were heavily devoted to the development of ontology engineering, and understandably so. Having generic software to automatically act on a variety of independent datasets was what made the Semantic Web vision so appealing. Once domain knowledge had been formalized, it could be applied to represent facts, upon which reasoners could automatically derive new facts. Yet once we took those endeavors to the Web, it became apparent that we had missed the broader practical implications of the chosen direction. As semantics are always consensus-based, domain models are only as valuable as the scope of the underlying consensus. Hence, their usage cannot be guaranteed by parties that were not involved in, or disagree with, the consensus. Often, these parties resort to mitigation strategies that disregard the semantics settled in description logic, such as selectively reusing properties and classes when publishing data, or freely reinterpreting the semantics through programming when consuming data.
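The formalize-then-derive pipeline can be sketched minimally, with illustrative class names and a single hand-rolled RDFS-style rule standing in for a full reasoner:

```python
# A toy forward-chaining reasoner: given a formalized domain model
# (rdfs:subClassOf statements) and asserted facts (rdf:type statements),
# derive the implied facts to a fixpoint.

SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

ontology = {
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
}
facts = {("ex:rex", TYPE, "ex:Dog")}

def infer(facts, ontology):
    """Propagate rdf:type along rdfs:subClassOf until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (s, _, cls) in {f for f in derived if f[1] == TYPE}:
            for (sub, _, sup) in ontology:
                if cls == sub and (s, TYPE, sup) not in derived:
                    derived.add((s, TYPE, sup))
                    changed = True
    return derived

print(infer(facts, ontology))
# derives that ex:rex is also an ex:Mammal and an ex:Animal
```

This works splendidly when everyone shares the ontology; the text's point is that on the Web, parties outside the consensus simply ignore or reinterpret such rules.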
Core frameworks such as RDF and OWL have also frequently been labeled as “by academics, for academics” because of their perceived complexity by developers. Due to a lack of deeper understanding and an inability to connect with existing development practice, ontologies are in practice often dumbed down to vocabularies, a term that is used more and more, basically stripping the data of semantics and once again leaving interpretation up to individual applications. The backing of Schema.org by the major search engines is illustrative of this fact, as is the increasing popularity of the shape languages SHACL and ShEx. They cover an important gap between data in the wild and applications: applications need to know what data to expect, which was one of the things neglected by our fixation on description logics.
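What shapes add can be illustrated with a small sketch. The shape below is a hypothetical stand-in for real SHACL or ShEx syntax, but it captures the core idea of telling applications what data to expect:

```python
# An illustrative "shape": nodes of the target class must carry
# the listed properties, independent of any ontological semantics.
shape = {
    "targetClass": "foaf:Person",
    "required": ["foaf:name", "foaf:mbox"],
}

def validate(node, triples, shape):
    """Check that a node of the shape's target class has all required properties."""
    if (node, "rdf:type", shape["targetClass"]) not in triples:
        return True  # shape does not apply to this node
    present = {p for (s, p, _) in triples if s == node}
    return all(req in present for req in shape["required"])

triples = {
    ("ex:alice", "rdf:type", "foaf:Person"),
    ("ex:alice", "foaf:name", '"Alice"'),
}
print(validate("ex:alice", triples, shape))  # False: foaf:mbox is missing
```

Unlike description-logic axioms under the open-world assumption, such a constraint can fail, which is exactly what an application consuming unknown data needs.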
The paradox between the use of semantics and the effort of providing them has cultivated a heterogeneous and underspecified Web of Data. Practical implementation and usability have too often been handwaved away in favor of deep theory. As depicted in the figure below, a strong implicit assumption lies dormant in a lot of our work: that solving the hard 80% is where the research happens, and that the remaining 20% is simple engineering to take that research from theory to practice. However, is what we often dismiss as “engineering” really just “engineering”? Given the considerable problems arising when we try to deploy semantics at Web scale, as scientists, we might want to validate that hypothesis.
What good is inference by reasoning if the ontologies cannot be found or are outdated? What good is having unique identifiers for concepts when stating equality with owl:sameAs is inadequate for applications? How realistic is federated query evaluation if queries in practice have to be written for specific endpoints, because reasoning is only ever switched on in theory? Meanwhile, enterprises and everyday developers are starting to give up on formal semantics, and we risk the baby being thrown out with the bathwater. That is the logical result if we leave the completion of the bigger Semantic Web picture to companies with a deadline. Their enthusiastic endorsement of shapes, for instance, could eventually suppress the practice of semantics in data. We as researchers understand that “a little semantics goes a long way” does not necessarily mean that less semantics is better than more. But exactly how much is too much for the Web? Only through research can we find out.
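The owl:sameAs problem is easy to reproduce in miniature. Because sameAs is symmetric and transitive, identity propagates through the data like a union-find structure, and a single erroneous link (the URIs below are illustrative) conflates entire clusters:

```python
# Union-find over identity links: merging is cheap, but so is merging wrongly.
parent = {}

def find(x):
    """Return the representative of x's identity cluster (with path halving)."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def same_as(a, b):
    """Assert owl:sameAs between a and b, merging their clusters."""
    parent[find(a)] = find(b)

same_as("dbpedia:Paris", "wikidata:Q90")    # a correct identity link
same_as("wikidata:Q90", "ex:Paris_Hilton")  # one erroneous link...

# ...and now the city and the celebrity share a single identity:
print(find("dbpedia:Paris") == find("ex:Paris_Hilton"))  # True
```

Applications therefore need weaker, context-dependent notions of identity than the one owl:sameAs formally provides.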
What arguably sets us apart besides semantics is, well, the Web. In contrast to relational or other databases, our domain of discourse is infinite and unpredictable on multiple levels. Because of the open-world assumption, no single RDF document contains the full truth. Even worse, any sufficiently large collection of Web documents will contain contradictions that, under classical logic, allow us to derive any truth: ex Tela quodlibet. Not only can anything be proven from a contradiction; in these days of fake news and dubious political advertising, it has never been easier to find self-consistent documents online in support of virtually any given conclusion or its opposite.
The Web is what we deliver as an answer to any Linked Data skeptic, as an irrefutable argument that all of our perceived or actual complexity is justified, because we are dealing with problems that span the entire virtual address space of the globe and, in fact, the universe. The Web is the reason why our ontologies are spread all over the place, why the prefix expansion of the OWL ontology counts 30 characters, why FOAF is forever stuck at version 0.9 and the Dublin Core vocabulary at 3 different ones, and why we cannot all just use Schema.org. The Web is why Open Data exists, why our public SPARQL endpoints are down 1.5 days a month, why stable vocabularies suddenly disappear. Everything we do, we do it the way we do it, because the Web sets the rules such that anything more simple or logical would not do. If the Web is such a self-explanatory answer to the existence of our discipline, then why is it so conspicuously absent from our experiments?
We are not even talking here about taking our scholarly communication to the Web; let that be the crusade of the dogfooders, to whom we dedicate a later section. We mean to say that “it works in our university basement” has become an acceptable and applauded narrative. Goodhart’s law warns that when a measure becomes a target, it ceases to be a good measure; except that we can strongly question whether non-Web environments, however pure and controlled, have ever fulfilled the role of good measure providers in the first place.
No, we cannot safely assume that the owl:sameAs predicate has consistently been used in accordance with at least one of its several meanings. No, we cannot assume that SPARQL endpoints will be available or even return valid RDF. Yes, people will use the same URL to refer to different things, and obviously different URLs to point to the same things (hence schema:sameAs). Yes, our precious data sets unnecessarily use different ontologies, so we have to switch on reasoning, even though that suddenly makes our results worse than the state of the art.
We are all aware that the Web is a good platform for data publication, but a pretty bad platform for data consumption. Yet that is exactly the reason to not ignore the 20% any longer, but to embrace the unique challenges and opportunities it brings. Crucial and sometimes counterintuitive insights arise when Web-based techniques are applied to research problems previously only studied in isolation. As an example, link-traversal-based query execution taught us that SPARQL queries can exist separately from specific interfaces to evaluate them, which in turn are independent from back-ends. Understanding that some of our standardized protocols do not adhere to the constraints of the Web’s underlying REST architectural style allows us to design interfaces with better scalability properties, which might perform worse in closed environments but yield desirable properties on the public Web. Taking this even further, we can wonder whether the default semantics of simple SPARQL queries are tailored too much to closed databases as opposed to the Web we publicly claim to target.
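The idea behind link-traversal-based query execution can be sketched as follows; the `documents` dictionary is a hypothetical stand-in for HTTP lookups of dereferenceable URIs, and the triple-pattern matcher is deliberately minimal:

```python
# A tiny "Web": each dereferenceable URI yields a document of triples.
documents = {
    "ex:alice": [("ex:alice", "foaf:knows", "ex:bob")],
    "ex:bob":   [("ex:bob", "foaf:name", '"Bob"')],
}

def traverse(seed, max_docs=10):
    """Breadth-first link traversal: collect triples from reachable documents."""
    triples, queue, seen = [], [seed], set()
    while queue and len(seen) < max_docs:
        uri = queue.pop(0)
        if uri in seen or uri not in documents:
            continue
        seen.add(uri)
        for t in documents[uri]:
            triples.append(t)
            # follow every URI mentioned in the triple (illustrative prefix test)
            queue.extend(x for x in t if x.startswith("ex:"))
    return triples

def match(triples, pattern):
    """Evaluate one triple pattern; None acts as a variable."""
    return [t for t in triples
            if all(p is None or p == c for p, c in zip(pattern, t))]

print(match(traverse("ex:alice"), (None, "foaf:name", None)))
```

The query here is evaluated against whatever data traversal discovers, decoupling the query from any specific endpoint or back-end, which is exactly the separation the technique revealed.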
We should, however, not become too purist in our judgment; an important aspect of scientific studies is their ability to zoom in on the isolated contribution of specific factors. Several valid use cases for non-Web RDF applications exist, so not every single undertaking has to embody the omnipotent role ascribed to the mythical Semantic Web agent. Nonetheless, as a community, we want to ensure we combine the 80% with the 20% sufficiently often, such that we obtain at least a more adequate impression of the potentially huge number of research questions hiding in plain sight.
There is no question that the age of deep learning is very much upon us. As the latest branch of machine learning to mature, deep learning has spawned numerous research efforts, techniques, and even production-ready applications, elevating the state of AI once again. Semantic Web research has not been immune to the siren song, and has started exploiting RDF knowledge bases as fertile soil for deep learning and other machine learning approaches. The popular topics that emerged, such as embeddings and concept learning, enable models to be trained over knowledge bases in order to complete and extend the semantic information they contain. Developing such approaches is crucial to reduce the high manual effort currently required for participating in the Semantic Web.
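To make the embedding idea concrete, here is the scoring intuition behind translation-based knowledge graph embedding models such as TransE, with tiny hand-picked two-dimensional vectors for illustration (real systems learn high-dimensional vectors from data):

```python
# TransE intuition: entities and relations are vectors, and a triple
# (h, r, t) is plausible when h + r is close to t.

def score(h, r, t):
    """Negative Euclidean distance of h + r from t; higher is more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Illustrative hand-picked embeddings (a trained model would learn these).
emb = {
    "ex:berlin":    (1.0, 0.0),
    "ex:germany":   (1.0, 1.0),
    "ex:paris":     (0.0, 0.0),
    "ex:capitalOf": (0.0, 1.0),
}

plausible = score(emb["ex:berlin"], emb["ex:capitalOf"], emb["ex:germany"])
implausible = score(emb["ex:paris"], emb["ex:capitalOf"], emb["ex:germany"])
print(plausible > implausible)  # True
```

Ranking candidate triples by such a score is what lets these models propose completions for missing facts in a knowledge base.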
Semantic technologies were originally considered part of the AI family and in essence still are. Inference of logical consequences from data can drive a machine’s autonomy. Yet in the shadow of advanced machine learning, the “cool kids” perceive us as apostles of an old, inflexible, and outdated rule-based approach. However, maturity in the machine learning field has also uncovered the gaps where semantic technology can prove its relevance. Use cases that hinge on decision accuracy, such as healthcare or privacy enforcement, profit from the exact outcomes of first-order logic. Furthermore, the ability of some semantic reasoners to explain their actions through proofs is a trait much desired in the primarily black-box machine learning methods.
As both angles have their merits, the future is very likely hybrid, and we need to further explore complementary roles. For instance, semantics and inference can pre-label data to improve the accuracy of models. Or, post-execution explainability could be achieved by reasoning over semantic descriptions of nodes. In the area of digital assistants, such as the promising work with Almond and Snips, declarative AI can add a human representation of the world to representations trained on raw data. This would fill knowledge gaps of current assistants such as Siri and Alexa, increase their associative ability, and eventually improve the authenticity of their interactions. Some more fundamental questions also need to be answered, such as how to train a model under the open-world assumption. Fitting strategies exist, but there are many more unknowns.
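The pre-labeling pattern, for instance, can be sketched with an illustrative subclass hierarchy that expands sparse annotations into richer label sets before any model sees them:

```python
# Hybrid pattern: ontological knowledge pre-labels training data.
# The class names are illustrative, not from any real ontology.
hierarchy = {            # child class -> parent class
    "ex:Melanoma": "ex:SkinCancer",
    "ex:SkinCancer": "ex:Cancer",
}

def expand_labels(label):
    """Walk up the subclass hierarchy to derive all implied labels."""
    labels = [label]
    while label in hierarchy:
        label = hierarchy[label]
        labels.append(label)
    return labels

sample = {"features": [0.3, 0.7], "label": "ex:Melanoma"}
sample["labels"] = expand_labels(sample["label"])
print(sample["labels"])  # ['ex:Melanoma', 'ex:SkinCancer', 'ex:Cancer']
```

A model trained on the expanded labels can then generalize across levels of the hierarchy, even when fine-grained annotations are rare.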
Semantic inference and first-order logic might lead to less spectacular conclusions, but they will nonetheless be crucial to advanced machine learning systems. Here, too, it is important to solve the engineering side of things. Almond and Snips are directly usable by developers, who, through testing, discover further challenges. When machine learning solutions “just work”, developers do not need to know what is inside; that is the result of research, not just engineering. Getting rid of the “trivial” problems with semantic inference hopefully means delivering similarly spectacular results, on the Web. Maybe this is the better way to position ourselves in one of the next waves to come: reinforcement learning.
Ultimately, all of this shows that we need to guard ourselves against conducting research in a vacuum. Not all science requires practical purposes, but if we only design solutions for problems that will never even exist unless the Semantic Web takes off further, then we should at least consider prioritizing those urgent problems that are blockers to adoption. Part of our hesitance might be that, having fought hard for recognition as a scientific domain, we are afraid of being pushed back into the corner of engineering. Our conferences and journals tend to have a high threshold for what qualifies as research, with a strong focus on qualitative experimentation. While high thresholds are commendable in general, they also result in a higher percentage of false negatives, both in submitted works that never get accepted, and in stellar research ideas that never materialize because fear of such rejections encourages safer bets.
We tend to zoom in on very focused, often incremental research problems, which do bring us progress. Yet again, Pareto’s law lurks around the corner: we consider the core 80% of a hard problem and assume that the remaining 20% is a non-issue. Converting technological research into digestible chunks for developers is considered trivial and outside of our scientific duty. Everything that reeks of pure engineering is shunned.
However, most researchers in our community have not built a single Semantic Web app, so we cannot pretend to understand the insides of that 20%. It is impossible to tell whether the remainder is trivial or not, and many of the experiences above reveal that some of the most complex research problems appear exactly there. But how would we know? We do not get in touch with some of the most pressing issues, because we have already ruled them out as trivial, and then we wonder about the low adoption of the otherwise excellent 80% research.
Since the Semantic Web started, Web development has massively changed. Many apps are now built by front-end developers, for whom Semantic Web technologies are inaccessible; instead, they have embraced pragmatic alternatives such as GraphQL and schema stitching. Persisting on the pragmatic road, which they initially took because our alternative was deemed too complex, they will ironically end up with something equally difficult but less powerful, because they did not have the same forethought. Even more ironic is that we remain stuck in that forethought and wonder when adoption is coming. We compensate by drawing such technologies back into the research domain, but gloss over a crucial point: bringing SPARQL levels of expressivity to front-end developers is in fact a research problem.
Designing an appropriate Linked Data developer experience is so challenging because, while regular apps are hard-coded against one specific, well-known back-end, Linked Data apps need to expect the unexpected as they interface with heterogeneous data from all over the Web. Building such complex behavior involves a sophisticated integration of many branches of our research, which requires designing and implementing complex program code. Exposing that complex behavior through simple primitives, as is needed for front-end developers, requires automating the generation of such code, likely at runtime. These endeavours have not been attempted at the research level, let alone would they be ready for implementation by skilled engineers.
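As a sketch of what such simple primitives could look like, the proxy below translates attribute accesses into graph lookups at runtime, in the spirit of path-based Linked Data APIs; all names and data are illustrative:

```python
# A toy graph: (subject, predicate, object) tuples.
triples = {
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", '"Bob"'),
}

class Node:
    """Proxy whose attribute accesses become triple-pattern lookups."""
    def __init__(self, uri):
        self.uri = uri
    def __getattr__(self, prop):
        # Resolve the property name against the graph at runtime
        # (a real system would also resolve vocabularies and follow links).
        for (s, p, o) in triples:
            if s == self.uri and p == "foaf:" + prop:
                return Node(o) if o.startswith("ex:") else o
        raise AttributeError(prop)

alice = Node("ex:alice")
print(alice.knows.name)  # one expression instead of query plumbing
```

Making this robust against the real Web, with heterogeneous vocabularies, unreliable endpoints, and runtime code generation, is precisely the research problem the paragraph describes.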
This gap between current research solutions and practice means that much of our work cannot be directly applied. Some find it acceptable that nothing works in practice yet. Unfortunately, such a lax attitude leaves us with an all too comfortable hiding spot: why would my research have to work in the real world if others’ does not? As a direct consequence of this line of thought, we cannot meaningfully distinguish research that could eventually work from research that never will.
Until we have examined whether or not something is trivial, we should not make any implicit assumptions. We have been wrong before. Perhaps we should consider scoring research works on the 80/20 scale, making explicit how much of each side a contribution actually addresses.
Not only do many of us lack Semantic Web experience as app developers; our even bigger gap is experience as users. Although a significant amount of our communication (not least toward funding bodies) consists of technological evangelism, we rarely succeed in leveraging our own technologies. If we keep on finding excuses for not using our own research outcomes, how can we convince others? The logicians among us will undoubtedly recognize the previous statement as a tu quoque fallacy: our reluctance to dogfood is factually independent of our technology’s claim to fame. Yet if all adoption were solely based on sound reasoning, our planet would look very different today. Credibility and fairness aside, we are not in the luxury position to tell others to “do as I say, not as I do”. The burden of proof is entirely upon ourselves, and the required evidence extends beyond the scientific.
In addition to being an instrument of persuasion, dogfooding addresses a more fundamental question: which parts of our technology are ready for prime time, and which parts are not? By becoming users of our own technologies, we will gain a better understanding of the elusive 20% that clearly, had it actually been so trivial, would already have been taken care of. Never underestimate the power of frustration: feeling frustrated about unlocked potential is what prompted Tim Berners-Lee to invent the Web. Only by managing almost his entire life with Linked Data is he able to keep a finger on the Semantic Web’s pulse and his eyes on its Achilles’ heel.
If we similarly had a deeper understanding of real-world Linked Data flows and obstacles, would we not be in a better position to make a difference? We might want to address concrete problems happening today, in addition to targeting those that will hopefully arise in the future.
After almost two decades, the Semantic Web should step out of its identity crisis and into adolescence. In search of a target market for adoption, research in semantic technologies has ridden others’ waves all too often, in an attempt to assimilate with every use case but our own. This has brought us as a community into a disconnect with the place where we can make a difference: the Web, where new technologies still emerge every day.
A Semantic Web has data and semantics intertwined, yet distributing those semantics has proven harder than sharing data. Can we focus on the practice and implications of sharing and preserving semantics? If not, we might leave the original vision to die at the hands of a more short-term and pragmatic agenda. No doubt, the need for full-scale data integration will eventually reappear, possibly reinventing the solutions and methods we are working on today. But that realization might take another decade to surface.
The Web might not be our only target market, but it is the one that sets us apart. Yet it does not pop up in the average “threats to validity” section of our papers.
In order to succeed, we will need to hold ourselves to a new, significantly higher standard. For too many years, we have expected engineers and software developers to take up the remaining 20%, as if they were the ones who needed to catch up with us. Our fallacy has been our insistence that the remaining part of the road solely consisted of code to be written. We have been blind to the substantial research challenges we would surely face if we only took our experiments out of our safe environments and into the open Web. It turns out that the engineers and developers have moved on and created their own solutions, bypassing many of the lessons we have learned, because we stubbornly refused to acknowledge the amount of research needed to turn our theories into practice. Since we seemingly did not want the Web, more pragmatic people took over.
- Berners-Lee, T., Hendler, J. and Lassila, O. (2001), “The Semantic Web”, Scientific American, Vol. 284 No. 5, pp. 34–43, available at: https://www.scientificamerican.com/
- Berners-Lee, T. (1989), Information Management: A Proposal, CERN, available at: https://www.w3.org/History/1989/proposal.html
- Berners-Lee, T. (2017), “Three challenges for the Web, according to its inventor”, Web Foundation, 12 March, available at: https://webfoundation.org/2017/03/web-turns-28-letter/
- Berners-Lee, T. (2018), “The Web is under threat. Join us and fight for it”, Web Foundation, 12 March, available at: https://webfoundation.org/2018/03/web-birthday-29/
- Berners-Lee, T. (2019), “30 years on, what’s next #ForTheWeb?”, Web Foundation, 12 March, available at: https://webfoundation.org/2019/03/web-birthday-30/
- Verborgh, R. (2019), “Re-decentralizing the Web, for good this time”, in Seneviratne, O. and Hendler, J. (Eds.), Linking the World’s Information: Tim Berners-Lee’s Invention of the World Wide Web, ACM, available at: https://ruben.verborgh.org/
- Shirky, C. (2003), “The Semantic Web, Syllogism, and Worldview”, available at: http://www.shirky.com/writings/herecomeseverybody/semantic_syllogism.html
- Schmachtenberg, M., Bizer, C. and Paulheim, H. (2014), “Adoption of the Linked Data Best Practices in Different Topical Domains”, in Mika, P., Tudorache, T., Bernstein, A., Welty, C., Knoblock, C., Vrandečić, D., Groth, P., et al. (Eds.), The Semantic Web – ISWC 2014, Springer, pp. 245–260, available at: https://link.springer.com/chapter/10.1007/978-3-319-11964-9_16
- Beek, W., Raad, J., Wielemaker, J. and van Harmelen, F. (2018), “sameAs.cc: The Closure of 500M owl:sameAs Statements”, in Gangemi, A., Navigli, R., Vidal, M.-E., Hitzler, P., Troncy, R., Hollink, L., Tordai, A., et al. (Eds.), The Semantic Web, Springer, pp. 65–80, available at: https://link.springer.com/chapter/10.1007/978-3-319-93417-4_5
- Hendler, J. (2007), “The dark side of the Semantic Web”, IEEE Intelligent Systems, IEEE, Vol. 22 No. 1, pp. 2–4, available at: https://ieeexplore.ieee.org/
- Buil-Aranda, C., Hogan, A., Umbrich, J. and Vandenbussche, P.-Y. (2013), “SPARQL Web-Querying Infrastructure: Ready for Action?”, in Alani, H., Kagal, L., Fokoue, A., Groth, P., Biemann, C., Parreira, J.X., Aroyo, L., et al. (Eds.), The Semantic Web – ISWC 2013, Springer, pp. 277–293, available at: http://link.springer.com/chapter/10.1007/978-3-642-41338-4_18
- Capadisli, S. (2019), Decentralised and Socially-Aware Scholarly Communication, PhD thesis, University of Bonn.
- Halpin, H., Hayes, P.J., McCusker, J.P., McGuinness, D.L. and Thompson, H.S. (2010), “When owl:sameAs Isn’t the Same: An Analysis of Identity in Linked Data”, in Patel-Schneider, P.F., Pan, Y., Hitzler, P., Mika, P., Zhang, L., Pan, J.Z., Horrocks, I., et al. (Eds.), The Semantic Web – ISWC 2010, Springer, pp. 305–320, available at: https://www.w3.org/2009/12/rdf-ws/papers/ws21
- van Harmelen, F. (2011), “10 Years of Semantic Web: does it work in theory?”, available at: https://www.cs.vu.nl/~frankh/
- Hartig, O., Bizer, C. and Freytag, J.-C. (2009), “Executing SPARQL Queries over the Web of Linked Data”, in Bernstein, A., Karger, D.R., Heath, T., Feigenbaum, L., Maynard, D., Motta, E. and Thirunarayan, K. (Eds.), The Semantic Web - ISWC 2009, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 293–309, available at: http://olafhartig.de/
- Verborgh, R., Vander Sande, M., Hartig, O., Van Herwegen, J., De Vocht, L., De Meester, B., Haesendonck, G., et al. (2016), “Triple Pattern Fragments: a Low-cost Knowledge Graph Interface for the Web”, Journal of Web Semantics, Vol. 37–38, pp. 184–206, available at: http://linkeddatafragments.org/
- Schätzle, A., Przyjaciel-Zablocki, M., Neu, A. and Lausen, G. (2014), “Sempala: Interactive SPARQL Query Processing on Hadoop”, in Mika, P., Tudorache, T., Bernstein, A., Welty, C., Knoblock, C., Vrandečić, D., Groth, P., et al. (Eds.), The Semantic Web – ISWC 2014, Springer, pp. 164–179, available at: https://link.springer.com/chapter/10.1007/978-3-319-11964-9_11
- Isaac, A. and Haslhofer, B. (2013), “Europeana Linked Open Data – data.europeana.eu”, Semantic Web Journal, IOS Press, Vol. 4 No. 3, pp. 291–297, available at: http://www.semantic-web-journal.net/system/files/swj297_1.pdf
- Verborgh, R. (2018), “One flew over the cuckoo’s nest – The role of aggregation on a decentralized Web”, available at: https://rubenverborgh.github.io/
- Vrandečić, D. and Krötzsch, M. (2014), “Wikidata: A Free Collaborative Knowledge Base”, Communications of the ACM, Vol. 57, pp. 78–85, available at: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42240.pdf
- Cimiano, P., Corcho, O., Presutti, V., Hollink, L. and Rudolph, S. (Eds.). (2013), The Semantic Web: Semantics and Big Data, Springer, available at: https://link.springer.com/book/10.1007/978-3-642-38288-8
- Mansour, E., Sambra, A.V., Hawke, S., Zereba, M., Capadisli, S., Ghanem, A., Aboulnaga, A., et al. (2016), “A Demonstration of the Solid Platform for Social Web Applications”, in Companion Proceedings of the 25th International Conference on World Wide Web, pp. 223–226, available at: http://crosscloud.org/
- Wang, Q., Mao, Z., Wang, B. and Guo, L. (2017), “Knowledge graph embedding: A survey of approaches and applications”, IEEE Transactions on Knowledge and Data Engineering, IEEE, Vol. 29 No. 12, pp. 2724–2743, available at: https://persagen.com/files/misc/Wang2017Knowledge.pdf
- Bühmann, L., Lehmann, J. and Westphal, P. (2016), “DL-Learner – A framework for inductive learning on the Semantic Web”, Journal of Web Semantics, Elsevier, Vol. 39, pp. 15–24, available at: https://www.sciencedirect.com/science/article/pii/S157082681630018X
- Halpin, H. (2004), “The Semantic Web: The origins of artificial intelligence redux”, in Third International Workshop on the History and Philosophy of Logic, Mathematics, and Computation (HPLMC-04 2005).
- Verborgh, R., Arndt, D., Van Hoecke, S., De Roo, J., Mels, G., Steiner, T. and Gabarro, J. (2017), “The pragmatic proof: Hypermedia API composition and execution”, Theory and Practice of Logic Programming, Cambridge University Press, Vol. 17 No. 1, pp. 1–48, available at: https://arxiv.org/
- Campagna, G., Ramesh, R., Xu, S., Fischer, M. and Lam, M.S. (2017), “Almond: The Architecture of an Open, Crowdsourced, Privacy-Preserving, Programmable Virtual Assistant”, in Proceedings of the 26th International Conference on World Wide Web, pp. 341–350, available at: https://mobisocial.stanford.edu/
- Coucke, A., Saade, A., Ball, A., Bluche, T., Caulier, A., Leroy, D., Doumouro, C., et al. (2018), “Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces”, available at: http://arxiv.org/
- Stubailo, S. (2018), “The next generation of schema stitching”, available at: https://blog.apollographql.com/
- Hartig, O. and Pérez, J. (2018), “Semantics and Complexity of GraphQL”, in Proceedings of the 2018 World Wide Web Conference, pp. 1155–1164, available at: https://doi.org/
- Verborgh, R. (2018), “Designing a Linked Data developer experience”, available at: https://ruben.verborgh.org/
blog/. 2018/ 12/ 28/ designing-a-linked-data-developer-experience/
- Berners-Lee, T. (2009), “The next Web”, available at: https://www.ted.com/