Newspapers everywhere were quick to blame social media for some of 2016’s more surprising political events. However, filter bubbles, echo chambers, and unsubstantiated claims are as old as humanity itself, so Facebook and friends have at most acted as amplifiers. The real mystery is that, given our access to unprecedented technological means to escape those bubbles and chambers, we apparently still prefer convenient truths over a healthy diet of various information sources. Paradoxically, in a world where the Web connects people more closely than ever, its applications are pushing us irreconcilably far apart. We urgently need to re-invest in decentralized technologies to counterbalance the monopolization of many facets of the Web. Inevitably, this means trading some of the omnipresent luxuries we’ve grown accustomed to for forgotten basic features we actually need most. This is a complex story about the relationship between people and knowledge technology, the eye of the beholder, and how we cannot let a handful of companies act as the custodians of our truth.
Self-deception is our worst enemy—the only thing worse than not knowing is being unaware of your lack of knowledge. However, we would be deceiving ourselves once more if we truly pleaded ignorance in 2016. Let’s be honest: we knew perfectly well that we were looking for convenient reassurance rather than balanced opinions. Research shows that we bend facts to match our opinion rather than the other way round (at least on the topic of politics). So more facts do not necessarily provide a way out.
Yet this is exactly the reason why the increasing monoculture of information on the Web should worry us. Precisely because we are by nature so bad at searching for balanced sources and opinions, major technological players prey on our hastiness and convenience by playing the Stream of all we want to hear. This is an utter waste of one of the most precious fountains of knowledge gifted to mankind: the World Wide Web.
We shouldn’t be naive. Fixing technology—or actually, using the Web as designed—isn’t going to stop misinformation. However, as information consumers in such a tightly connected world, we must fight for our right to select multiple sources of knowledge and control how they are combined. The only thing our vast technological possibilities cannot address is willful ignorance, because that just isn’t a technological problem. The rest, especially ignorance out of convenience, we should urgently tackle—but we’ll need to be prepared to do what it takes.
Filter bubbles are likely as old as humanity itself: people have always stuck together with those who think alike. The invention of writing and, many centuries later, the printing press, made it easier to distribute ideas—true or false—to wide audiences. Newspapers tint articles with their own political colors, reinforcing the mindset most of their readers probably already had. Nonetheless, printed media have always had a relatively high barrier to entry, as creating and distributing physical publications requires financial and technical means not everybody has access to. As a result, there used to exist only a relatively small number of bastions of truth. This shows that the diseases currently plaguing social media are not new.
The Web fundamentally challenged the publishers’ monopoly, as world-wide distribution became cheap. Anybody could start a personal blog, and the “Web 2.0” hype planted the seeds for today’s omnipresent social networking culture. Already at that point in time, we witnessed the shift from a few traditional bastions to many sources of potential truth. Finally, the technology had arrived that enabled all of us to escape our bubbles. The rise of Wikipedia gave literally anybody the opportunity to edit an encyclopedia—and we were even collectively named Person of the Year for that. It was the Web that gave us the means to build knowledge together, across silos.
The Wikipedia example also highlights an important milestone in the debate about the value of truth, its origins, and its guardianship. Many voiced concerns about Wikipedia’s trustworthiness, with Nature famously declaring it as accurate as the Encyclopædia Britannica, a result that was disputed by Britannica, and recently re-examined by Wikimedia. A couple of years later, we take Wikipedia for granted. Whenever we’re hungry for knowledge, we happily consult Wikipedia, vaguely remembering to take its statements with the occasional grain of salt. Apparently, we gladly trade truth for convenience: while traditional encyclopedias can guarantee higher quality, we’re not prepared to invest the additional time and money this requires. Wikipedia doesn’t necessarily represent the best of human knowledge, but rather those parts a majority largely agrees on. A balanced vision still requires different sources, and no single source can ever be complete. Fortunately, the Web is decentralized: anyone can publish.
Fast-forward to 2016, the year the truth died. On social networks, opinion and fact have reached an almost equal status, and are sometimes very hard to distinguish. Moreover, we’ve grown accustomed to reading lies both small and large on social media. Yet truth is not why we’re there—it never was in the first place. Newspapers stopped caring about truth as well, since people click on what is exciting, not on what is true. Many publishers are unsure how to make money otherwise. There is social permission to state falsehoods, maybe because we’re all lying at least a little on social media.
Unfortunately, the number of platforms we use to publish—and subsequently, the number of sources people consult—has gone down at an alarming rate. We’re only using a very small subspace of all the Web has to offer, and it’s a problematic blend of truths and lies. Which are which is different for everybody.
The current dominance of a handful of parties on the Web makes it so much easier to obtain convenient information rather than truthful, balanced information—even for those who want it. Those who label this a societal rather than a technological problem ignore how these technological platforms act as catalysts, rewarding insincere behavior from both their content consumers and producers. While it is true that opening more healthy restaurants will not suddenly stop everybody from eating at fast-food joints, the latter’s addictive offering actively lures us away from the alternatives. So we need extra effort to make those alternatives attractive, as only they can provide balanced and trustworthy information.
Just as the fast-food chains try hard to project a healthy image, Google invested in fact-checking projects and Facebook announced its plans to combat hoaxes and fake news. While I don’t doubt the good intentions here, these initiatives are, at the same time, important to these companies’ continued existence as trusted brands. Along similar lines, Facebook previously experimented with explicitly labeling satire.
Crucially, such features act as a red herring distracting us from something much more important: the fact that these companies appoint themselves as universal judges of what is real and what is fake, what can be trusted and what cannot. Because truth truly is in the eye of the beholder: what is fake to me can be true to you, in varying degrees.
Unfortunately, we don’t have to look far for examples. We have seen political parties and presidential candidates communicate statements that were verifiably untrue. Yet verification depends on provenance, and the fundamental sources we accept as truth might very well be different for every single one of us. For instance, Conservapedia labels itself as “the trustworthy encyclopedia”, whereas I personally take issue with both its contents and their argumentation. This means that, for me, Conservapedia should be labeled as an untrusted source, whereas Facebook will understandably—and fortunately—never be able to label it as fake. Similarly, some other people will disagree with what Wikipedia has to say about the same topics, and will trust one politician but not another. In a fascinating showcase that eradicates any hope for an absolute truth, Google Maps adjusts its definition of truth depending on where you’re looking from.
So we cannot accept Facebook, or any party, as the sole guardian of what is deemed true and acceptable. Facebook’s attitude is highly ambivalent here. On the one hand, they try to become and replace the Web, kicking away the ladder of this open platform that allowed them to grow hugely in the first place. On the other hand, they refuse to acknowledge the near-monopoly they have realized as a primary entry point, pointing out they’re “not the Internet” whenever that is more convenient. They can’t eat our cake and have it too.
Even if we all had a personalized fake/disputed labeling system that hides what we don’t want to see, there’s the crucial question of whether we will be shown everything we do want to see. Because if Facebook filters news based on our preferences, we might still miss items that never entered the filter in the first place. Some content is rejected under Facebook’s community guidelines; some simply wasn’t posted on Facebook to begin with. Yet even if content is on Facebook, there is no guarantee it will appear in our feed. We can influence what we hide, but insufficiently what we see.
Feed and search result algorithms are among the most precious information secrets in this world. We do not know how exactly Google and Facebook select information, and cannot control how it is ranked and displayed. Furthermore, if a certain item appears, we cannot verify why it was presented to us. And of course, we cannot incorporate information from other sources. These issues become all the more pressing as our reliance on personal digital assistants such as Siri, Google Now, Cortana, and Echo increases. After all, who assists the assistants in ensuring information is nuanced and diverse?
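To make the contrast concrete, here is a minimal sketch of what an inspectable alternative could look like: a ranking whose weights the user sets, and whose results each carry an explanation of their score. The weights, item fields, and scoring scheme are all invented for illustration—they stand in for whatever criteria a user-controlled client might expose, not for any real feed algorithm.

```python
# Hypothetical user-controlled ranking: unlike a proprietary feed algorithm,
# the weights are visible and adjustable, and every ranked item can answer
# the question "why was this presented to me?".
WEIGHTS = {"recency": 0.3, "source_diversity": 0.7}  # chosen by the user

def rank(items):
    """Sort items by a transparent weighted score and attach to each
    a breakdown of how its score was computed."""
    def score(item):
        return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)
    ranked = sorted(items, key=score, reverse=True)
    return [{**item, "why": {k: WEIGHTS[k] * item[k] for k in WEIGHTS}}
            for item in ranked]

items = [
    {"title": "A", "recency": 0.9, "source_diversity": 0.1},
    {"title": "B", "recency": 0.2, "source_diversity": 0.8},
]
for entry in rank(items):
    print(entry["title"], entry["why"])
```

The point is not the particular formula, but that every step—selection, weighting, ordering—remains open to inspection, which is exactly what current feeds deny us.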
Current search engines, social networks, and digital assistants send our queries to their centralized system, which then answers them using previously harvested data. However, since there is no single truth, there can also be no single source of all truth, so all centralized efforts will ultimately result in only a piece of the puzzle. It would be a serious mistake to consider them as the whole picture. We shouldn’t settle for less than the Web.
The obvious alternative is to stop putting our faith in streams and algorithms we can’t fully control ourselves. Instead of delegating the search for personalized information to third-party systems, our own devices should search the Web themselves.
However, we need to manage our expectations: a decentralized system cannot simply add a dimension of diversity and otherwise behave exactly like a centralized one. That’s just not how things work. Whenever we add one constraint to an equation, we need to be willing to compromise on another. Therefore, I believe we shouldn’t approach decentralized systems as re-engineered versions of their centralized counterparts, but rather as a different class of systems in their own right.
For me, a decentralized client automates tasks I otherwise would have done manually. When I want balanced insights on a certain topic, I’d consult different sources one by one, and then integrate that information to find a solution. And those sources can be anywhere on the Web. It is obvious that this could take a significant amount of time. Automation can play two roles here: reducing the amount of manual work, and parallelizing the tasks. In any case, consulting multiple sources will take more time than just consulting a single pre-processed stream of information.
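The two roles of automation described above—reducing manual work and parallelizing the consultation of sources—can be sketched in a few lines. The sources below are hypothetical stand-ins (in practice, each would be an HTTP request to a different site); the sketch only illustrates the pattern of querying independent sources concurrently and merging their answers on the client.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent Web sources; in a real client,
# each would fetch and parse documents from a different origin.
def source_a(topic):
    return [f"{topic}: viewpoint from source A"]

def source_b(topic):
    return [f"{topic}: viewpoint from source B",
            f"{topic}: viewpoint from source A"]  # overlaps with source A

def gather(topic, sources):
    """Consult all sources in parallel, then merge their answers
    client-side, dropping duplicates while preserving order."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda s: s(topic), sources)
    seen, merged = set(), []
    for result in results:
        for item in result:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

print(gather("climate", [source_a, source_b]))
```

Parallel requests hide much of the latency of consulting many sources, but, as the text notes, the total will still exceed the cost of reading a single pre-processed stream—the integration work has moved to our own device, where we control it.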
But wait, what if another server does this multi-source processing? Then we can just connect to that server, and it will give us everything we need. Sure, but then we’ll probably have to pay for that server—which is a viable option, yet it illustrates that we will not get diversity in an automated way without giving up some of our conveniences. And then, ultimately, it is the same as with encyclopedias and fast food. Are we prepared to pay the price—time, money, or otherwise—that quality deserves? But let’s face it, we’re already paying a heavy price today, because of the diversity and privacy we give up, and because of publishers that desperately do anything for clicks.
The danger of centralized search engines, social networks, and digital assistants is that they’ve been fueling our addiction to a tempting mix of two things decentralized systems cannot combine: high-speed results offered free of charge. If you want things fast, you have to centralize; if you want to be in control of a centralized system, you’ll have to pay. The open Web can be searched free of charge, but it takes time.
We need to resist the temptation of requiring everything to be fast, because it eventually comes at a cost for us. Remember that, just two decades ago, obtaining balanced information could take days or weeks? Do we really need it in a second now? Given that we’re all extreme multitaskers on our devices, is it so hard to wait a couple of seconds or even minutes while we’re doing five other things anyway?
Maybe you wonder whether you’d be willing to trade your valuable time for quality, but let me break it to you: we’ll have no other option. The Stream is not going to become any more centralized, more instantaneous, or freer than it already is. The Web is at a tipping point, and it’s our duty to make the right choices. Either we succumb to the convenience of the Stream, or we demand the information quality and diversity we deserve.