In a misleadingly titled article for Foreign Affairs, How to Save Democracy From Technology: Ending Big Tech’s Information Monopoly, Francis Fukuyama, Barak Richman, and Ashish Goel have put forward a confused and inadequate set of arguments concerning the “gigantic Internet platforms Amazon, Apple, Facebook, Google, and Twitter”. The authors maintain that because these companies “dominate the dissemination of information and the coordination of political mobilization” they pose “unique threats to a well-functioning democracy”.
The article spends some effort reviewing the prospects for reining in the economic power and influence of these companies, but its primary concern is that “Internet platforms cause political harms that are far more alarming than any economic damage they create”. Specifically, the worry is that “a dominant platform can influence a broad swath of the population, against those people’s will and without their knowledge”. That assertion is nowhere near proved in the article.
The opaque manner in which the algorithms used by Amazon, Apple, Facebook, Google, and Twitter determine what users see, and the order in which items are seen, is widely acknowledged as problematic. Efforts by political campaigns to exploit the enormous amounts of data accrued and exchanged by these companies, and to use their capacity to target tailored messages at specific sectors of their user base, are well documented. Moreover, there has been an accelerating proliferation of misinformation, lies, and conspiracies online, and the big platforms’ willingness to seriously address this phenomenon has been questionable.
The article argues that the platforms, by failing to adequately curate the information posted on them, create “filter bubbles”, where “users are exposed only to information that confirms their pre-existing beliefs”. Thus, the authors maintain, the platforms have “a disturbing influence on democratic political debate” and “could sway an election”. The solution offered is “middleware”, or “software that rides on top of an existing platform and can modify the presentation of underlying data”. To get around the algorithms the big platforms employ to direct content, middleware services “would determine the importance and veracity of political content, and the platforms would use those determinations to curate what those users saw”. “Middleware could even prevent a user from viewing certain content or block specific information sources or manufacturers altogether”.
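To make the proposal concrete, the kind of layer the authors seem to have in mind can be pictured, very roughly, as a user-chosen filter sitting between the platform’s feed and the screen. The sketch below is only an illustration under my own assumptions; the names (Post, rank_feed, naive_scorer) and the scoring threshold are hypothetical and are not drawn from the article.

```python
# A minimal sketch of the "middleware" idea: a third-party layer that
# filters and re-orders a platform feed before the user sees it.
# All names and the 0.4 cut-off are hypothetical illustrations only.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Post:
    source: str  # the publisher or account the item came from
    text: str    # the content the platform would normally display


def rank_feed(
    feed: List[Post],
    score: Callable[[Post], float],  # user-chosen middleware scorer, 0.0-1.0
    blocked_sources: Set[str],
    min_score: float = 0.4,          # arbitrary threshold for this sketch
) -> List[Post]:
    """Return the feed filtered and re-ordered by the middleware's scores,
    rather than by the platform's own engagement-driven algorithm."""
    visible = [
        p for p in feed
        if p.source not in blocked_sources and score(p) >= min_score
    ]
    return sorted(visible, key=score, reverse=True)


if __name__ == "__main__":
    def naive_scorer(post: Post) -> float:
        # Stand-in for a real importance/veracity model.
        return 0.1 if "miracle cure" in post.text.lower() else 0.8

    feed = [
        Post("example-news", "Budget passes after late-night vote"),
        Post("spam-site", "Miracle cure suppressed by elites!"),
    ]
    for post in rank_feed(feed, naive_scorer, blocked_sources={"blocked-forum"}):
        print(post.source, "->", post.text)
```

Even in this toy form, the politically contentious part is obvious: everything turns on who writes the scoring function and where the threshold is set.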
Putting aside the far from trivial issue of freedom of speech, there is an unstated assumption behind the article’s argument: that there is an acceptable basis or standard of information that should guide voter choice. In ideal circumstances, electors would weigh the policies the candidates put up in relation to their personal circumstances and, bearing the national welfare in mind, assess the trustworthiness and competence of the candidates, and vote according to their deliberations. Presumably, something like that ideal is the “well-functioning democracy” the authors intend to save. Of course, no democracy exhibits those characteristics. Politics is a partisan and tribal affair, and it is about identity, values, beliefs, affinities, ideology, and emotions.
The authors posit a situation in which someone like Rupert Murdoch gained control of a major platform. He could then subtly tweak the algorithms, “potentially affecting [the users’] political views without their awareness or consent”. However, the success of Fox News, and the extent to which many people deliberately seek out particular sites, or fringe and conspiracy platforms like Parler or 8chan, that reinforce their pre-existing beliefs, indicate that distinguishing between feeding people propaganda and satisfying deliberate searches is a troublesome area. The authors suggest that a malign operator could coerce or corrupt government officials by using the personal data gathered on them. But government agencies leaning on platforms to provide data for their own purposes is at least as probable, and more disturbing.
A regulatory solution needs to be found to the worst examples of the misuse and abuse of these major platforms. However, the real reason these problems are arising is the poor performance of democratic governments across the world. In matters of wealth inequality, social justice, climate action, environmental protection, and the delivery of adequate health and education services, democracies are failing. People’s vulnerability to misinformation and their susceptibility to manipulation originate in the inability of democratic governments to meet the legitimate expectations of citizens. This is the far bigger threat to democracy.
There is another threat to democracy that is related to the subject matter of the piece. The world is rapidly moving to a situation in which it won’t be only the data gathered by the major platforms that provides an architecture of oppression. The internet of things, the exponential increase in data collection that will come with 5G, facial-recognition technologies, driverless cars, and smart cities will let governments know absolutely everything about their citizens. Deepfake video technology will erase the lines between reality and lies, and threatens to make truth unknowable. All of this will be enabled by artificial intelligence (AI), a technology able to mine, organise, and employ the vast new sources of data becoming available. AI is improving quickly, evolving through ever shorter generations of development.
The shallow treatment of politics and of political opinion forming in the article is disappointing. More concerning, though, is its narrow perspective: its focus on yesterday’s technological challenges while tomorrow’s threats are almost upon us. It might sit badly with some academics and elites that the broader masses dismiss rationality and science in favour of unsubstantiated propaganda. Opaque algorithms certainly exacerbate this problem. It would be better, though, if the experts lifted their gaze and looked into the near future with the same concern. That is where the real threat to democracy and individual freedom is beginning to take form.
Copyright Mike Scrafton. This article may be reproduced under a Creative Commons CC-BY-NC-ND 4.0 licence for non-commercial purposes, and providing that work is not altered, only redistributed, and the original author is credited. Please see the Cross-post and re-use policy for more information.