How algorithms undermine democracy

Costica Dumbrava | A citizen with a view | March 2019


In August 1955, Isaac Asimov published Franchise, a short story in which a computer decides the results of the US elections after interviewing a single citizen. It is the year 2008(!) and Multivac, the electing machine, has chosen Norman, a clerk in a small department store in Bloomington, Indiana, to be the single, most representative, voter in the forthcoming US presidential elections. The idea of a single representative voter is somewhat seductive. It makes sense from an economic point of view, as such an arrangement would save the millions currently spent (wasted) on electoral campaigns and elections. It might also make sense from a theoretical point of view, if one thinks of the one voter as the embodiment of the popular will (à la Rousseau). However, Asimov’s tale is not one of perfect representation; it is one of algorithmic politics.

The voting in this case is only the last stage of a complex algorithmic process, during which the machine weighs “all sorts of known factors, billions of them”. This is impressive, but the machine still needs one piece of information in order to produce the final results, namely the “bent of mind” of a carefully selected citizen, from which to derive “the reaction pattern of the human mind”. Asimov’s machine seems genuinely interested in capturing the political inclination or mood of citizens in order to deliver the perfect democratic result. This is what distinguishes it from more contemporary examples of algorithmic politics, in which companies and parties seek to identify people’s “bent of mind” (online) in order to manipulate political preferences and actions (see Cambridge Analytica).

It must be noted that, in the process, Norman does not actually cast a vote or express an explicit political preference. His “voting” is reduced to a medical-like examination, which allows the machine to read his various biological and emotional states. No wonder the whole process leaves Norman confused and nauseated. It is only afterwards that he congratulates himself, boasting about patriotic duty and self-importance. The scene hardly resembles a political theorist’s story of political self-determination and democratic empowerment.

Such visceral, subconscious voting is deeply problematic from a liberal democratic perspective. Firstly, one of the moral foundations of liberal democracy is that individuals possess free will and personal autonomy, which enable them to define and pursue their own conceptions of the good. Democratic participation through equal voting recognises this moral capacity of each citizen. The idea that we need machines to help us discover our ‘true’ inner political inclinations and preferences undermines the moral standing of individuals to participate in democratic politics (to be sure, the debate about the existence of free will and the self is not new and touches on issues beyond political participation).

Secondly, machine voting also undercuts the collective and deliberative dimensions of democracy. Democratic politics is not limited to aggregating individual preferences; it is also supposed to enable preference formation and deliberation. If people cannot know their preferences and are not able to express them publicly, they cannot convince others, change their minds or find a compromise. This brings to mind the recent cases of political polarisation on social media platforms, where algorithms are deployed to reinforce political stances by building effective firewalls between different political camps. In such cases, people are still free to express their opinions and preferences, but their discursive and deliberative space is seriously restricted by algorithms. With algorithms digging ever deeper into our guts, we can imagine a world of automatic politics and algorithmic government akin to the invisible world of financial transactions carried out by financial bots.

While we still hold elections in which (almost) all people vote, computers and smart algorithms are increasingly deployed in many social areas, and they are set to affect our ways of doing and thinking politics. Asimov’s story is fascinatingly relevant today because it touches on a number of key issues related to algorithms, politics and society. Asimov’s machine is a black box kept under a veil of secrecy and barely dissimulated coercion, embodied by the secret service agent Handley (“Now we don’t question Multivac, do we?”). The machine is dependent on the data it is fed, and thus prone to data and function creep: “Multivac can’t know everything about everybody until he’s fed all data there is”. In the beginning, the machine had to extract information from a number of citizens but, as its algorithms improved, the base of representation was reduced to just one citizen. This hints that further improvement might lead to eliminating direct human interaction altogether. When a computer “will understand better than you yourself”, why would it bother to ask any of us what we think?

Even if feasible, the idea of decoding people’s true selves and of perfectly computing their deep political inclinations into a perfect political outcome misses the point of democratic politics. Whatever one’s true self is, democratic politics is not merely about collecting free-standing individual preferences. Democratic politics is also about reaching the best collective decisions, given the circumstances. It may well be true that, deep inside, many of us are at least a bit selfish, hedonistic, even racist, but we still count on each other to restrain such ‘true’ inclinations as much as possible for the sake of living together in a political society. Our ‘true’ political self is not a thing that could be extracted from our guts or downloaded from our brains. As with the Socratic journey of self-discovery, the political self emerges (or should emerge) from an ongoing confrontation of political dispositions and judgements. There is never a single, ‘true-to-oneself’ answer to a political question that a smart machine would be able to spit out. And even if there were something close to a true answer, being able to choose wrongly is a privilege of personal freedom. Sometimes, a person might just want “to vote cockeyed just for the pleasure of it”, as Grandpa Matthew in Asimov’s story would like.

Entrusting machines to do the (sometimes) dirty work of politics for us amounts to moral laziness and will deeply undermine our sense of personal freedom and political responsibility.