Facial Recognition, Politics and Communication

The well-known microbiologist, experimental pathologist, and environmentalist René Dubos once wrote: “There is a demon in technology. It was put there by man and man will have to exorcise it before technological civilization can achieve the eighteenth-century ideal of humane civilized life.” Recent findings that artificial intelligence can predict all kinds of characteristics from human faces, including the political orientation of the people behind them, certainly bring Dubos’s demon to mind.

“Facial recognition reveals political party in troubling new research,” reads the TechCrunch headline. What is disturbing about this development isn’t just the immediate threat to free association raised by combining facial recognition with political affiliation; it is how closely this resembles phrenology and other discredited racial- and appearance-based pseudosciences of the past.

The study concludes: “Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ.” The sample comprised more than a million faces, each compared for similarity to the faces of like-minded others. Predictions were accurate “even when controlling for age, gender, and ethnicity.” The authors noted that these conclusions made them uneasy about the relationship between technology and civil liberties.

The study was conducted by Michal Kosinski of Stanford University, who made headlines a few years ago for linking facial recognition data to predictions of sexual orientation. In each case, of course, it’s not about finding a particular knob on the head or a nose pointing this way or that. Instead, there are tendencies, sometimes subtle or nuanced, that become apparent in mountains of data collected over time. The end result feels the same, though: the way a computer “sees” you includes a prediction of your political affiliation, whether you candidly and vocally express it or not.
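To make that concrete: the general technique in studies like this is to feed numeric face descriptors into a statistical classifier. Here is a minimal, illustrative sketch, not the study’s actual pipeline or data, using random stand-in embeddings and assuming scikit-learn is available:

```python
# Minimal, illustrative sketch of the general technique: a linear
# classifier over numeric face embeddings. The embeddings and labels
# here are random stand-ins, not the study's actual data or pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for precomputed face embeddings (e.g., 128-dim vectors from a
# face recognition network) and self-reported labels (0/1 for orientation).
X = rng.normal(size=(10_000, 128))
y = rng.integers(0, 2, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# No single "knob on the head": the output is a probability distilled
# from statistical tendencies across many examples.
print(clf.predict_proba(X_test[:1]))
```

On random data like this the classifier hovers around chance, which is precisely the point: any signal it finds in real data comes from aggregate statistical tendencies, not a single telltale feature.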

Imagine oppressive regimes (imagine even semi-oppressive regimes such as our own) able to deploy facial recognition tech to flag someone as a “possible extremist” on the strength of an algorithm. Imagine private information about those predicted (and unconfirmed) characteristics falling into the wrong hands, allowing bad actors to seek out and target members of groups they don’t like. There are plenty of existential, physical, and potentially lethal threats associated with this kind of technology.

But beyond those obvious human rights dangers, there is another cause for concern, loftier in assumption and intent: such technology could turn political engagement, and political communication with it, on its head. Those devoted to political communication might choose to talk only to one side or the other, deepening polarization, scapegoating, and groupthink.

Political communication, often supported by data append services (such as those offered by my client Accurate Append), apps for constituent or voter follow-up, and political surveys, currently does make informed predictions that categorize voters for the purposes of outreach; but this technology rests foundationally on an assumption of free will. Users of Accurate Append’s current phone append and email append services, for example, assume their outreach might persuade people. Those using canvassing apps can take advantage of a newer “deep” component: canvassers can record information that reveals more than a binary preference, allowing campaigns to target potential supporters more effectively. The polling facilitated by this technology may seem mechanistic, but it also aspires to be democratic, and being democratic means accepting that people can behave unpredictably and interrupt our perceptions of them.
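As an illustration of that “deep” component, here is a hypothetical canvassing record sketched in Python. The field names and the persuadability rule are my own assumptions for the sketch, not any vendor’s actual schema or API (Accurate Append’s included):

```python
# Hypothetical sketch of a "deep" canvassing record that captures more
# than a binary preference. Field names and logic are illustrative
# assumptions, not any vendor's actual schema (Accurate Append's included).
# Requires Python 3.10+ for the `str | None` syntax.
from dataclasses import dataclass, field

@dataclass
class VoterContact:
    voter_id: str
    phone: str | None = None      # filled in via a phone append service
    email: str | None = None      # filled in via an email append service
    support_level: int = 3        # 1 = strong support ... 5 = strong oppose
    issues: list[str] = field(default_factory=list)
    wants_followup: bool = False

def is_persuadable(contact: VoterContact) -> bool:
    # Outreach assumes free will: a voter in the middle is a person worth
    # a conversation, not a foregone conclusion.
    return contact.support_level in (2, 3, 4)

record = VoterContact("V-1027", email="pat@example.com",
                      support_level=3, issues=["housing"], wants_followup=True)
print(is_persuadable(record))  # True
```

The point of a graded support score rather than a yes/no flag is exactly the free-will assumption above: voters in the middle are treated as persuadable people, not fixed data points.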

The predictions are also not consistently accurate. Random guessing would score 50 percent, and humans supposedly come in at 55% accuracy (methodology not known). “The algorithm managed to reach as high as 71% accurate when predicting political party between two like individuals, and 73% presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative, one liberal).” That’s better, but it is in no way the final word. If you were to use these predictions for any political or policy-based purpose, you’d get a lot of false positives and false negatives.
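A quick back-of-envelope calculation shows what those percentages mean at scale. The list size and the even liberal/conservative split below are assumptions for illustration; the study reports pairwise accuracy, not per-person labels:

```python
# Back-of-envelope: what roughly 72% accuracy means as raw errors.
# The list size and the even liberal/conservative split are assumptions
# for illustration; the study reports pairwise accuracy, not per-person labels.
population = 100_000          # hypothetical outreach list
accuracy = 0.72               # middle of the reported 71-73% range

errors = round(population * (1 - accuracy))
print(f"{errors:,} of {population:,} people misclassified")  # 28,000

# On an evenly split list, errors land on both sides: liberals mislabeled
# as conservatives and vice versa.
print(f"~{errors // 2:,} false positives, ~{errors // 2:,} false negatives")
```

Even at the algorithm’s best reported accuracy, more than one in four people on such a list would be mislabeled.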

Unfortunately, we don’t need to wonder whether facial recognition technology will bring about more repression: it is already being used by Immigration and Customs Enforcement (ICE), regardless of the political affiliation, sexual orientation, or personal hardship of any of its subjects. And this is a good illustration of the problem: one could (perhaps with some difficulty) posit a hypothetical ICE that had not been accused of multiple kinds of human rights abuse. Perhaps that hypothetical ICE would be careful and sparing with the technology, and would encourage its agents to adhere more closely to human rights norms. But that’s not the ICE we have. So we should be careful about what kinds of technology we call “appropriate.”

In fact, communities everywhere are speaking out against any facial recognition technology at all; it is almost uniformly seen as anti-democratic, an intrinsic abuse of power, and an unambiguous structural incursion into private life. For example, Portland, Oregon, recently passed “the most aggressive municipal ban on facial recognition technology so far,” prohibiting both city government and private companies from using facial recognition technology within city limits. That’s huge; Oakland, San Francisco, and Boston have all banned their governments from using facial recognition tech, but Portland’s ban on corporate uses in public spaces breaks new ground.

The famous American financier Bernard Baruch had this to say about technology and human agency: “During my eighty-seven years I have witnessed a whole succession of technological revolutions. But none of them has done away with the need for character in the individual or the ability to think.” All of which is to say that no matter how sophisticated the tech gets, it’s still up to us to fight for a just, equal, and kind world.