Web-Based Cartography, Power and Community-Building

We were interested in the news that the Chinese phone manufacturer Huawei recently signed a deal with TomTom, the Dutch digital mapping company, for an alternative to Google Maps.

TomTom has been an often unsung but never ignored force in map applications, with several self-branded products on iOS and Android devices. Huawei, however, will be building its own mapping application using TomTom maps: a kind of secondary app based on TomTom’s primary data. TomTom is no stranger to this kind of use; in the past the company provided data for Apple Maps, making itself part of a similarly “shambolic” patchwork of mapping apps. Huawei also intends to build a full-on system, a “Map Kit,” using data from Yandex, a Russian tech company. The TomTom deal will either serve as a bridge to Map Kit or be integrated into it. 

Why can’t Huawei just rely on Google Maps? Well, here’s where it gets interesting. Last year, the Trump administration placed sanctions on Huawei. In May, Trump “issued an executive order barring US companies from using information and communications technology from anyone considered a national security threat” and included Huawei in its list of dangerous entities. That complicated relations with Google, even though the application of administration sanctions has been uneven and uncertain. Businesses hate uncertainty, after all. Meanwhile, Huawei is building its own operating system, which it calls HarmonyOS, and is using the TomTom deal to further reduce its reliance on Google.

What’s especially interesting is that trade policies (domestic politics, really) and the Trump administration’s confrontational approach to international relations are going to change the way people literally look at the world. There are nuanced differences between the way Google and other companies map things. The methods, devices, and features of interactivity vary across systems and platforms. Many of the differences will be subtle, but they will still be differences, and over time their aggregate will grow. However nuanced the effect, politics will determine mapmaking differentials. 

Historically, cartography has been a ruling-class sphere. If the first task or problem of cartography is “map agenda-setting,” in which the features and area of the place to be mapped are determined, then historically that task has privileged the kingdoms, countries, states, and localities with the authority and resources to make the maps. Likewise, cartographers eliminate characteristics and areas deemed irrelevant to the purposes of the maps and materials being created. Even the very act of generalization—creating categories of landscapes and ocean depths and similar profiles—relies on areas of both informational and material authority that perpetuate and emerge from great powers. Maps, critical scholars tell us, are “sites of power-knowledge,” and judgments about which maps are best “arise from privileged discourses.” 

All of which suggests that what we’re seeing in the escalation of trade disputes is a series of power-shifts that will again exercise influence on how we map the world. But now, instead of the changes being reflected in model globes and poster-sized maps in school rooms and offices, they will be reflected in millions upon millions of digital devices guiding people around in their cars or from their homes. 

So what does the business end of big-data-informed, satellite-sourced, web-based mapmaking get us? What social tendencies does it reproduce and strengthen? 

In many ways, web-based mapping has created an interesting piece of sociological data: it gives us more information about our surroundings and communities, but it also entrenches us as solitary beings in our vehicles and domiciles. We can access information about what’s out there, what’s around us. But then we click our keypads to navigate us through a geographical location so we don’t have to stop and ask for directions. We can order meals, groceries, office supplies, and other goods and services using apps that link from these maps. We never have to “be in public.” It doesn’t turn us into recluses, necessarily, but it changes our social dynamics. 

But consumer-based or user application-based maps are not the only way we’re seeing technological progress affect cartography. Another area that has emerged stronger because of big data and satellite technology is called collaborative mapping, and collaborative mapping is helping humanity re-define cooperative work in new ways. Collaborative mapping has the potential to bring diverse peoples and communities together rather than isolating us. 

Collaborative mapping uses open source collaborative software to create and develop (and thus never really be done creating) collective maps. Maps are created collaboratively on a shared “surface.” Problems sometimes occur due to people having concurrent access and competing for mutually exclusive data. This might happen at the same time, or it might be a matter of one party correcting a previous iteration of the map, potentially causing conflicts in interpretation. But just as commons-based encyclopedias like Wikipedia or other common knowledge libraries have (admittedly imperfect) procedures for dealing with these conflicts, collaborative mapping can also develop such procedures. The outcome is still likely to be a more egalitarian knowledge base, and we now know that many hands and many heads not only make light work, but also create better epistemology, and better knowledge. 
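The concurrent-access problem described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not the data model of any real collaborative mapping platform): each map feature carries a version number, and an edit based on a stale version is flagged as a conflict for community review rather than silently overwriting another contributor’s work.

```python
# Sketch of optimistic concurrency control for a shared map "surface".
# Each feature carries a version; an edit based on a stale version is a conflict.

class MapFeature:
    def __init__(self, name, tags):
        self.name = name
        self.tags = dict(tags)
        self.version = 1

class SharedMap:
    def __init__(self):
        self.features = {}

    def add(self, feature_id, name, tags):
        self.features[feature_id] = MapFeature(name, tags)

    def edit(self, feature_id, base_version, new_tags):
        """Apply an edit only if the contributor saw the latest version."""
        feature = self.features[feature_id]
        if base_version != feature.version:
            # Concurrent edit detected: surface it for community review
            # instead of silently overwriting the other contributor's work.
            return False
        feature.tags.update(new_tags)
        feature.version += 1
        return True

shared = SharedMap()
shared.add("cafe-42", "Corner Cafe", {"wheelchair": "unknown"})

# Two contributors both read version 1, then both try to edit.
ok_first = shared.edit("cafe-42", base_version=1, new_tags={"wheelchair": "yes"})
ok_second = shared.edit("cafe-42", base_version=1, new_tags={"wheelchair": "no"})
print(ok_first, ok_second)  # first succeeds; second is flagged as a conflict
```

The interesting design question, as with Wikipedia, is what happens after the `False`: the rejected edit can be queued for human deliberation, which is exactly the kind of procedure the paragraph above anticipates.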

Based on the work of Christopher Parker and colleagues, as well as many other researchers, we are learning that collaborative mapping can do a number of other socially positive things. The public-oriented consciousness that one might expect from collaborative mapping work is also found in many of its practical applications. For example, the potential of collaborative mapping extends to creating ease-of-access maps and guides for people with limited mobility. A crowd-sourced “mashup” of mapping data can be created to provide that information on an accessible and free platform. Researchers say that it’s a challenge to collect that “subjective” data in the first place, but they are coming up with many potential solutions, from pre-populating local regions and then expanding coverage from there, to creating mashups without the crowd-sourced data and then building ways for users to easily add their own data to the collective pool. 
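The “pre-populate, then let the crowd fill the gaps” approach can be illustrated with a small sketch. The field names here (`step_free`, `source`) are hypothetical, chosen only for illustration; the point is the merge logic, where crowd-sourced reports overlay a default base layer:

```python
# Sketch of a crowd-sourced accessibility "mashup": start from a
# pre-populated base layer, then overlay user-contributed reports.

base_layer = {
    "main-st-entrance": {"step_free": None, "source": "pre-populated"},
    "city-library":     {"step_free": True, "source": "pre-populated"},
}

crowd_reports = [
    {"place": "main-st-entrance", "step_free": False},
    {"place": "city-library", "step_free": True},
]

def merge_reports(base, reports):
    """Overlay crowd-sourced observations onto the base accessibility layer."""
    merged = {place_id: dict(attrs) for place_id, attrs in base.items()}
    for report in reports:
        entry = merged.setdefault(report["place"], {})
        entry["step_free"] = report["step_free"]
        entry["source"] = "crowd-sourced"
    return merged

accessible_map = merge_reports(base_layer, crowd_reports)
# The gap in the pre-populated layer (step_free was unknown) is now
# filled by a user report, without discarding the base coverage.
print(accessible_map["main-st-entrance"])
```

A real platform would add provenance, timestamps, and the conflict-resolution procedures discussed earlier, but even this toy version shows why the crowd-sourced layer and the pre-populated layer can be built independently and combined later.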

For people who study the history of mapping and cartography, the idea of egalitarian, collectively-created maps is pretty revolutionary. Immediately following the end of World War One, there emerged a deep dissatisfaction with territorial thinking. Progressive-minded activists and scholars began searching for alternative ways of “viewing the world” both in the paradigmatic sense (worldview) and the cartographic sense (criticizing the construction of borders and the representations of the world that show up on maps; think of the conversation about why the “north” is “on top” and that kind of thing). It’s worth considering whether activists can forge links between cooperative map-making and cooperative international and other political relationships. 

The building of that political power and those relationships will also be continually shaped by how we utilize data: from geo-targeted ads on social media that help turn out voters, to partners like our client Accurate Append, a phone, email and data vendor, supplying the contact info campaigns need to text and call voters. Data and tech are shaping every aspect of our future, from how we navigate our world to how we change it.

What Can We Learn from Kafka’s “Before the Law”?


Within Franz Kafka’s The Trial — a story about a person prosecuted by obscure elites for an unclear crime — there is a short parable called “Before the Law.” In it, a man wishes to gain entry “to the law,” which lies behind a gate. A gatekeeper tells the man he cannot go in at the moment, and that although he could try to enter despite the refusal, there are gatekeepers inside more powerful than himself. The man spends the rest of his life waiting there, occasionally bribing the gatekeeper, but he is never let in. At the end of his life, he asks the gatekeeper why nobody else has sought entry; after all, the law is supposed to be accessible to all. The gatekeeper replies that the entrance is only for the man and nobody else. “[T]his entrance,” the gatekeeper says, “was assigned only to you.” He continues: “I’m now going to close it.” 

There is no shortage of interpretations. Some say Kafka is categorically absurdist, that life itself is absurd and irreducible to telos, and that “Before the Law” is just another instance of this. Some say it’s a parable for the “inaccessibility” of the law and its elitist nature. Some say it’s a criticism of the man, whom the gatekeeper blatantly invites to go in, as he surrenders his personal agency, giving in to fear or inertia. More interesting interpretations posit that we are both “inside” and “outside” of structures of power and that our paralysis comes from overdetermining our place either inside or outside. 

In trying to determine what Kafka’s purpose was (if that’s even possible), it’s important to consider his theoretical influences. Kafka was influenced by Marxist theory, an influence more apparent in some works than others and frequently in concert with other perspectives. Although postmodern critics find him easy to invoke with respect to a general state of absurdity and collapsed or indeterminate meaning, it may be more interesting to consider his acknowledgment of both the objectivity and subjectivity of oppressive structures; in The Metamorphosis, Gregor really does have a boss. The police and eventually the vigilantes in The Trial really do have the power to detain and kill people. People really do walk over “The Bridge,” and it hangs between two actual solid foundations. All the characters are aware of this structural reality, and their own; they just can’t seem to traverse any of it sensibly, predictably, or fairly. 

There are clear similarities between oppression under the law and exploitation under capitalism. In fact, the law — in addition to being an ideological cover for class oppression — can itself be recognized as a platform for the material disciplining of the dispossessed classes. Capitalism, too, is characterized by the law’s mythical traits presented in the parable: (a) idealization as accessible to all; (b) concrete inaccessibility (or inaccessible material hierarchies); and (c) individuation of attempts at empowerment. 

The law is idealized as accessible to all, and this drives the man’s desire and stubborn determination. But the law is ultimately inaccessible to the man materially, and the threat (which may not be real) is of successively more powerful guards. The entrance, that particular one, was “reserved” for the man alone, and then it is shut. In many ways, this same scenario plays out under capitalism, and during attempts at empowerment within it. 

Criticism of law under material hierarchy is, importantly, not a dismissal of the possibility of community justice. The idea of a better world may have been incidental to Kafka (or perhaps the contemplation of betterment distracts from the reader’s necessarily subjective confusion and exasperation, vital elements for authentically experiencing Kafka’s universe). But postmodernism’s paradoxically categorical dismissal of any kind of ordered deliberation is too broad a brush with which to paint Kafka’s parables of real material oppression. Instead, we should consider likening the absurdities of the Kafkaesque legal system, penal colony, worker-boss relationship, or other structures and relations to fascists’ use of nonsensical and self-contradictory “humor” in their clownish public discourse, always attempting to cover up their real brutalities with the mind-clouding confusion of absurd denials or epistemic problematizations. 

Structures can be very real, extremely violent, unquestionably oppressive, and also manifest themselves as absurd and confusing to the core. “Before the Law” isn’t even the most absurdist of Kafka’s parables, and its metaphorical standing-in for the liberal myth of equality under the law is, by his standards, a pretty easy call. 

This post was sponsored by my client, Accurate Append, which provides effective and efficient services to assist organizations, campaigns and businesses in reaching their supporters and customers.

Eisenstein and Brecht: Political Ideology-Influenced Theater and Film

Marxism, the systems-philosophy based on the political economy of Karl Marx, holds that history is the sum of material class struggles – that a society’s material production and distribution shapes its culture and consciousness, and that “progress” is reflected in successive revolutions creating more egalitarian material (economic) and political systems. Not surprisingly, Marxist views of the arts emphasize similar things: the underlying economic contexts of artists’ works, how those works represent class and class struggle, and how they give voice to defenses of the capitalist status quo or envision a post-capitalist world. 

I want to briefly explain the relationship between Marxism and performance, specifically the performance of theater and film, which are “representational” and “narrative” forms of art. Theater and film are frequently of interest to socialists and other anti-capitalists (music is another common area of interest and there are Marxist graphic artists too), because there’s an innate collectivism and productivity in ensemble performance, set design, historical narrative, the representation of conflict, and the language of narrative dialogue. Also, performances can be educational; and, education is a political strategy for the anti-capitalist, teaching people how to criticize society and find common solutions through political organizing around an egalitarian agenda. Theater and film can be criticized for the way they represent the existing order, but they can also be re-constructed and re-deployed to create a new, radically democratic order. 

In fact, Daniel Fairfax repeats, with the caveat “reportedly”, the claim that Lenin declared cinema “the most important” of “all the arts.” Mainstream cinema reproduces the dominant ideology. 

What does critical, revolutionary, anti-capitalist film do?

To explore the educational component of theater in relation to Marxism – the criticism of society, and the treatment of drama as didactic – we can turn to Soviet filmmaker (earlier a playwright) Sergei Eisenstein (1898-1948) and German playwright and dramatic theorist Bertolt Brecht (1898-1956). 

Sergei Eisenstein’s most well-known contribution was “montage,” and with it he made unforgettable and technically pioneering films like Strike, Battleship Potemkin and October. Eisenstein learned montage filmmaking from Lev Kuleshov, who created the “montage” effect by cutting through many different perspectives, angles, actors, and parts of scenes in ways that created a motion and set of relationships. Montage creates “meaningful associations within the combinations of thoughts” to tell a story, compiling shots in a way that enables the film to no longer be bound to chronological and expected sequencing. This makes the film dynamic and unbound by time and space. Eisenstein called montage “an idea that arises from the collision of independent shots.” There were different types of montage with different purposes, from eliciting and emphasizing difference to establishing visual continuity. Most importantly, according to Fairfax, “[i]ntellectual montage juxtaposes images to elicit cerebral responses rather than emotional ones.” In other words, intellectual montage is educational.

Bertolt Brecht wrote drama, poetry and theory. In productions like The Threepenny Opera (whence came the popular song “Mack the Knife”), Mother Courage, and The Good Person of Szechuan, Brecht and his ensemble (he was surrounded by brilliant actresses, songwriters and other dramatists; they came up with many of his ideas, though he rarely acknowledged that) explored whether ethics were possible under capitalism. In other plays, like The Measures Taken, Brecht interrogated anti-capitalist organizing and revolutionary tactics, intending to educate his audiences on how to be good revolutionaries. 

The purpose of Brechtian Theater, in fact, is primarily to educate, spark debate, and place our conventions into question rather than emote or romanticize. Brecht believed we should not lose ourselves in the romance, fantasy, or other removals of consciousness represented by conventional theater. According to Douglas Kellner, “Brecht’s epic theater was built on the Marxian principles of historical specification and critique . . . Brecht sought to illuminate the historically specific features of an environment in order to show how that environment influenced, shaped, and often battered and destroyed the characters.”

At the center of the Brechtian project was the use of theatrical exaggeration, outlandishness, and obvious incongruity. Characters were to be seen as characters, not “real people” (even though many of his characters, like Joan in Saint Joan of the Stockyards, did possess a lot of unique personality). The point is that the idea of “method acting” – of the player trying to “be” the character – was anathema to epic theater. As in Greek comedy and tragedy, a chorus and other surrealist components could make the production into a blatant series of arguments while still giving audiences a theatrical experience. The point was to learn to criticize and organize organically, as appropriate to the historical moment.

And Marxist theory, or at least Brecht’s interpretation of it, is foundational to Brecht’s plays, which consider situations brought about by our material systems or the struggle against them and relationships between people, whether soldiers, tradespeople, criminals, bankers, religious people, or shopkeepers. 

In both Eisenstein’s intellectual montage and Brechtian epic theater, the form of presentation is designed to lift audiences out of “subjectivity,” that is, passive immersion in the aesthetic. The productions are busy and blatantly artificial, challenging and confrontational, openly theoretical rather than populist. They are a pedagogy as well as an aesthetic. 

An interesting common thread between Eisenstein and Brecht is that they were both exposed to Kabuki theater. For Brecht, the exaggerated performativity of Kabuki would influence some of the basic axioms of his theory of epic theater. Kabuki influenced Eisenstein too: he believed it was “designed to act upon all of the spectator’s sense organs at once,” and this made him think about how to bombard audiences with imagery. This technique of producing to affect reflects an optimistic, late modernist revolutionary ethos.

Both Brecht and Eisenstein became huge influences on modern and contemporary theater and film. Brecht deserves credit every time actors break the fourth wall or some emergent meta-production occurs (characters walk off the sound set into the guts of the studio, or talk about their production in the middle of it). Of Eisenstein, Jason Hellerman writes: “Editing began to inspire filmmakers to set up more shots and take more chances. We saw the French New Wave and American New Wave take these montage ideas and build incredible narratives. 

They changed editing and became the basis of storytelling. Now, we expect these kinds of edits and montages. They are almost second-nature to us, a universal way to transport viewers and share emotions.”

This post was sponsored by my client, Accurate Append, providing high quality data appending services to fill in the holes in your campaign’s, organization’s, or company’s data.

The Difference Between Liberals, Progressives, and Leftists

Guest post

There’s more to politics than ideological labels, and for most people, encapsulating the complexity and nuance of one’s political views with a single term may prove difficult. However, in a country where conservative politicians spout oxymoronic combinations — for example, when Senator Ron Johnson described Joe Biden as a “liberal, progressive, socialist” — clarification is clearly needed.

Though the mainstream media may use these terms interchangeably, this is fundamentally incorrect. Liberals, progressives, and leftists have very different perspectives on current political and economic realities. More important, though, is the fact that each group differs in its ideal approach to solving the problems at hand.

Perhaps the best way of illustrating the difference between liberals, progressives, and leftists is to look at each group’s reaction to and understanding of Donald Trump’s election in 2016. The shock that accompanied his victory led people across the political spectrum to reconsider their established notions of what it meant to be “electable”. Though he was defeated for reelection, Trump’s legacy remains felt throughout all aspects of political life, especially in the realm of ideology.

The Liberal Worldview

During the Trump presidency, liberals generally saw Trump as an aberration, not a symptom of a fundamentally unsalvageable system. Liberals placed their faith in reinvigorating established institutions to stop Trump in his tracks, ignoring the fact that these same institutions were not enough to stop him to begin with. For liberals, Trump’s victory was not a result of the Democratic Party hemorrhaging its working-class base after failing to provide meaningful change to the people that put them in power. 

Instead, Trump’s victory was seen as a mere fluke, a disaster that wasn’t an indictment of a failed system, but rather a mere exception to a system that was generally working alright. This is why liberals embraced “anti-Trump conservatives” — those Republicans from the Bush years who later went on to become Trump critics. To them, these weren’t people who created the conditions that allowed Trump to win in the first place, but were rather positive relics of a simpler time. 

The Progressive Worldview

Progressives viewed Trump’s victory in a broader context, acknowledging that the same institutions that failed to stop Trump were incapable of holding him accountable. Progressives such as Bernie Sanders correctly identified the fact that the Democratic Party’s failure to provide tangible material benefits to its working-class base cost them a historic election. 

Noticing sharp shifts to the right in working-class, traditional Democratic Party strongholds in the Midwest, progressives understood Trump as a conman who preyed on the anxieties of the vulnerable. For progressives, this means that Democrats should, going forward, commit to policies that would benefit workers, such as universal healthcare, stronger labor protections, and a living wage for all. Adriel Hampton’s write-in campaign for Carlsbad Mayor is a progressive electoral effort.

The Leftist Worldview

Leftists viewed Trump’s victory in an even broader context, identifying it as a condemnation not just of existing institutions within a capitalist framework, but of the very system of capitalism itself. In other words, leftists during the Trump era understood that capitalism is the disease, Trump is the symptom, and socialism is the cure.

While leftists certainly support progressive goals such as universal healthcare in the short term, in the long term they seek a total reorganization of economic society. This means that, while progressive policies will provide short-term relief for workers and will therefore temporarily thwart demagogues like Trump from arising, securing a better future will take much more than just increased funding for social services. 

This means creating an economic system rooted in people, not profit. This means decommodification of essentials for human life such as food and healthcare, not just increasing funds to provide them in the short-term. As long as the world exists under a regime of white supremacist, patriarchal, colonial capitalism, demagogues like Trump will take advantage of people’s fears and channel them into something sinister.

Cyber-As-Material-Battlefield, Political Communication and Imperialism

In April of 2020, at the start of the Covid-19 pandemic but in anticipation of a predictable phenomenon, Christoph Laucht and Susan T. Jackson posted a great piece at the History and Policy website on the militarization of pandemic response rhetoric. They noted that “the dominant feature of the novel coronavirus pandemic is the many uncertainties that it creates for national governments and their publics.” In the face of that uncertainty, policymakers start “securitizing” and thus militarizing everything. Language begins to change as “leaders, journalists and the general public alike . . . draw on a highly militarized—often nationalistic —rhetoric” in talking about how to respond to threats. This militaristic language, Laucht and Jackson write, is zero-sum, predicated on “dangerous competition between and within states over what are now scarce resources (for example, medical piracy).” It’s how society responds to uncertainty, but it normalizes militaristic solutions. Laucht and Jackson give plenty of examples: “front-line healthcare staff,” “deployment” of equipment, the Mayor of New York calling a particularly bad day in the city “D-Day,” news organizations narrating the actual presence of the military, showing up to support civil leaders in the “battle” against Covid. The language normalizes the militarization of society, whether or not that militarization is as obvious as the military showing up in person.

I think NATO’s recent promise to retaliate against cyber-attacks as if they were military attacks represents the culmination of this kind of rhetoric, particularly when an attack can be identified with a targetable antagonist. This geopolitical development is a great case study in the power of political communication. Securitization becomes militarization—militarization that begins linguistically and culminates with actual “boots on the ground.” 

We often think of political communication as speeches and rallies, or even online communication and organizing (like that facilitated by data appending and enhancement services, as provided by my client Accurate Append). But geopolitically, political communication is largely about displays of power and efforts at intimidation. The Western bloc is responding to Russian cyber-hacking by using its power to define certain actions as acts of war, and intimidating potential adversaries by threatening overwhelming military—and even nuclear—retaliation in response to cyber-attacks. 

NATO formed in 1949 with the signing of the Washington Treaty, as the Western bloc nations tried to counter the Warsaw Pact nations and the constructed threat of the Soviet Union. There is no longer either a Soviet Union or Warsaw Pact; in fact, nations in the Warsaw Pact are now in NATO. NATO still exists, and has deployed forces in decidedly non-European geographies like the now virtually stateless Libyan entity and the lost U.S. cause in Afghanistan. 

This past history is important to keep in mind when discussing NATO’s recent aggressive rhetoric concerning cybersecurity. It’s true that Russian and Russia-adjacent activity targeting the U.S. and Europe is hostile and exploitative. This is part of a ubiquitous, multidirectional state of hostility and brinkmanship around communication and information technology, intellectual property, and information warfare. 

In June of this year, NATO member states met in Brussels to communicate amongst themselves and send a series of messages to the world. NATO is in full expansion mode and unapologetically sees itself as a global military machine—not just a military force, but a cultural and political force, a facilitator of technology and R&D, part of the solution to climate change and more. During the summit, “NATO Leaders agreed to launch a Defence Innovation Accelerator for the North Atlantic (DIANA) to boost transatlantic cooperation on critical technologies and to establish a NATO Innovation Fund to invest in start-ups working on emerging and disruptive technologies.”

All of this falls under broad meta-communicative messaging around underlying purposes. In this case, NATO’s purpose is “safeguarding the rules-based international order,” which the organization openly interpreted as hostile to Russia and China, doing so alongside responding to “the security implications of climate change.” Since the end of the Cold War, there’s been a drive to expand the meaning of “security” in order to justify the continued existence of various military juggernauts and a continued military economy. Of course, things like climate change do threaten us, but grouping it as an adversary alongside two traditional rival nations is more than just a simplification—it’s a sophisticated political argument, a communicative (and communicated) position. 

And cyber-security is another lodestone of the general mission: a potentially calamitous battlefield. NATO secretary general Jens Stoltenberg has said that increasingly sophisticated cyber attacks against member nations could trigger an alliance response. He further reported that, as a concrete action, the alliance has established a “cyber domain center” in Estonia, which will monitor attacks and share best practices with NATO members. Cyber is now considered “an operational military domain,” triggering a “collective response based on NATO’s Article 5.” Stoltenberg called Russia “aggressive” and mentioned malware attacks on the German parliament and other incidents. 

The idea of an overwhelming military response to cyber-attacks isn’t new. In 2018, perhaps emboldened by the unhinged president’s attitude and rhetoric, the Pentagon suggested using nuclear weapons to answer serious cyber-attacks. Of course, cyber-attacks themselves could destabilize nuclear weapons by increasing the likelihood of miscalculation or accidental launches. If cyber-antagonists knew the U.S. or NATO would strike back by starting the apocalypse, would those antagonists be deterred? Or would they rush to hack weapons systems in the U.S., Britain, France, and Germany? 

As a result of all of this brinkmanship and toxic Wario masculinity, we are now potentially at the brink of nuclear war over cyberattacks. It’s important to see this situation as the culmination of acts of political communication, which are being met by political communication in turn—communication expressing that a cyberattack by a party hostile to any NATO member state will be treated as a “military” attack on all NATO member states. The syllogism is based on Article 5 of the NATO charter, which provides that if a NATO ally is the victim of an armed attack, that attack is treated as an attack on every other member of the alliance, justifying necessary actions in response. 

Ultimately, lubricated by militaristic language and securitized rhetoric, NATO has expanded not only its identity in geopolitics, but the authority by which it can destroy nations and people.

Gods, Devils, and Today’s Political Rhetoric

There’s certainly been a lot of talk of god and devil in the midst of the castle-storming and descent-into-barbarism themes that have dominated national politics in 2021 so far. And even when “God” or “Satan” aren’t explicitly referenced by any of their many names, allusions to them abound. 

When thinking of politics and political communication, it’s useful to also think of religion — not because they are or ought to be the same, but because it’s useful to keep in mind their similarities in communication style, worldview construction, and argumentative logic. Religious discourse has shaped large swaths of human history, and at many points in time politics and religion were essentially interchangeable. Our collective memory still struggles to differentiate between the two, to whatever extent that is possible. 

Recognizing this is important because religious thinking and discourses vary greatly in ways that alter their impact on politics. One person’s chosen religious discourse might be very hardline, very old school, uncompromising, and even warlike; their politics may be this way as well. This is the religion that tends toward “othering,” creating “enemies.” There is also the religion that seeks to redeem everyone — and there are analogous political strategies to this type of thought as well. 

Two 20th century rhetorical theorists can help us understand this. The first is Richard Weaver, whose major work occurred in the 1950s. A conservative, Weaver was nevertheless critically reflective of his own belief system and others’. Weaver’s introduction of “god terms and devil terms” has become the core of what we would now call religious form, god-and-devil form, in political communication. Weaver described god terms as those words and phrases that were so positive that they “overpower” other language. In America, “freedom” is such a term. Devil terms, in contrast, were overpoweringly negative in connotation; for example, the word “communist” when Weaver was writing (and, some may argue, today), and the word “terrorism” decades later.  

The second theorist is Kenneth Burke, a remarkable and esoteric writer who also believed religion was imbued in form, and that such form was used in nonreligious communication too. Many have written in detail about Burke; I will not do so here. Important to this discussion, however, is Burke’s distinction between the comic and tragic frames. 

In the comic frame, we are imperfect beings trying to be better, believing we can be so and, thus, are redeemable. In the tragic frame, battles must be fought between good and evil, with one side or the other inevitably vanquished and, thus, only some of us are redeemable. The others are irredeemable.

For example, in his book The Rhetoric of Religion, Burke “refigures Augustine’s Confessions as a Platonist comedy,” instead of a tragic story of a sinner. “Tolerant charity” and “humble irony” can be read from Augustine’s Confessions, suggesting that we can all “confess” and be better people, and look back at ourselves with bemusement at how far we have strayed from the people we ought to be. What is dangerous, in Burke’s view, is treating that “tendency toward perfection” as a quest for order — which can quickly suggest the beginnings of fascism. “Burke warns us against this principle, which functions as much in the rhetoric of politics as in the rhetoric of religion.” 

The important point here is that ideological change, rejection of ideas or theories or various partisan sides, often retains the same forms, shapes, patterns, and most importantly methods, from past ideologies. You can be a hard-right MAGA supporter and use god/devil form, and you can be a left-wing revolutionary and also use god-devil form. This is one reason, though certainly not the only reason, why you see totalitarianism creep up on both the right and the left and sometimes (as was the case in 20th century Europe) at the same time. The conditions that favor god-and-devil rhetoric generate such rhetoric on both ends of the political spectrum.

The form of communication we choose (consciously or unconsciously) will influence the way we resolve, debate, and strategize around issues. If you’re a socialist who wants to overthrow capitalism, but you frame the conflict as an apocalyptic one in which the capitalists are in need of redemption or destruction, your speeches, pamphlets, and social media posts will reflect that “tragic frame,” a commitment to spiritual (manifested in this case as theoretical and political) warfare. 

There’s a vast difference between political activism that says someone is evil if they disagree with you and a more engaging activism that says we’re all in this together, we’re all imperfect, and we want your help to make the world better. 

This more redemptive frame doesn’t cast off the metaphysics of the past that inform our current approaches to political communication; but it can be far more effective for movement building. 

I think this ultimately justifies the kind of “deep canvassing” approach to on-the-ground political activism and campaigning, and the use of technologies by canvassers and organizers to update and remain in communication with voters, volunteers and organizational recruits. When technologies are used for listening and building from diverse experience rather than division and judgement, the resulting campaigns and organizations are dynamic regardless of their “success”. 

There is a potential for deep canvassing to literally “talk someone out of bigotry.” In 2018 in Massachusetts, a referendum on whether to keep an antidiscrimination law inspired some LGBTQ rights advocates to enter into nonjudgmental conversations with people they encountered in their door-to-door work who exhibited prejudice. This is based on a “comic,” universal-redemption frame rather than a “tragic” evil-must-be-eradicated frame. When voters chose to keep the antidiscrimination laws, some of those activists believed that the deep canvassing helped. Communicative frames, like elections, have consequences. 

This post is sponsored by Accurate Append, which provides high quality data append, phone append and email append services to campaigns and organizations so that they can most effectively reach potential voters, volunteers and supporters.

Facial Recognition, Politics and Communication

The well-known microbiologist, experimental pathologist, and environmentalist René Dubos once wrote: “There is a demon in technology. It was put there by man and man will have to exorcise it before technological civilization can achieve the eighteenth-century ideal of humane civilized life.” Recent discoveries that artificial intelligence can predict all kinds of characteristics of human faces — including the politics belonging to those faces — definitely bring to mind Dubos’s demon. 

“Facial recognition reveals political party in troubling new research,” reads the TechCrunch headline. What is disturbing about this development isn’t just the immediate threat to free association raised by combining facial recognition with political affiliation; it is how this all resembles phrenology and other really creepy racial- and appearance-based sciences from the past. 

The study concludes: “Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ.” Over a million faces were used in the sample, compared by similarities to like-minded others. Predictions were accurate “even when controlling for age, gender, and ethnicity.” The authors noted that the conclusions made them nervous about the relationship between technology and civil liberties.

The study was conducted by Michal Kosinski of Stanford University, who a few years ago also made headlines by linking facial recognition data to the prediction of one’s sexual orientation. In each case, of course, it’s not about finding a particular knob on the head, or a nose pointing this way or that. Instead, there are tendencies, sometimes subtle or nuanced, that become apparent in mountains of data collected over time. The end result feels the same, though: the way a computer “sees” you includes a prediction of your political affiliation, whether you candidly and vocally express it or not. 

Imagine oppressive regimes (imagine even semi-oppressive regimes such as our own) able to deploy facial recognition tech to identify someone as a “possible extremist” based on an algorithm. Imagine private information about those predicted (and unconfirmed) characteristics falling into the wrong hands so that certain actors can search and target members of groups they don’t like. There are plenty of existential, physical, and potentially lethal threats associated with this kind of technology. 

But beyond those obvious human rights dangers there is another cause for concern, loftier in assumption and intent: such technology could turn political engagement, political communication, on its head. Those devoted to political communication might choose to only talk to one side or the other, increasing polarization and scapegoating, as well as groupthink. 

Political communication — often supported by data append services (such as those offered by my client Accurate Append), apps for constituent or voter follow-up, or political surveys — currently does make informed predictions that categorize voters for the purposes of outreach; but this tech foundationally relies on an assumption of free will. Current users of Accurate Append’s phone append and email append services, for example, assume their outreach might persuade people. Those using canvassing apps can take advantage of a newer “deep” component: information can be recorded that reveals more than just a binary preference, thus more effectively targeting potential supporters. The polling facilitated by this technology may seem mechanistic, but it also aspires to be democratic, and being democratic means accepting that people can behave unpredictably and interrupt our perceptions of them. 

The polling is also not consistently accurate. Random guessing yields 50 percent accuracy, and humans supposedly come in at 55 percent (methodology not known). “The algorithm managed to reach as high as 71% accurate when predicting political party between two like individuals, and 73% presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative, one liberal).” That’s better, but in no way the final word. If you were to use these predictions for any political or policy-based purpose, you’d get a lot of false positives and false negatives. 
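To see why even the algorithm’s accuracy falls short for any serious targeting purpose, a bit of back-of-the-envelope arithmetic helps. This is a minimal sketch with an illustrative population figure of my own, not data from the study:

```python
# Rough sketch: expected misclassifications when a binary classifier
# of a given accuracy is applied to a large population. The function
# name and the one-million figure are illustrative assumptions.

def expected_errors(n_people: int, accuracy: float) -> float:
    """Expected number of wrong labels at a given accuracy."""
    return n_people * (1 - accuracy)

# Labeling 1,000,000 voters with a 72%-accurate classifier leaves
# roughly 280,000 of them mislabeled.
print(expected_errors(1_000_000, 0.72))
```

At that error rate, any list of “predicted liberals” or “predicted conservatives” would be salted with hundreds of thousands of wrong entries.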

Unfortunately, we don’t need to wonder whether facial recognition technology will bring about more repression, as it is currently being used by Immigration and Customs Enforcement (ICE) — regardless of the political affiliation, sexual orientation, or personal hardship story of any of the subjects. And this is a good illustration of the problem: one could (maybe with some difficulty) posit a hypothetical ICE that had not been accused of multiple kinds of human rights abuse. Perhaps that hypothetical ICE would be careful and sparing with the technology, and encourage its agents to more strongly follow human rights norms. But that’s not the ICE we have. So we should be careful about what kinds of technology we call “appropriate.” 

In fact, communities all over are speaking out against ANY facial recognition technology — it’s almost uniformly seen as anti-democratic, an intrinsic abuse of power and an unambiguous structural incursion into private life.  For example, Portland, Oregon recently passed “the most aggressive municipal ban on facial recognition technology so far,” prohibiting both city government and private companies from using facial recognition technology within city limits. That’s huge; Oakland, San Francisco and Boston have all banned their governments from using facial recognition tech, but Portland’s ban on corporate uses in public spaces breaks new ground.

The famous American financier Bernard Baruch had this to say about technology undermining our human agency: “During my eighty-seven years I have witnessed a whole succession of technological revolutions. But none of them has done away with the need for character in the individual or the ability to think.” All of which is to say that no matter how sophisticated the tech, it’s still up to us to fight for a just, equal, and kind world. 

From Operatic Blobs to Mental Illness Predictions, AI’s Getting Deeper

In his essay “Taming the Digital Leviathan: Automated Decision-Making and International Human Rights,” Malcolm Langford of the University of Oslo warns against the mystification of artificial intelligence. This mystification begins when we treat AI as something exceptional, alien, or definitively non-human. “Discussions of automation and digitalization should be guided by a logic of minimizing danger, regardless of whether its origin is machine or human,” Langford writes. After all, humans also have “black box” computations we don’t understand. As many philosophers of technology have pointed out, tech is an extension of our own bodies. It is more of us. 

I have been meditating on those points — the drive to demystify AI and the desirability of seeing it as an extension of our own selves — as I’ve been reading about the very latest developments in machine learning, AI-level prediction, and the whole new social universe enabled by nuanced computation. It can be great fun and ineffably profound at the same time, as Google’s new “blob opera” feature — four singing blobs who can harmonize and bend and play their voices like human singers — shows. 

The project began with recording four opera singers. Google’s AI team then “trained a machine learning model on those voice recordings.” In other words, the algorithm “thinks” that opera sounds like what the machine learned by listening, and the sounds that human listeners hear are the result of the learning, not of pre-programmed sounds to be “synthesized” by a synthesizer. The result is noises that “manage to approximate the gist of a true opera, in spirit if not lyrics.” (The blobs can only make vowel sounds, though there is beautifully subtle variation to each of the vowels).

“Blob opera” is a fun feature: download it and four adorable blobs appear in red (soprano), green (mezzo soprano), turquoise (tenor) and purple (bass). Their eyes follow your cursor around, engaging with your on-screen movement. The “room” is set with a dusty-sounding large-room reverb, which sustains the notes for a few fading seconds after the singer ceases. Stretching the blobs higher causes them to sing higher on the scale, and if you hold the notes for a long time, the blobs won’t run out of breath, but their voices will vary a bit, simulating the vibrato-by-necessity of real singers holding sustained notes. The important thing to remember here is that these are not synthesized recordings of singers; rather, the application is Google making its machine sing after it learned how to by “studying” opera. The control sensitivity is incredibly impressive for a Google interface activity. You can create unusual harmonies and discord and make the blobs sing in minutely short staccato notes or long sustained forays across vowel sounds. It’s better than any voice synthesizer function on music apps. 

How does machine learning work? It’s somewhere north of metaphor and south of literalism, but the theories of neuron interaction set out by Donald Hebb in 1949 in his book The Organization of Behavior give us a clue. “When one cell repeatedly assists in firing another,” Hebb wrote, “the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.” The relationship of the neurons, natural or artificial, influences each. Activate the neurons at the same time and they are all “stronger.” Activate them separately and they are weak. It’s their relationship that matters, and that’s how learning occurs — relationally on the inside (neurons influencing each other to build a whole greater than the sum of their parts) and the outside (processing external data to learn from). 
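As a toy illustration of Hebb’s principle — my own sketch, not the mechanism of any particular Google model — a single connection weight between two artificial neurons can simply grow whenever the two activate together:

```python
# Hebb's rule in miniature: the weight linking two neurons grows in
# proportion to their co-activation. Function names and the learning
# rate are illustrative assumptions.

def hebbian_update(weight: float, pre: float, post: float, lr: float = 0.1) -> float:
    """Strengthen the connection in proportion to co-activation."""
    return weight + lr * pre * post

w = 0.0
for _ in range(5):
    # Both neurons fire together: the connection strengthens.
    w = hebbian_update(w, pre=1.0, post=1.0)

# Only one neuron fires: the connection is left unchanged ("weak").
w_idle = hebbian_update(w, pre=1.0, post=0.0)
print(round(w, 2), round(w_idle, 2))
```

The relationship is everything: simultaneous activation accumulates strength, separate activation accumulates nothing, which is the relational learning Hebb described.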

As profound and healing as music is, other examples of AI have even greater healing and harm reduction potential. Although the impact of AI on cardiology is well-known, late in 2020 another amazing experiment emerged, this time in mental health. Wired reports that “On December 3, a group of researchers reported that they had managed to predict psychiatric diagnoses with Facebook data — using messages sent up to 18 months before a user received an official diagnosis.” 

Researchers used voluntary subjects’ past Facebook messages as predictive signals or flags. An AI program created the categories, and within those categories, certain message types indicated certain mood disorders (e.g., bipolar disorder or depression). The researchers’ interpretation of the flags “predicted” (that is, guessed the eventual diagnosis without knowing it) the conditions subjects actually had with accuracy “substantially better than would have been expected by chance.” 

Although it raises some questions about fairness and free expression, this type of personality prediction technology can also help social media sites like Twitter or Facebook predict which users will post and/or share disinformation. This was confirmed with the development of an AI system at the University of Sheffield, a system “that detects which social media users spread disinformation — before they actually share it.” Although some of the predictive tools seem intuitive (“Twitter users who share content from unreliable sources mostly tweet about politics or religion”) the conclusions were made after AI analyzed more than “1 million tweets from around 6,200 Twitter users,” an amount of data that would have stretched human limits. AI was also “trained” to “forecast” users’ propensity to spread disinformation. 

Of course, sometimes this all works in ways that still seem pretty clunky, and the results can be amusing. One researcher recently used a GPT-3 “trained” in late 2019 to analyze news stories from 2020 (including the really bizarre ones like murder hornets and monoliths found in national parks) and come up with predictions of its own for future news stories. The results included “Proof that a hellhound is living at Los Angeles Airport has been provided in the photos below … First naked bogman has been found out walking the great British countryside… Albino green sea monster filmed… at the wrong time… Scientists discover the alien ant farm under the Antarctic ice” and so on. My favorite, since I’m a sci-fi fan: “Lizardman: The Terrifying Tale of the Lizard Man of Scape Ore Swamp.” 

These are nerdily hilarious. “I really can’t tell if these are attempts to do novel but realistic headlines, or to completely goof around,” the researcher wrote. AI goofing around? Imagine that. But, fun is itself relational and AI does seem to have a sense of humor from time to time. Interestingly, humor can only really be learned through experience and contemplation. 

Though the “learning” inherent in all this feels novel, the ability for tech to process and organize this amount of data and information is nothing new. Computers have long been used to solve equations and calculate numbers that are too expansive or time-consuming for humans to do on their own. Other existing tech helps us organize data. Take data appending services like those offered by Accurate Append, for example. These services are able to organize data and fill in gaps that are very difficult for humans to do on their own. In many ways, AI is the natural evolution of existing technology, and it allows us to maximize our own abilities. As I said earlier, it is simply more of us.

Human Error Has Always Happened in Election Tabulation

In their book Human Error: Cause, Prediction, and Reduction, John W. Senders and Neville P. Moray define human error as something “not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits.” There are many ways to commit human error, and a number have been catalogued. We might commit errors in problem detection, diagnosis, planning, and execution. We might mischaracterize the level of analysis required to assess and address a problem. We might erroneously or unreflectively rely on certain equipment. We might commit groupthink. Latent human errors are those made as a result of systems or routines formed in such a way that humans are likely to make them. Research in ergonomics shows that adverse psychological states, physical and mental limitations, and coordination or communication quality all factor into the likelihood of latent human errors. 

Oftentimes, we can rely on technology to mitigate such mistakes. Take data appending services like our client Accurate Append, for example. Among other services, Accurate Append offers phone appending to help build out robust databases and minimize human error.

It’s the nature of extremist politics, however, to conflate human error with intentional conspiracy. Amid an endless spew of conspiracy-mongering and controlled rumoring by the Trump campaign in the wake of Joe Biden and Kamala Harris’s presidential election win, there appears a common theme: the “system” is “rigged.” Ghostly images of truckfuls of (presumably forged) ballots; claims that election officials and vote-counters purposely threw out or altered votes; the allegation that warehouses full of late-arriving ballots favored the Biden-Harris campaign: all these rumors and charges suggest intentionality, and an insistence that nefarious things happened in the election because of nefarious people. 

Take the case of ballot error-spotting in Michigan during the counting of presidential votes, for example. In Oakland County, election officials reported unofficial counts before spotting an error they had made. They had counted one city, Rochester, twice. The county had used software to help count (though not the Dominion software the far-right is talking about), but that wasn’t the problem. The issue lay in how election officials had been trained on and had oriented themselves around the software. 

When people like Republican National Committee Chairwoman Ronna McDaniel call for election probes into Michigan without offering proof, they may refer to errors that were made and then corrected. They may point to unexplainable irregularities. They may even vaguely allude to voter suppression efforts for which they themselves are responsible. Regardless, they are cognitively framing errors and irregularities as intentional acts of conspiracy. 

In another instance of human error, employees at Detroit satellite voting locations forgot to enter ballots’ dates of receipt into the Qualified Voter File (the state’s election computer system). The actual ballots were stamped with the date they were received, however, and election officials declared it was an easily remediable “clerical error”. McDaniel accused election officials of “backdating” ballots when fixing these errors. 

Intentional manipulation of the electoral process is a security issue, not a human error issue, latent or otherwise. And where security is concerned — despite talk of lax measures — it was better in 2020 than in 2016. Four years of public-private partnership in security methods meant technology was upgraded, information was shared, resiliency plans were enacted, and redundancy was established. As a result, the 2020 election was very successful if measured by how little disruption occurred. Some new measures guarded against human error too: in Chesterfield County, Virginia, a utilities crew accidentally cut a cable, rendering voter registration databases inaccessible to county election officials. Early voters were given provisional ballots to be checked against voters’ registration confirmations. 

This isn’t to say that human error is the only error in elections. The 2000 Florida election debacle resulted from mistaken reliance on machines counting glitchy ballots (the ones with hanging chads that sometimes prevented accurate counting) as well as the confusing nature of the butterfly ballot — an “error” on the part of the designers of the ballot. A recent article in Governing notes that New York state had relied on a “cut-and-add process” in which individual jurisdictions downloaded ballot counts onto flash drives that were then sent to a central computer for counting. That process created a scenario in which 200,000 uncounted votes were found a month after the 2010 election.

But without direct evidence of harmful intent, we must assume any 2020 errors were human errors. They can be egregious, and they can seriously impede the process, but they cannot be seen as deliberate unless we can demonstrate that they are. Why, then, do people jump the gun and declare mistakes to be malicious? It turns out that when support for certain people or causes leads to strong group cohesion, those who identify with said group are more likely to attribute harmful intent or read conspiracy into human error. Recently, three researchers from University College London published the results of a study of group cohesion and the attribution of harmful intent in “conspiracy” thinking. It was group cohesion, the authors concluded, that locks people into attributing harmful intent. In other words, the more people mobilize behind charismatic leaders or ideological extremes, for example, the more likely they are to attribute harmful intent to others’ actions. 

Boredom also plays a role in spreading conspiracy theories. More accurately, according to another team of researchers from London, when boredom and paranoia combine, conspiracy theories spread more often. This makes sense in the case of fraud claims in the 2020 election: the purveyors of many of these conspiracies seem to have a lot of time on their hands. 

One thinks, of course, of Richard Hofstadter’s groundbreaking and still accurate 1964 piece, “The Paranoid Style in American Politics.” Hofstadter writes: “I call it the paranoid style simply because no other word adequately evokes the sense of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind . . . It is the use of paranoid modes of expression by more or less normal people that makes the phenomenon significant.”

There may be a time, in some distant future, where the paranoid style melts away and people see human error for what it is — a much more forgiving time indeed. But 2020 is not that time.

YouGov Chat: An Experience in Interactive Polling

Trying to reach constituents, voters, and potential supporters via electronic communication technology is daunting for the inexperienced and frustrating for the experienced campaigner. People are so close and yet so far: close enough that you can establish an electronic dialogue with them, but too far away to screen for unspoken sentiments or take advantage of that one pause that lets in everything the candidate is trying to sell. Every aspect of public opinion engineering is difficult, whether it’s gathering data (which may be done through phone append by companies like Accurate Append) or interpreting said data into neat little packages. Accordingly, there’s a special place in my heart for those who brave public opinion engineering out of a love for people or a love of the game. 

I bring this up because I’ve finally been chat-botted for a reason other than customer service, and frankly speaking, it was not terrible. That’s right: recently, I consented to participate in a YouGov chat, a simple interactive survey conducted via an online chatbot. YouGov surveys internet users on a variety of topics. Mine was about the Trump administration and how vulnerable it has become to criticism. Although I was preoccupied, I readily took the survey or, rather, co-participated in a conversation between a bot and a political junkie. 

I have a long history of qualified respect for public opinion polls. I have worked for pollsters, I have answered telephone, in-person, and internet surveys, and I’ve crunched the numbers turning other people’s surveys into conclusions. 

A lot of times I think the polls create, rather than reflect or record, public opinion. The question of the credibility of opinion poll results is often relegated to the folk science section of our collective mind. Richard Seymour’s Guardian piece from last year states more eloquently what many people suspect — that many pollsters mitigate against the risk of not interviewing who they want through “their selection of who to interview, and how to weight the results. . . . in conditions of political uncertainty, these assumptions start to look like what they are: guesswork and ideology.” Seymour’s Guardian article expresses my biggest concern with polling: “By relying on past outcomes to guide their assumptions, pollsters made no room for political upsets,” he writes. “Since poll numbers are often used as a kind of democratic currency – a measure of “electability” – the effect of these methodological assumptions was to ratify the status quo. They reinforced the message: ‘There is no alternative.’ Now that there are alternatives, polling firms are scrabbling to update their models.”

A few years ago, Jill Lepore wrote a good history of public opinion polling in America. She pointed out that such polling began during the Great Depression, and the response rate at the time (when, presumably, opinion workers put far more time into individualized targets of response) was over 90%. Lepore pointed out that “the lower the response rate the harder and more expensive it becomes to realize” the promise of representation. By the 1980s, when the response rate had fallen to sixty percent, pollsters were worried about it falling further. But the rate now is “in the single digits,” Lepore wrote, and pollsters still see value in conducting them.
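The arithmetic behind Lepore’s cost point is simple: a falling response rate inflates the number of contact attempts needed to gather the same amount of opinion. A hypothetical sketch (the target of 1,000 completed surveys and the exact rates are my own illustrative assumptions, not Lepore’s figures):

```python
# As the response rate falls, the number of contact attempts needed
# to collect a fixed number of completed surveys (and therefore the
# cost) balloons. Figures below are illustrative assumptions.

def attempts_needed(completed: int, response_rate: float) -> float:
    """Expected contact attempts to reach a target number of responses."""
    return completed / response_rate

print(attempts_needed(1000, 0.90))  # ~1,111 attempts at a 90% rate
print(attempts_needed(1000, 0.06))  # ~16,667 attempts at a 6% rate
```

Fifteen times the labor for the same thousand voices is why single-digit response rates worry pollsters so much.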

The source of that value, even in a single-digit world, is the ultimate precarity and close-call nature of modern political campaigns. Even a tiny move of the needle, a few votes here or there, can swing an election. Even a minimally successful PR campaign can bring in one or two extra donors, and that can swing a close campaign. And even minimal data can be added to other collections of data in the service of big data analytics. 

Traditionally, that data was collected using call centers whose operators would call a representative sample of people and ask them questions on scales, binaristic true/false forms, and more. But YouGov’s surveys are different. They are based on the interactivity of chatbots, which are primarily viewed as marketing utilities rather than instruments of public deliberation. 

The dynamic of the exchange was interesting. In these bot interactions, if they’re done well, one forgets, but does not completely forget, that one is dealing with a computer program possessing no “real” personality or consciousness. The right programming and wording can create such a personality, at least in limited ways. This was no exception. The main issue in the survey I took was Donald Trump’s commutation of Roger Stone’s sentence. YouGov would periodically inform me that “most Americans” felt either in agreement with my position or at odds with it, as a way of putting my own opinions into perspective. 

Moreover, YouGov would ask me if I wanted to keep going after clusters of answers. It seemed friendly and appreciative of my participation. It was as if part of me were genuinely convinced I was talking to a program with some degree of autonomy and personality. 

Not everything about the survey or the experience taking it was perfect. The only political parties I could choose from were Republican, Democrat, Independent, or Other. Similarly, the question concerning political orientation had no position to the left of “very liberal.” This raises the question of how useful political affiliation categories really are. It’s easy to argue that these categories are sufficient for survey purposes; third parties poll very low nationally even when their candidates receive a lot of media exposure, whether seen as credible leaders or not. And what’s the practical difference between labeling someone “very liberal” and a socialist? So I get that, but at a time when a few hundred votes might decide a large election outcome, or when groups like the Democratic Socialists of America can influence races with national importance, it may be time to expand those categories. Few pollsters seem eager to do that. 

One last complaint: at one point I made an error, typing incomplete gibberish and accidentally sending it. There was no way to backtrack in the poll, even though it was asking for some fairly detailed answers.

But did I feel like my voice had been heard? Yes, more so than with single binary or Likert-scale answers. Perhaps with more interactivity, and even more data-driven proactivity from the bot, chatbot-based political opinion-gathering could become truly dialogical, allowing the kind of relational interaction found between spaceship crews and their sentient computers. Well, perhaps not quite that much, but an impressive amount compared to what we were capable of during the Great Depression.
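The survey mechanics described above can be sketched in a few lines of code. This is a hypothetical illustration, not YouGov’s implementation: the question wording, response options, and “most Americans” positions are all invented for the example.

```python
"""Minimal sketch of a chatbot-style opinion survey loop."""

# Hypothetical question script; the "majority" field stands in for the
# aggregate data a real pollster would use for the "most Americans" reply.
QUESTIONS = [
    {
        "prompt": "Do you agree with the commutation of the sentence?",
        "options": ["agree", "disagree", "not sure"],
        "majority": "disagree",
    },
    {
        "prompt": "Should the decision have required congressional review?",
        "options": ["yes", "no", "not sure"],
        "majority": "yes",
    },
]

def run_survey(answer_fn):
    """Ask each question and echo back how the respondent compares to
    the (hypothetical) majority. answer_fn(prompt, options) supplies the
    respondent's answer, e.g. from a chat interface."""
    transcript = []
    for q in QUESTIONS:
        answer = answer_fn(q["prompt"], q["options"])
        if answer not in q["options"]:
            # Like the real survey, there is no backtracking:
            # unrecognized input is coerced rather than re-asked.
            answer = "not sure"
        if answer == q["majority"]:
            feedback = "Most Americans agree with you."
        else:
            feedback = "Most Americans feel differently."
        transcript.append((q["prompt"], answer, feedback))
    return transcript
```

A real system would also handle the “do you want to keep going?” check between question clusters, but even this skeleton shows where the conversational feedback comes from.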

Bad-Faith Speakers Like Trump Let Audience Fill in the Blanks

Way back in 2007, a GOP political strategist on a cable news discussion show said of the then-longshot presidential candidate, “I don’t think the Republicans have anything to fear from Barack Hussein Obama.” His voice emphasized the word “Hussein.” The meaning was obvious–or somewhat obvious, and that was the point. The American people would never elect a presidential candidate whose middle name matched the surname of an enemy of the U.S., and an obviously Muslim name at that. Because actually saying so would have been uncouth, the speaker could always plead that he hadn’t said it. He could claim credit for the argument while avoiding liability for it.

Donald Trump has insulted hundreds upon hundreds of people, places and things. 

Among them are a few instances where he makes only half an argument. Speaking of Chief Justice John Roberts, he said “my judicial appointments will do the right thing, unlike . . . Roberts.” Missing is the premise that the “right thing” is to uphold the administration’s agenda through aspirationally neutral jurisprudence. Trump doesn’t need to say that part. The audience fills it in.

These political figures, polemicists, and commentators are using one of the oldest tricks in the rhetorical book: the enthymeme. An enthymeme is an argument with a “suppressed premise” that the audience is expected to fill in, making listeners participants in the rhetorical process that culminates in a common conclusion. It’s the invisible portion of an argument that acts as a wink, a nudge, and a psychic prompt to the audience. It’s less scientific than a data append, but more powerful because of the seed it leaves with the listener.

As scholar of political communication Kathleen Hall Jamieson points out, enthymemes often come in the form of visual arguments: because pictures convey nonverbal or unstated meanings, the very act of showing them can be enthymematic. In Bill Clinton’s race against Bob Dole, such picture-arguments could convince audiences that the party putting them out was more aligned with the observer’s values than the other party.

If it seems like enthymemes are well suited to racist or otherwise prejudicial arguments, consider the concept of “prejudice.” To pre-judge, one must already accept the framework under which the target of the racism, or whatever the prejudice may be, is to be judged. There’s a word for this that’s far less loaded than bigotry and covers a bigger tent of meaning: topoi, or “the common places.” These are conceptual, historical, interpretive, and yes, provincial and often biased “spaces” of shared values, meaning, and history.

When that shared meaning is benevolent, or appeals to deeply held and sacred non-hateful values, the experience of co-creating meaning between speaker and audience can be beautiful. I might invoke a phrase like “the greatest generation” or “they gave their last full measure of devotion” when describing heroes not literally affiliated with the Second World War or the Union side of the Civil War, and my audience will know I am implying that the people I am describing are incredibly heroic.

But when, as so often happens, the enthymeme is used by cynical speakers to cover their tracks while otherizing or dehumanizing their political enemies, it becomes a frustrating method of evasion. I might try to point out that the speaker implied that members of a certain race are lazy or corrupt, or unworthy of full constitutional protections, but the speaker’s defenders can always say “he never said that–this must be your problem.” In this way, the enthymeme is the tool of alt-right irony-purveyors and the old boys’ network–spaces that feign non-seriousness in order to justify real atrocities. It’s that non-seriousness, that possibility that the arguer doesn’t actually believe what they are halfway implying, that makes dialogue impossible. “Where a statement is explicitly made in a clear way by an arguer,” writes one philosopher in the Journal of Applied Logic, “either as an assertion or part of an argument, normally the commitment rule operates in a clear and precise fashion. But there are all kinds of borderline and dubious cases when it comes to dealing with implicit commitments. There can be all kinds of problems, for example when an argument has not been quoted but paraphrased, or where an implicit assumption may be needed to make the argument valid, but where the proponent may not only have not stated that assumption, but may even disagree with it.”

This, then, is the very definition of arguing in bad faith. The hate monger will deliberately slide in a veiled racial epithet or violent threat (letting the audience do the heavy lifting, since they won’t take the blame either) and then deny making it. “I never said members of the Democratic Party should be killed. I only said that George Washington knew what to do with traitors!”

Modern ad distribution tactics that rely on deeply personal behavioral data to build targeting models—including Facebook political advertising—have allowed bad faith arguments and misinformation to reach just those most likely to accept them. This is in some contrast to the larger political mail runs or income and job-based data supported by append vendors like Accurate Append (client).
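The targeting logic that makes this possible is, at its core, simple filtering. The toy sketch below is invented for this article (the interest sets and threshold are made up) and only illustrates the principle: score each user by how closely their tracked behavior matches a message’s themes, and deliver the message only to those most likely to accept it.

```python
def match_score(user_interests, message_themes):
    """Fraction of the message's themes present in the user's profile."""
    overlap = user_interests & message_themes
    return len(overlap) / len(message_themes)

def target_audience(users, message_themes, threshold=0.5):
    """Return the ids of users whose behavioral profile crosses the
    threshold -- i.e., those most receptive to the message."""
    return [uid for uid, interests in users.items()
            if match_score(interests, message_themes) >= threshold]

# Hypothetical tracked-interest profiles, not real data.
users = {
    "a": {"immigration", "guns", "taxes"},
    "b": {"gardening", "cooking"},
    "c": {"immigration", "taxes"},
}
audience = target_audience(users, {"immigration", "taxes"})
```

A broad mail run would reach all three households; behavioral targeting reaches only “a” and “c,” the two profiles already primed for the message. That selectivity is exactly what lets a bad-faith argument find the audience most willing to fill in its suppressed premise.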

There is no easy way to combat this kind of rhetoric head-to-head, although it’s refreshing to see people try. Instead, political candidates, speakers and advocates who do not want to be affiliated with bigotry or intolerance may need to do the correct-but-awkward and somewhat fantasy-dashing thing: speak in plain and sincere language, repeatedly reaffirm their good intent, and call out enthymemes that are being used to spread racism and other bad arguments, even when the calling out is answered by half-smiling denial. Rhetorician Matthew Jackson writes that “the mercurial quality of whiteness works more insidiously as a morphing sphere of shifting and dynamic power relations with a political commitment to white supremacy . . . one might ask, ‘Then how do we fight it?’ I think part of the answer, as activists have suggested for centuries, is in not allowing arguments for white supremacy to continue unidentified, unanswered, unresolved, and therefore efficacious.” That’s difficult work, but it seems to be the only alternative to letting those hidden premises slide.