Web-Based Cartography, Power and Community-Building

We were interested in the news that the Chinese phone manufacturer Huawei recently signed a deal with TomTom, the Dutch digital mapping company, for an alternative to Google Maps.

TomTom has been an often unsung but never ignored force in map applications. It has several self-branded products on iOS and Android devices. But Huawei will be building its own application, a mapping product, using TomTom maps: a kind of secondary app built on TomTom’s underlying map data. TomTom is no stranger to this kind of use. In the past the company provided data for Apple Maps, making itself part of a similarly “shambolic” patchwork of mapping apps. Huawei intends to build a full-on system, a “Map Kit,” using data from Yandex, a Russian tech company. The TomTom deal will either serve as a bridge to Map Kit or be integrated into it.

Why can’t Huawei just rely on Google Maps? Well, here’s where it gets interesting. Last year, the Trump administration placed sanctions on Huawei. In May, Trump “issued an executive order barring US companies from using information and communications technology from anyone considered a national security threat” and included Huawei in its list of dangerous entities. That complicated relations with Google, even though the application of administration sanctions has been uneven and uncertain. Businesses hate uncertainty, after all. Meanwhile, Huawei is building its own operating system, which it calls HarmonyOS, and is using the TomTom deal to further reduce its reliance on Google.

What’s especially interesting about this is that trade policies, domestic politics really, and the Trump administration’s confrontational approach to international relations are going to literally change the way people look at the world. There are nuanced differences between the way Google and other companies map things. The methods, devices, and features of interactivity vary across systems and platforms. Many of the differences will be subtle, but they will still be differences, and over time, their aggregate will grow. However nuanced the effect, politics will determine mapmaking differentials.

Historically, cartography has been a ruling-class sphere. If the first task or problem of cartography is “map agenda-setting,” in which the features and area of the place to be mapped are selected, then historically, that task has privileged the kingdoms, countries, states, and localities with the authority and resources to make the maps. Likewise, cartographers eliminate characteristics and areas deemed irrelevant to the purposes of the maps and materials being created. Even the very act of generalization—creating categories of landscapes and ocean depths and similar profiles—relies on areas of both informational and material authority that perpetuate and emerge from great powers. Maps, critical scholars tell us, are “sites of power-knowledge,” and judgments about which maps are best “arise from privileged discourses.”

All of which suggests that what we’re seeing in the escalation of trade disputes is a series of power-shifts that will again exercise influence on how we map the world. But now, instead of the changes being reflected in model globes and poster-sized maps in school rooms and offices, they will be reflected in millions upon millions of digital devices guiding people around in their cars or from their homes. 

So what does the business end of big-data-informed, satellite-sourced, web-based mapmaking get us? What social tendencies does it reproduce and strengthen?

In many ways, web-based mapping has created an interesting piece of sociological data: it gives us more information about our surroundings and communities, but it also entrenches us as solitary beings in our vehicles and domiciles. We can access information about what’s out there, what’s around us. But then we tap our screens to navigate through a place so we don’t have to stop and ask for directions. We can order meals, groceries, office supplies, and other goods and services using apps that link from these maps. We never have to “be in public.” It doesn’t turn us into recluses, necessarily, but it changes our social dynamics.

But consumer-based or user application-based maps are not the only way we’re seeing technological progress affect cartography. Another area that has emerged stronger because of big data and satellite technology is called collaborative mapping, and collaborative mapping is helping humanity re-define cooperative work in new ways. Collaborative mapping has the potential to bring diverse peoples and communities together rather than isolating us. 

Collaborative mapping uses open-source collaborative software to create and continually develop (and thus never really finish) collective maps. Maps are created collaboratively on a shared “surface.” Problems sometimes occur when people with concurrent access compete for mutually exclusive data. This might happen at the same time, or it might be a matter of one party correcting a previous iteration of the map, potentially causing conflicts in interpretation. But just as commons-based encyclopedias like Wikipedia and other common knowledge libraries have (admittedly imperfect) procedures for dealing with these conflicts, collaborative mapping can develop such procedures too. The outcome is still likely to be a more egalitarian knowledge base, and we now know that many hands and many heads not only make light work but also make for better epistemology and better knowledge.
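The concurrent-editing problem described above is commonly handled with a version check: an edit is accepted only if it was based on the map’s current state, and is otherwise flagged for human review rather than silently overwriting someone else’s work. Here’s a minimal sketch of that idea; the class names and workflow are invented for illustration and don’t reflect any particular mapping platform:

```python
class MapFeature:
    """One editable item on the shared map, with a version counter."""
    def __init__(self, feature_id, tags):
        self.feature_id = feature_id
        self.tags = dict(tags)
        self.version = 1

class ConflictError(Exception):
    pass

class SharedMap:
    def __init__(self):
        self.features = {}

    def add(self, feature):
        self.features[feature.feature_id] = feature

    def edit(self, feature_id, new_tags, based_on_version):
        """Apply an edit only if it was based on the feature's current
        version; otherwise raise a conflict for human review."""
        feature = self.features[feature_id]
        if based_on_version != feature.version:
            raise ConflictError(
                f"{feature_id} changed since version {based_on_version}")
        feature.tags.update(new_tags)
        feature.version += 1
        return feature.version

m = SharedMap()
m.add(MapFeature("bridge-17", {"name": "Old Bridge"}))

# Two contributors both start their edits from version 1.
m.edit("bridge-17", {"surface": "wood"}, based_on_version=1)  # accepted
try:
    # The second edit was based on a now-stale version, so it conflicts.
    m.edit("bridge-17", {"surface": "steel"}, based_on_version=1)
except ConflictError:
    print("conflict flagged for review")
```

This is the same optimistic-concurrency pattern that wiki software uses for edit conflicts: rather than locking the map, the system lets everyone work and catches collisions after the fact.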

Based on the work of Christopher Parker and colleagues, as well as many other researchers, we are learning that collaborative mapping can do a number of other socially positive things. The public-oriented consciousness that one might expect from collaborative mapping work is also found in many of its practical applications. For example, the potential of collaborative mapping extends to creating ease-of-access maps and guides for people with limited mobility. A crowd-sourced “mashup” of mapping data can be created to provide that information on an accessible and free platform. Researchers say that it’s a challenge to collect that “subjective” data in the first place, but they are coming up with many potential solutions, from pre-populating local regions and then expanding coverage from there, to creating mashups without the crowd-sourced data and then creating ways for users to easily add their own data to the collective pool.
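As a rough illustration of the “mashup” idea, crowd-sourced accessibility reports can be overlaid on a base map layer, with unreported places pre-populated as “unverified” so users can easily add their own observations later. The place names, fields, and data below are all hypothetical:

```python
# Base map layer: places of interest without accessibility info.
base_map = [
    {"id": "station-a", "name": "Central Station"},
    {"id": "lib-1", "name": "City Library"},
]

# Crowd-sourced reports keyed by place id (only partial coverage so far).
crowd_reports = {
    "station-a": {"step_free": True, "elevator": True},
}

def build_accessibility_layer(base, reports):
    """Merge crowd reports onto the base layer; places with no report
    yet are marked unverified, inviting users to contribute."""
    layer = []
    for place in base:
        entry = dict(place)
        entry["accessibility"] = reports.get(
            place["id"], {"status": "unverified"})
        layer.append(entry)
    return layer

layer = build_accessibility_layer(base_map, crowd_reports)
```

The design choice worth noticing is that missing data is surfaced explicitly rather than hidden, which is what makes the “start sparse, let the crowd fill it in” strategy workable.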

For people who study the history of mapping and cartography, the idea of egalitarian, collectively created maps is pretty revolutionary. Immediately following the end of World War One, there emerged a deep dissatisfaction with territorial thinking. Progressive-minded activists and scholars began searching for alternative ways of “viewing the world,” both in the paradigmatic sense (worldview) and the cartographic sense (criticizing the construction of borders and the representations of the world that show up on maps; think of the conversation about why the “north” is “on top,” and that kind of thing). It’s worth considering whether activists can forge links between cooperative map-making and cooperative international and other political relationships.

The building of that political power, and those relationships, will also be continually shaped by how we utilize data: from geo-targeted ads on social media that help turn out voters, to using partners like our client Accurate Append, a phone, email, and data vendor, to get the contact info campaigns need to text and call voters. Data and tech are shaping every aspect of our future, from how we navigate our world to how we change it.

YouGov Chat: An Experience in Interactive Polling

Trying to reach constituents, voters, and potential supporters via electronic communication technology is daunting for the inexperienced and frustrating for the experienced campaigner. People are so close and yet so far: close enough that you can establish an electronic dialogue with them, but too far away to screen for unspoken sentiments or take advantage of that one pause that lets in everything the candidate is trying to sell. Every aspect of public opinion engineering is difficult, whether it’s gathering data (which may be done through phone append by companies like Accurate Append) or interpreting that data into neat little packages. Accordingly, there’s a special place in my heart for those who brave public opinion engineering out of a love for people or love of the game.

I bring this up because I’ve finally been chat-botted for a reason other than customer service, and frankly, it was not terrible. That’s right: recently, I consented to participate in a YouGov chat, a simple interactive survey conducted via an online chatbot. YouGov surveys internet users on a variety of topics. Mine was about the Trump administration and how vulnerable it has become to criticism. Although I was preoccupied, I readily took the survey or, rather, co-participated in a conversation between a bot and a political junkie.

I have a long history of qualified respect for public opinion polls. I have worked for pollsters, I have answered telephone, in-person, and internet surveys, and I’ve crunched the numbers turning other people’s surveys into conclusions. 

A lot of times I think the polls create, rather than reflect or record, public opinion. The question of the credibility of opinion poll results is often relegated to the folk science section of our collective mind. Richard Seymour’s Guardian piece from last year states more eloquently what many people suspect — that pollsters manage the risk of unrepresentative samples through “their selection of who to interview, and how to weight the results. . . . in conditions of political uncertainty, these assumptions start to look like what they are: guesswork and ideology.” The same article expresses my biggest concern with polling: “By relying on past outcomes to guide their assumptions, pollsters made no room for political upsets,” Seymour writes. “Since poll numbers are often used as a kind of democratic currency – a measure of “electability” – the effect of these methodological assumptions was to ratify the status quo. They reinforced the message: ‘There is no alternative.’ Now that there are alternatives, polling firms are scrabbling to update their models.”
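The weighting Seymour describes can be made concrete with a toy example: if a sample over-represents one group relative to assumed population shares, each respondent is up- or down-weighted so the weighted sample matches those shares. The point is that the “population shares” are themselves assumptions, which is exactly where the guesswork he criticizes enters. The groups and numbers below are invented:

```python
from collections import Counter

def poststratify(sample_groups, population_shares):
    """Return one weight per respondent so the weighted sample matches
    the assumed population composition (simple post-stratification)."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    return [
        population_shares[g] / (counts[g] / n)
        for g in sample_groups
    ]

# 8 respondents: young voters over-sampled (75% of the sample versus an
# assumed 50% population share).
sample = ["young"] * 6 + ["old"] * 2
weights = poststratify(sample, {"young": 0.5, "old": 0.5})
# Young respondents are down-weighted to 0.5/0.75 and old respondents
# up-weighted to 0.5/0.25, so two people end up "speaking for" half the
# weighted electorate.
```

Real pollsters weight on many variables at once (often via raking), but the dependence on assumed shares is the same.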

A few years ago, Jill Lepore wrote a good history of public opinion polling in America. She pointed out that such polling began during the Great Depression, when the response rate (at a time when, presumably, opinion workers put far more effort into individual respondents) was over 90 percent. Lepore pointed out that “the lower the response rate the harder and more expensive it becomes to realize” the promise of representation. By the 1980s, when the response rate had fallen to 60 percent, pollsters were worried about it falling further. The rate now is “in the single digits,” Lepore wrote, yet pollsters still see value in conducting polls.

The source of that value, even in a single-digit world, is the ultimate precarity and close-call nature of modern political campaigns. Even a tiny movement of the needle, a few votes here or there, can swing an election. Even a minimally successful PR campaign can bring in one or two extra donors, and that can swing a close campaign. And even minimal data can be added to other collections of data in the service of big data analytics.

Traditionally, that data was collected by call centers whose operators would call a representative sample of people and ask them questions on scales, binary true/false items, and more. But YouGov’s surveys are different. They are based on the interactivity of chatbots, which are primarily viewed as marketing utilities rather than instruments of public deliberation.
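A chatbot survey like this can be thought of as a small graph of questions whose edges depend on the answers given. The sketch below is a deliberately simplified stand-in; YouGov’s actual system is certainly more sophisticated, and the questions and node names here are invented:

```python
# Each node has a prompt and a map from answers to the next node.
SURVEY = {
    "start": {
        "prompt": "Do you follow national politics? (yes/no)",
        "next": {"yes": "approval", "no": "thanks"},
    },
    "approval": {
        "prompt": "Do you approve of the administration? (yes/no)",
        "next": {"yes": "thanks", "no": "thanks"},
    },
    "thanks": {"prompt": "Thanks for participating!", "next": {}},
}

def run_survey(answers):
    """Walk the survey graph, recording each (question, answer) pair;
    the `answers` list stands in for live user input."""
    node, transcript = "start", []
    for answer in answers:
        transcript.append((node, answer))
        node = SURVEY[node]["next"].get(answer, "thanks")
        if node == "thanks":
            break
    return node, transcript

end, transcript = run_survey(["yes", "no"])
```

Branching is what makes the exchange feel conversational: unlike a fixed questionnaire, the next prompt depends on what the respondent just said.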

The dynamic of the exchange was interesting. In these bot interactions, if they’re done well, one forgets, but does not completely forget, that one is dealing with a computer program possessing no “real” personality or consciousness. The right programming and wording can create such a personality, at least in limited ways. This survey was no exception. Its main issue was Donald Trump’s commutation of Roger Stone’s sentence. YouGov would periodically inform me that “most Americans” felt either in agreement with my position or at odds with it, as a way of putting my own opinions into perspective.

Moreover, YouGov would ask me if I wanted to keep going after clusters of answers. It seemed friendly and appreciative of my participation. It was as if part of me were genuinely convinced I was talking to a program with some degree of autonomy and personality. 

Not everything about the survey or the experience of taking it was perfect. The only political parties I could choose from were Republican, Democrat, Independent, or Other. Similarly, the question concerning political orientation had no position to the left of “very liberal.” This raises the question of how useful political affiliation categories really are. It’s easy to argue that these categories are sufficient for survey purposes; third parties poll very low nationally even when their candidates receive a lot of media exposure, whether or not they’re seen as credible leaders. And what’s the practical difference between labeling someone “very liberal” and a socialist? So I get that, but at a time when a few hundred votes might decide a large election outcome, or when groups like the Democratic Socialists of America can influence races with national importance, it may be time to expand those categories. Few pollsters seem eager to do that.

One last complaint: at one point I made an error, typing some incomplete gibberish and accidentally sending it. There was no way to backtrack in the poll, even though the poll was asking for some detailed answers.

But did I feel like my voice had been heard? Yes, more so than in singular binary or Likert scale answers. Perhaps with more interactivity, even more data-driven bot proactivity, chatbot-based political opinion-gathering could become extremely dialogical, allowing the kind of relational interaction found between crews on spaceships and their sentient computers–well perhaps not quite that much, but an impressive amount compared to what we were capable of doing during the Great Depression.

Bad-Faith Speakers Like Trump Let Audience Fill in the Blanks

Way back in 2007, a GOP political strategist on a cable news discussion show said of the then-longshot presidential candidate, “I don’t think the Republicans have anything to fear from Barack Hussein Obama.” His voice emphasized the word “Hussein.” The meaning was obvious–or somewhat obvious, and that was the point. The American people would never elect a candidate for president whose middle name was the same as the surname of an enemy of the U.S., and an obviously Muslim name. Because actually saying that would have been uncouth, the speaker could always plead that they hadn’t said it. They were able to claim the credit, and avoid the liability, for the argument. 

Donald Trump has insulted hundreds upon hundreds of people, places and things. 

Among them are a few instances where he makes half an argument. Speaking of Chief Justice John Roberts, he said “my judicial appointments will do the right thing, unlike . . . Roberts.” Missing is the premise that the “right thing” is to uphold the administration’s agenda through their aspirationally neutral jurisprudence. Trump doesn’t need to say that part. The audience fills it in.

These political figures, polemicists, and commentators are using one of the oldest tricks in the book, rhetorically speaking. They are using enthymemes. An enthymeme is an argument with a “suppressed premise” that is filled in by the audience, or expected to be filled in by the audience, in order to make them participants in the rhetorical process that culminates in reaching a common conclusion. It’s the invisible portion of an argument that acts as a wink, a nudge, and a psychic prompt to the audience. It’s less scientific than a data append, but more powerful because of the seed it leaves with the listener.

As scholar of political communication Kathleen Hall Jamieson points out, enthymemes often come in the form of visual arguments: because pictures convey nonverbal or unstated meanings, the very act of showing them can be enthymematic, and, as in Bill Clinton’s race against Bob Dole, such picture-arguments can convince audiences that the party putting them out is more aligned with the observer’s values than the other party is.

If it seems like enthymemes are great for racist or other prejudicial arguments, well, consider the concept of “prejudice.” For one to pre-judge, one must already agree with the framework under which the target of the prejudice is to be judged. There’s actually a word for this that’s a lot less loaded than bigotry, and provides a bigger tent of meaning: topoi, or “the common places.” These are conceptual, historical, interpretive, and yes, provincial and often biased “spaces” of shared values, meaning, and history.

When that shared meaning is benevolent, or appeals to deeply held and sacred non-hateful values, the experience of co-creating meaning between speaker and audience can be beautiful. I might invoke a phrase like “the greatest generation” or “they gave their last full measure of devotion” when describing heroes not literally affiliated with the Second World War or the union side of the Civil War, and my audience will know that I am implying that the people I am describing are incredibly heroic. 

But when, as it so often is, the enthymeme is used by cynical speakers to cover their tracks while still otherizing or dehumanizing their political enemies, it can be a frustrating method of evasion. I might try to point out that the speaker implied that members of a certain race are lazy or corrupt, or unworthy of full constitutional protections, but the speaker’s defenders can always say “he never said that–this must be your problem.” In this way, the enthymeme is the tool of alt-right irony-purveyors and the old boys’ network—spaces that feign non-seriousness in order to justify real atrocities. It’s that non-seriousness, that possibility that the arguer doesn’t actually believe what they are halfway implying, that makes dialogue impossible. “Where a statement is explicitly made in a clear way by an arguer,” writes one philosopher in the Journal of Applied Logic, “either as an assertion or part of an argument, normally the commitment rule operates in a clear and precise fashion. But there are all kinds of borderline and dubious cases when it comes to dealing with implicit commitments. There can be all kinds of problems, for example when an argument has not been quoted but paraphrased, or where an implicit assumption may be needed to make the argument valid, but where the proponent may not only have not stated that assumption, but may even disagree with it.”

This, then, is the very definition of arguing in bad faith. The hate monger will deliberately slide in a veiled racial epithet or violent threat (letting the audience do the heavy lifting, because they won’t take the blame either) and then deny making it. “I never said members of the Democratic Party should be killed. I only said that George Washington knew what to do with traitors!”

Modern ad distribution tactics that rely on deeply personal behavioral data to build targeting models—including Facebook political advertising—have allowed bad faith arguments and misinformation to reach just those most likely to accept them. This is in some contrast to the larger political mail runs or income and job-based data supported by append vendors like Accurate Append (client).

There is no easy way to combat this kind of rhetoric head-to-head, although it’s refreshing to see people try. Instead, political candidates, speakers, and advocates who do not want to be affiliated with bigotry or intolerance may need to do the correct-but-awkward and somewhat fantasy-dashing thing: speak in plain and sincere language, repeatedly reaffirm their good intent, and call out enthymemes that are being used to spread racism and other bad arguments, even when the calling out is answered by half-smiling denial. Rhetorician Matthew Jackson writes that “the mercurial quality of whiteness works more insidiously as a morphing sphere of shifting and dynamic power relations with a political commitment to white supremacy . . . one might ask, ‘Then how do we fight it?’ I think part of the answer, as activists have suggested for centuries, is in not allowing arguments for white supremacy to continue unidentified, unanswered, unresolved, and therefore efficacious.” That’s difficult work, but it seems to be the only alternative to letting those hidden premises slide.

Adriel Hampton: ‘The COVID Crisis has Shown us Just How Fast Governments Can Act’

Adriel Crossing the Delaware — by Brett Bandetelli

I recently took the time for a lengthy Q&A with Phil Mandelbaum of herald.news. We spoke on everything from my firm’s work with technology companies and SEO clients like data vendor Accurate Append to how I got started in activism:

the City of Walnut Creek wanted to knock out a big chunk of the central park in the town for a parking garage. I had covered activism enough to know how it gets into the press and I organized a group that helped beat a ballot measure and then supported a compromise plan. The final project sited an amazing library in the park and also preserved the longtime matriarchal home of one of my neighbors.

We also discussed post-Bernie organizing strategy:

The long haul is going to be getting more left and corporate-free candidates at all levels — and that is going to take a few more cycles to really get rolling.

I hope you’ll check out the full interview.

Big Data Makes for Big Sci-Fi Plots

We’re fans of science fiction, and its conversion into science fact, around here. Ray Bradbury has written that “science fiction is the most important literature in the history of the world because it’s the history of ideas, the history of our civilization birthing itself . . . Science fiction is central to everything we’ve ever done, and people who make fun of science fiction writers don’t know what they’re talking about.” It’s in the notion of “civilization birthing itself” that we find plotlines that deal with mass information, and the various ways humans use that information for better or worse. If you look at the concepts and plotlines of many science fiction stories where societies crunch on mountains of data, you end up with stories that last thousands, millions, or billions of years; stories that deal with juggling simultaneous alternate realities; stories of unprecedented intersystem contact, and more. Big data accompanies big ideas, big leaps in intergalactic evolution. 

This post talks about the role of big data in science fiction plots, but big data as the plot is relatively rare; the exceptions listed here are noteworthy. Almost four years ago, James Bridle posted a story called “The End of Big Data” on Vice, with art by Gustavo Torres. “It’s the world after personal data,” reads the synopsis. Most entities are forbidden from having any identifying information. “No servers, no search records, no social, no surveillance.” This is enforced by satellites constantly monitoring the planet to “make sure the data centers are turned off—and stay off.” Bridle’s story is of a different kind of fictional future, and it’s nearly unique because we typically expect science fiction to push limits forward, not back.

But this is sci-fi about policy and law, not just tech. The story features scenes of data cops busting data pirates, because naturally if you make it illegal, you create a black market for it. These pirates move the data, which they collect like rainwater catchment in “receiver horns” and ship it in physical containers: “Sealed and tagged, these containerized data stores could be shipped anonymously to brokers in India and Malaysia, just a few more boxes among millions, packed with transistors instead of scrap metal and plastic toys.” Bridle describes the mundane and sometimes challenging life of a data cop, surveilling the whole planet through satellite monitoring and, sure, the collection of data; the sovereign must be able to be outside of the law to enforce it, of course. 

The black market forms because “Data is power,” the main character observes. Of course, as Kelly Faircloth points out in another post, the science fiction world is already here—dating sites that can predict when a candidate is lying, social networks already knowing who you know, extremely fast and efficient Orwellian surveillance—it’s all already here. And it has both positive and nefarious uses. It can help you sleep better by collecting and processing sleep data from REM patterns to body positions to ambient noises. It can facilitate the early detection of natural disasters, epidemics, and acts of violence. 
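The early-detection claim can be illustrated with a toy version of the underlying idea: flag any day whose count far exceeds a rolling baseline of the preceding days. Real epidemiological surveillance is far more involved; the threshold, window, and data here are invented:

```python
def flag_anomalies(counts, window=3, factor=2.0):
    """Flag indices whose value exceeds `factor` times the average of
    the preceding `window` values (a crude spike detector)."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

# Hypothetical daily case counts: day 4's jump to 21 stands out against
# the rolling baseline of the prior three days.
daily_cases = [4, 5, 4, 5, 21, 6]
spikes = flag_anomalies(daily_cases)
```

The same compare-against-baseline logic generalizes from disease counts to seismic readings or incident reports, which is why “early detection” shows up across so many big-data applications.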

Here’s a spontaneous debate you can make your friends or students have: Resolved: The benefits of big data’s early epidemiological or medical detection of deadly diseases outweigh the detrimental effects of big data on privacy and civic life. Because in science fiction, like in public policy, the question is who is using the data and what their intentions are. 

Paul Bricman blogs about three novels with data-driven plots and in doing so raises an interesting point. We hear a lot about the dystopian or tragic use of big data in sci-fi (and in real life). Bricman’s analysis includes (as it should) The Minority Report by Philip K. Dick, where cops bust people for crimes they haven’t committed (and arguably won’t commit with 100% certainty). Minority Report is a classic example of data dystopianism, but Bricman also includes two works that offer a more optimistic or at least hopeful view: the Foundation series, by Isaac Asimov, where mathematician Hari Seldon develops a “mathematical sociology” to save his civilization from ruin (playing a very long, data-driven game to do so); and The Three-Body Problem by Cixin Liu, which turns on the efforts of a cooperative organization composed of two civilizations—Earth and Trisolaris—to solve the problems created by the latter’s having three suns. Their methods include “genetic algorithms applied to movement equations” and the development of “an in-game computer based on millions of medieval soldiers which emulate transistors and other electrical components” in order to predict the behavior of the Trisolaris system’s three suns. If you look hard enough, you’ll find stories about what good people who are data crunchers and philosophers of data can do.

All of these works were written after the “golden age” of science fiction, which ran from the late 1930s to the late 1940s; most emerged in the 1960s and 1970s. But there’s a special place for Olaf Stapledon (who published just prior to the golden age) in any analysis of metadata-based sci-fi, because Stapledon’s work was all about making the biggest abstractions and generalizations possible from virtually infinite fields of data across the universe. Stapledon wrote Last and First Men, a “future history” novel, in 1930, and followed it with Last Men in London and Star Maker—the latter being Stapledon’s history of the entire universe. Last and First Men encompasses the history of humanity for the next two billion years, detailing the evolution of eighteen iterations of the human species beginning with our own. Stapledon anticipated both genetic engineering and the “hivemind” or “supermind.” The data processing involved in mapping and describing this evolution may not fit neatly into categories like those we use when appending lead data with our client Accurate Append (an email and phone contact data vendor), but Stapledon’s work beautifully, sometimes almost poetically, captures such wide perspectives. It’s impossible to do the work of collecting and processing information on thousands or millions of humans and not feel that some kind of collective consciousness, elusive but real, is at work.

Huxley, Big Data, and the Artistic Mind

Imagine you’re a famous painter, but in an effort to get with the times, you crowd-source your newest painting. You make a nice big show of it, being creative in your soliciting of ideas, bringing people to your studio (and sharing the encounters on social media), maybe even hosting some focus groups to discuss themes, content, and forms. You finish a painting that is the result of a process of dialogical exchange with your audience. 

Most art scholars and critics would call that a pretty creative artistic endeavor, and because you facilitated it, you’d get credit as the artist, even though the intent of the “performative” and creative aspect of the art was to de-center yourself as the artist.

What if you were a musician and you did something similar? Maybe you’d invite audience members to hum into a recording device, and then you’d mix the sounds into what you believe to be an optimal if somewhat discordant combination. Obviously, this would be considered a creative act by you as well. The inclusion of audience suggestions is part of the aesthetic experience. It calls into question artistic individuality, making an important philosophical point that you, the individual artist (see what I did there), get credit for developing and illustrating.

These innovative gestures blur the line between artist and audience, but they are not problematic in the way that big data’s relationship to artistic expression is. In the forthcoming book Beyond the Valley: How Innovators around the World are Overcoming Inequality and Creating the Technologies of Tomorrow, Ramesh Srinivasan explores more concerning technological questions: fashion designers whose creative work is based on algorithms developed by consolidating millions of users’ preferences, for example, or art and music created specifically to appeal to the semiconscious desires of listeners, where the data that goes into making that music is mass data, not individual aesthetic experience or individual expressions of desire.

Srinivasan seems mostly concerned that big data produces “undemocratic” outcomes, but I don’t think he means this in a strictly political sense. I think he means that democracy carries a certain expectation of self-consciousness. Even the experimental collaborative art and music I imagined earlier is self-consciously participatory. The participants know they are helping create something, and are intentionally contributing. This isn’t the case when the list of acceptable preferences distilled and given to artists is based on millions of pieces of data. 

This is not to say that big data can’t or shouldn’t play a role in developing products with aesthetic value, like furniture or clothing. That seems like a legitimate form of product development, and there’s nothing inherently wrong with it. But it would be a mistake to call it “artistic” without a radical redefinition of the word, because the “art” in it is not conscious or intentional in the same sense that an individual artist’s painting, or even an ensemble’s collectively written piece of theater, is.

This depersonalization of aesthetics through big data was anticipated by Aldous Huxley (whom Srinivasan cites in his book) in books like Brave New World. Huxley was concerned about industries, politics, and other endeavors lacking transparency: not just the kind in which stakeholders can see the decisions being made, but the kind in which they can participate in them too. “Whatever passes for transparency today seems one-directional,” Srinivasan writes. “Tech companies know everything about us, but we know almost nothing about them. So how can we be sure that the information they feed us hasn’t been skewed and framed based on funding models and private news industry politics?”

Huxley, who died 56 years ago on the same day as JFK, November 22, 1963, has developed a reputation as a scathing critic of industrial society, but he was more than that. His thoughtfulness about ethics wasn’t abstract: when he became relatively wealthy working as a Hollywood screenwriter in the 1930s and 40s (he’d immigrated to the U.S. and settled in Southern California), Huxley used a great deal of that money to transport Jews and left-wing writers and artists from Europe to the United States, where they would be safe from fascism. Huxley saw the threat of depersonalized and depersonalizing technology as “an ordered universe in a world of planless incoherence.” That’s not far from how big data skeptics describe the data industry. 

In a recent Washington Post piece, Catherine Rampell echoes these concerns. The “vast troves of data on consumer preferences” owned by large firms are largely collected surreptitiously. “There are philosophical questions,” she writes, “about who should get credit for an artistic work if it was conjured not solely through human imagination but rather by reflecting and remixing customer data.” 

If the fashion or informational choices of big data are alienated from the conscious preferences of audiences or consumers, there is at least one theory of art that holds that artists themselves should be consciously removed from the preferences of audiences who otherwise appreciate the art. It’s the “theory of obscurity,” made (somewhat) famous by San Francisco avant-garde band The Residents, who borrowed it from N. Senada, a musician who may not have existed as such. The theory holds that “an artist can only produce pure art when the expectations and influences of the outside world are not taken into consideration.” In other words, a true artist can’t worry about what the audience thinks. N. Senada had a corollary theory, the “theory of phonetic organization,” holding that “the musician should put the sounds first, building the music up from [them] rather than developing the music, then working down to the sounds that make it up.” Big data aggregation could not play any meaningful role in such work. Perhaps listening to avant-garde music is the best way to avoid being assimilated into a giant cybernetic vat of data goo. But I doubt such a solution can be implemented universally since very few people enjoy that music.

Big Data, Cops, and Criminality

In the short story and movie “Minority Report,” police in the future are authorized to arrest, and prosecutors to convict, people for the crimes they will (according to predictive technology) commit in the future. Although the thought of arresting, trying and convicting individuals for what they have not yet done seems far-fetched, the basic structural and technological prerequisites for such a world are already in place. What does this mean?  

Over the past few years, crime prevention through big data has undergone a transition from merely interfacing with data to using it predictively. Until now, law enforcement could check individual samples against DNA and fingerprint databases, arrest records, that kind of thing. But now there’s an emerging norm of using “predictive analytics algorithms to identify broader trends.” And yes, this is a step closer to a “Minority Report” world, where police predict where crime is likely to occur based on what we might call “moving” or trending data rather than snapshots of particular markers and records like fingerprints and DNA. 

Areas where predictive software is used have experienced remarkable results: a “33% reduction in burglaries, 21% reduction in violent crimes and a 12% reduction in property crime.” But there’s even more at stake here. Because such policing is preventive rather than reactive, less crime also means less confrontation between police and suspects, including innocent suspects erroneously targeted in the old model, where cops wait to see suspicious behavior and then act in the midst of uncertainty. There are other implications too: for detective work, for dealing with serial offenders, for assessing the need for future resources. Big data won’t necessarily improve relations between residents and police outright, but smarter policing may translate into less confrontational policing and better public perception of police effectiveness. 

And so a lot of police forces see the biggest challenge now as teaching police how to use the technology the right way. A recent report on big data’s use in policing, published by the Royal United Services Institute for Defence and Security Studies (RUSI), said British forces already have access to huge amounts of data but lack the capability to use it. This is unfortunate because, at least as researchers and developers see it, the tools are not hard to grasp. In the words of Alexander Babuta, who conducted the British research: “The software itself is actually quite simple – using crime type, crime location and date and time – and then based on past crime data it generates a hotspot map identifying areas where crime is most likely to happen.” 
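Babuta’s description, simple as it sounds, can be sketched in a few lines: bin past incidents (type, location, date and time) into a spatial grid and flag the cells with the most activity. This is a hypothetical illustration only; the grid size, threshold, and sample data below are invented, and this is not any vendor’s actual algorithm.

```python
# Minimal hotspot sketch: count past incidents per grid cell and
# flag cells exceeding a threshold. Purely illustrative.
from collections import Counter

def hotspot_cells(incidents, cell_size=0.01, min_count=3):
    """incidents: list of (crime_type, lat, lon, timestamp) tuples.
    Returns {grid_cell: count} for cells with >= min_count incidents."""
    counts = Counter()
    for _crime_type, lat, lon, _ts in incidents:
        # Snap coordinates to a coarse grid cell.
        cell = (round(lat / cell_size), round(lon / cell_size))
        counts[cell] += 1
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Invented sample data: three incidents cluster in one neighborhood.
past = [
    ("burglary", 51.501, -0.141, "2019-03-01T22:00"),
    ("burglary", 51.502, -0.142, "2019-03-08T23:30"),
    ("theft",    51.501, -0.142, "2019-03-15T21:15"),
    ("assault",  51.620, -0.250, "2019-03-02T01:00"),
]
hotspots = hotspot_cells(past)
```

Real systems weight recent incidents more heavily and smooth across neighboring cells, but the core idea is exactly this kind of spatial aggregation of past crime data.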

So we might be led to think that the only challenge left is training police forces to use big data predictively, and that doing so will decrease crime without resorting to aggressive policing or the regressive, socially corrosive “broken windows” policing of Giuliani-era New York, where policymakers attempted to harness individual acts of police intimidation in the service of an overall social perception of a crime-free city. Data accuracy is also critical: in our work with client Accurate Append, we find that demographic, email, and other contact data are missing or incomplete in data files across industries. 

The jury is still out on the use of algorithmic big data as a crime prevention tool. To begin with, predictive policing can be perceived as just as oppressive as reactive policing. Predicting that certain areas are prone to future crime almost certainly means putting up video cameras, possibly with controversial facial recognition technology, in these “risky” areas. And the data-driven construct of the “high-risk area” risks being just as laden with racist or other assumptions as policing itself often is. 

After all, we know that big data is not immune to racism or to other stereotyping of its human keepers. And what if, in an effort to politically manipulate the landscape of city policing, politicians and appointees manipulate or misinterpret the conclusions of long-term trends, or short-term spikes in crime, to continue the over-policing of oppressed communities? This is an emerging concern among civil liberties advocates in the UK and the U.S.

Another concern, expressed by the editorial staff of the British paper The Guardian, is that in addition to predicting trends in particular areas, police are also using this interpretive technology “on individuals in order to predict their likelihood of reoffending” — which gets us even closer to “Minority Report” status. At the very least, “it is easy to see that the use of such software can perpetuate and entrench patterns of unjust discrimination.” Or worse, many fear. And, to make perhaps an obvious but necessary point, “the idea that algorithms could substitute for probation officers or the traditional human intelligence of police officers is absurd and wrong. Of course such human judgments are fallible and sometimes biased. But training an algorithm on the results of previous mistakes merely means they can be made without human intervention in the future . . . Machines can make human misjudgments very much worse.”

Perhaps the most interesting take on the social dangers of big data use in policing comes from Janet Chan, professor of law at the University of New South Wales, in an academic paper for Criminology and Criminal Justice. Chan writes that “data visualization, by removing the visibility of real people or events and aestheticizing the representations in the race for attention, can achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.” In other words, seeing only data instead of the faces of victims and offenders, or the social and neighborhood contexts of property crimes, or the living dynamics of domestic violence cases, risks making law enforcement, and perhaps policymakers, less engaged and empathetic towards the public. Chan cites other scholars’ work to suggest that visual representation and face-to-face encounters, however imperfect, are necessary forms of social engagement.

Constituent Communication Research: A Snapshot from Long Past

Political culture has changed a great deal, and this is not a “get off my lawn” post. In fact, as alienating and uncivil as much current political discourse seems, there’s a level of directness and candidness that earlier eras lacked, giving them a feel of artificiality and stuffy elitism. 

Take, for example, a research article published back in a 1969 issue of the Journal of Politics. Titled “The Missing Links in Legislative Politics: Attentive Constituents,” the article by G. R. Boynton, Samuel C. Patterson, and Ronald D. Hedlund sought to describe a kind of constituent that was a cut above the rest, part of the “thin stratum” between the masses and “the upper layer of the political elite,” and seen as a critical sub-elite maintaining democratic dialogue. Curiously beginning in what they admitted was “a particular intra-elite context,” the scholars observed that both “attentive constituents and legislators differed markedly from the general adult population in terms of occupational status . . .” and that these special constituents were in constant communication with legislators and even recruited people to run for office.

Today, even if we acknowledge that some citizens are more engaged than others, that people benefiting from education and stable material lives can share their privileges by proactively participating in political and civic life, we are rightly hesitant to paint such citizens as part of superior substrata. We know that poor and working class people engage too when they can, that community engagement is often (though admittedly not often enough) facilitated by civic, religious and political interest groups across a wider range of economics and demographics than was supposed fifty years ago. 

We also know that high-level involvement doesn’t automatically correlate with helpfulness or the strengthening of democracy. We know that elite groups often engineer a great deal of spin, and that both privileged and disadvantaged populations are vulnerable to misinformation. Involvement and access are more complicated than Boynton et al.’s worldview reflected. 

Political science and communication scholars carried different assumptions back then — and even began with different questions. Today, much of the research is geared towards identifying bad hierarchies, undesirable ways in which constituent access is blocked or limitations are set on how communication may occur between people and the leaders they elect. This may include how letters and emails are processed, such as in Matthew J. Geras and Michael H. Crespin’s study, published this year, concluding that high-ranking staffers answer socially powerful constituents, while “[l]etters from women and families . . . are more likely to be answered by lower-ranked staffers. These results are important,” the authors conclude, “because they reveal that even something as simple as constituent correspondence enters a type of power hierarchy within the legislative branch where some individuals are advantaged over others.” Mia Costa’s dissertation, published last year, gives an interesting corollary conclusion: Not only are female constituents devalued, but female legislators are held to an unfairly high standard by their own supporters, including supporters who believe more women in elected office would be desirable. “In fact,” Costa argues, “it is individuals that hold the most positive views of women that then penalize them when they do not provide quality responsiveness to constituents” — a fascinating conclusion that invites further study.

Current research also suggests that elected officials have a sore spot when dealing with constituents who engage in what James N. Druckman and Julia Valdes call “private politics,” or what others would call “direct action” or attempts to influence change outside of the legislative process — things like boycotts, strikes, other direct or demonstrative tactics. Druckman and Valdes report finding that “a constituent communication that references private politics vitiates legislative responsiveness . . . reference to private politics decreases the likelihood of constituent engagement among both Republican and Democratic legislators.” The authors think that these findings call for collective, foundational “conversations about how democracies work” since elected officials ought to appreciate, rather than be intimidated or irritated by, extra-electoral constituent action. 

And through all of this data, as the OpenGov Foundation Study suggests, much of Congress still uses very old communication management technology. One researcher says it’s like “entering a time machine.” Beyond not looking at the power hierarchies of gender and class, the 1969 study also didn’t look at the challenges of staffing in a world of scarce resources. “When advocacy groups target thousands of calls or emails at a single member of Congress, it’s these low-level and in some cases unpaid interns and junior staffers they inundate.” Simply put, it is a nightmare to handle these communications without having CRM software built specifically for the government.

The questions today’s constituent communication researchers ask are thus very different from whether some special elite civic group exists to influence political leadership and how educated and well-connected such constituents are. Today’s research strikes at the heart of material and cultural power imbalances. Until those imbalances are corrected, we need scholars and advocates to continue asking tough questions about practical democracy. 

Meeting Constituents’ Real Needs Means Innovating Constituent Tech

I’m thinking about technology, political culture, and constituent accessibility. What often appears to be a problem of political culture (staff blowing off certain constituents, young hotshot staffers not being able to relate to the challenges of elderly constituents, that kind of thing) might actually be a problem of technology (not having the right tools to organize constituent inquiries or comments at point of reception).

It helps to remember that elected officials are often expected to do much more than the normal list of legislative or executive functions. A Wired story has an anecdote about a military veteran in Massachusetts who misplaced an important piece of mail from the Veterans Administration. The letter explained how they could access their benefits, but that information wasn’t packaged well: it appeared at the end of the six-page (!) letter. The veteran contacted the staff of Rep. Seth Moulton, whose Deputy Chief of Staff didn’t just respond by investigating legislative options to address the culture that created such confusion; the staffer actually began by making sure the veteran was able to access their benefits — an administrative or service function that falls well outside the strict boundaries of a legislator’s job.

This was not a constituent complaining about politics, taking a side in a debate, sounding off on some ideological divide. It was a constituent who needed their basic services and benefits. How many contacts might staff receive about accessing services? Who knows? But at a time when the average House member gets 123,000 emails a year (almost triple the number they received on average in 2001), certainly many of these will be about non-legislative/apolitical matters, and this makes it even more clear that institutions have to adapt to new methods of communication in a world where there’s one member of Congress for every 747,184 people.

One of these adaptation needs is an efficient method of separating those who want to weigh in on or engage in deliberation about law and policy from those who may have encountered problems with public services. Non-responsiveness is bad enough in general (and it happens far too often), but it’s especially bad when the inquiry was from someone with a serious unmet need. 
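A crude illustration of that kind of triage might route incoming messages by keyword, so casework requests never sit in the same undifferentiated queue as policy opinions. Everything below — the category names, the keyword list, the sample messages — is invented for illustration; a real constituent-management system would be far more sophisticated.

```python
# Hypothetical sketch: separate service/casework requests from
# policy commentary in a constituent inbox. Keyword list is invented.
import re

SERVICE_KEYWORDS = {"benefits", "passport", "medicare", "claim",
                    "visa", "pension", "va"}

def triage(message: str) -> str:
    """Return 'casework' if the message looks like a service request,
    otherwise 'policy' for deliberative or opinion mail."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return "casework" if words & SERVICE_KEYWORDS else "policy"

queue = [
    "I never received my VA benefits letter. Can your office help?",
    "Please vote no on the upcoming trade bill.",
]
routed = [triage(m) for m in queue]
```

Even a rough first pass like this would let a staffer surface the veteran’s benefits question immediately, rather than discovering it behind hundreds of calls about the controversy of the week.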

If constituents are sometimes ignored in normal circumstances (and is being on a Congressional staff ever normal?), it’s easy to see how large sections of the constituency can slip through the cracks when public issues get heated—as they have been every single day of the current presidency, for example. In a recent New Yorker article—definitely worth a read—on the effectiveness of phone calls, letters, and emails to elected officials, we find an astute observation about how crises jam up staff phone times, which crowds out less politically charged concerns. “In normal times, then—which is to say, in the times we don’t currently live in—calling your members of Congress is not an intrinsically superior way to get them to listen. But what makes a particular type of message effective depends largely on what you are trying to achieve. For mass protests, such as those that have been happening recently, phone calls are a better way of contacting lawmakers, not because they get taken more seriously but because they take up more time—thereby occupying staff, obstructing business as usual, and attracting media attention.” While the point of the article was to assess effectiveness, I am fixated on the “occupying staff” phrase, because I picture staff sitting on phones all day taking calls from energized people, but unable to process a veteran’s urgent question about benefits.

The same article traces the history of political advocates’ and lobbyists’ manipulation of communication technology to influence political outcomes, beginning with a 1928 campaign by an oil and gas company to get people to make telephone calls in opposition to a gas tax. Nearly a century later, “constituent communications account for twenty to thirty per cent of the budget for every congressional office on Capitol Hill.” And the evidence and anecdotes about overworked staff indicate that this is not enough.

Of course, increasing the budget isn’t the answer if you want to remain popular in your district. One potential solution is philanthropic grants, such as those available from the Democracy Fund and its affiliate, Democracy Fund Voice. Intended “to address the disparity between the tools available to Congressional staff and the technological innovations of the digital advocacy industry,” the grants put the tech in the hands of elected officials’ staff and incentivize further innovation and development. They include apps that can process data in addition to enabling constituent communication, giving those offices “a clearer picture of district sentiment in the aggregate.” And for good measure, they let members of Congress demonstrate their commitment to pushing away from Facebook, so that constituents need not fear having their personal data mined on official Congressional pages.

So, in the end, management of technology, political culture, and constituent accessibility hinge on resource questions. Members of Congress can’t be perceived as shirking their duty to help constituents navigate bureaucracies and meet needs, but they’re constantly managing political disputes and crises. We need to find more ways to get appropriate new technology in the hands of their staffers.

Is Antitrust Law Appropriate to Regulate Google and Facebook?

Recently Makan Delrahim, Assistant Attorney General for the Antitrust Division of the U.S. Department of Justice, publicly criticized Google and Amazon for their ruthlessness and size, provocatively citing past government breakups of giants like Standard Oil. If you’re surprised that someone in Trump’s DOJ would poke big business and invoke the progressive antitrust regulation of the past, some explanation is in order. 

The government has always regulated the market, so “free markets” have always been a kind of mythical creature. What governments typically do is regulate “competition,” but that doesn’t just mean keeping firms from becoming too big or stopping them from establishing monopolies through sheer size alone. It also means regulating certain behaviors, if those behaviors limit the choices or distort the autonomy of consumers. This is what’s behind the recent drive to use antitrust regulation to control big infotech. 

For example, digital advertising has both crowded out local and independent news, and intruded on consumer privacy. Both of those effects could be justification for antitrust regulation because that private data can be used in ways that disadvantage competitors who do not mine such data. Digital advertising is also a ripe target for regulation because this year, it’s expected to “exceed TV and print advertising for the first time ever,” and the Trump reelection team “is doubling down on its digital ad strategy,” according to CNN.

Just three companies–Google, Facebook, and Amazon–account for 70 percent of spending on digital ads, so that short list is another antitrust red flag. 

There is increasing public pressure to apply antitrust law to these big data giants. The public doesn’t like the Faustian bargain that has been made—search the internet and connect with others all you want, largely for free, but the companies get to surveil you “across the whole web” and use that data in any way they want. It’s like liberty is being traded for the right to communicate and gather knowledge. And in the meantime, competitors like Yelp have complained that the access power of Google has been used to crowd the smaller players out. Traditional data append and email vendors now represent just a sliver of the consumer data market.

“Outside of Google, Facebook, and a few others, the rest of the market, which includes thousands and thousands of independent news publishers (that depend on digital advertising as their primary source of revenue), will shrink by 11 percent” this year.

Sometimes size prevents competitors from developing. This can be true even if the size doesn’t translate into higher costs for the consumer. In fact, the largest tech companies today allow most of their services to be used for free, so we can’t really rely on price to test the assumptions of antitrust law. The data those companies gain in the process is the real problem: they can take that data (which other companies have no access to, precisely because the giants give their services away for free) and “identify untapped and under-served markets, spot potential competitors and prevent them from developing – the kind of edge that antitrust law is meant to thwart.” Antitrust regulation may also be warranted because those large companies create “natural monopolies.” In the past, some natural monopolies were seen as acceptable as long as they were subject to additional antitrust scrutiny, like “price controls and oversight boards.” One could argue that Facebook, which is very close to being a natural monopoly, ought to be subject to a special government oversight board of its own. 

The European Union hasn’t wasted any time or parsed out any nuances here. Since 2010, the EU has investigated Google for antitrust violations three times and charged Google with violating EU competition law with Google Shopping, AdSense, and the Android system. The Google Shopping and Android charges stuck, and the company has handed over €8 billion to the European body. 

Despite this scrutiny (or maybe because of it), Google is acting boldly, even impetuously. Just a few days ago, the company, fully aware that it’s under antitrust scrutiny, announced that it was buying another company, the data analytics firm Looker, for $2.6 billion. It’s hard not to surmise that the company is testing the government to see who blinks.