Adriel Hampton: ‘The COVID Crisis has Shown us Just How Fast Governments Can Act’

Adriel Crossing the Delaware — by Brett Bandetelli

I recently took the time for a lengthy Q&A with Phil Mandelbaum. We spoke about everything from my firm’s work with technology companies and SEO clients like data vendor Accurate Append to how I got started in activism:

the City of Walnut Creek wanted to knock out a big chunk of the central park in the town for a parking garage. I had covered activism enough to know how it gets into the press, and I organized a group that helped beat a ballot measure and then supported a compromise plan. The final project sited an amazing library in the park and also preserved the longtime matriarchal home of one of my neighbors.

We also discussed post-Bernie organizing strategy:

The long haul is going to be getting more left and corporate-free candidates at all levels — and that is going to take a few more cycles to really get rolling.

I hope you’ll check out the full interview.

Web-Based Cartography, Power and Community-Building

We were interested in the news that the Chinese phone manufacturer Huawei recently signed a deal with TomTom, the Dutch digital mapping company, for an alternative to Google Maps.

TomTom has been an often unsung but never ignored force in map applications, with several self-branded products on iOS and Android devices. Huawei, however, will be building its own mapping application using TomTom maps, a kind of secondary app built on TomTom’s primary products. TomTom is no stranger to this kind of use: in the past the company provided data for Apple Maps, making itself part of a similarly “shambolic” patchwork of mapping apps. Huawei also intends to build a full-on system, a “Map Kit,” using data from the Russian tech company Yandex. The TomTom deal will either serve as a bridge to Map Kit or be integrated into it.

Why can’t Huawei just rely on Google Maps? Well, here’s where it gets interesting. Last year, the Trump administration placed sanctions on Huawei. In May, Trump “issued an executive order barring US companies from using information and communications technology from anyone considered a national security threat,” and Huawei was included in the list of dangerous entities. That complicated relations with Google, even though the application of administration sanctions has been uneven and uncertain. Businesses hate uncertainty, after all. Meanwhile, Huawei is building its own operating system, which it calls HarmonyOS, and is using the TomTom deal to further reduce its reliance on Google.

What’s especially interesting is that trade policy, domestic politics really, and the Trump administration’s confrontational approach to international relations are going to change the way people quite literally look at the world. There are nuanced differences in the way Google and other companies map things. The methods, devices, and features of interactivity vary across systems and platforms. Many of the differences will be subtle, but they will still be differences, and over time their aggregate will grow. However nuanced the effect, politics will determine mapmaking differentials.

Historically, cartography has been a ruling-class sphere. If the first task or problem of cartography is “map agenda-setting,” in which the features and area of the place to be mapped are decided, then historically that task has privileged the kingdoms, countries, states, and localities with the authority and resources to make the maps. Likewise, cartographers eliminate characteristics and areas deemed irrelevant to the purposes of the maps and materials being created. Even the very act of generalization—creating categories of landscapes and ocean depths and similar profiles—relies on forms of both informational and material authority that perpetuate and emerge from great powers. Maps, critical scholars tell us, are “sites of power-knowledge,” and judgments about which maps are best “arise from privileged discourses.”

All of which suggests that what we’re seeing in the escalation of trade disputes is a series of power-shifts that will again exercise influence on how we map the world. But now, instead of the changes being reflected in model globes and poster-sized maps in school rooms and offices, they will be reflected in millions upon millions of digital devices guiding people around in their cars or from their homes. 

So what does the business end of big-data-informed, satellite-sourced, web-based mapmaking get us? What social tendencies does it reproduce and strengthen?

In many ways, web-based mapping has created an interesting piece of sociological data: it gives us more information about our surroundings and communities, but it also entrenches us as solitary beings in our vehicles and domiciles. We can access information about what’s out there, what’s around us. But then we tap our keypads and let the app navigate us through a place so we don’t have to stop and ask for directions. We can order meals, groceries, office supplies, and other goods and services using apps that link from these maps. We never have to “be in public.” It doesn’t necessarily turn us into recluses, but it changes our social dynamics.

But consumer-based or user application-based maps are not the only way we’re seeing technological progress affect cartography. Another area that has emerged stronger because of big data and satellite technology is called collaborative mapping, and collaborative mapping is helping humanity re-define cooperative work in new ways. Collaborative mapping has the potential to bring diverse peoples and communities together rather than isolating us. 

Collaborative mapping uses open-source collaborative software to create and develop (and thus never really finish creating) collective maps. Maps are created collaboratively on a shared “surface.” Problems sometimes occur when people with concurrent access compete over mutually exclusive data, whether by editing simultaneously or by correcting a previous iteration of the map, potentially causing conflicts of interpretation. But just as commons-based encyclopedias like Wikipedia and other common knowledge libraries have (admittedly imperfect) procedures for dealing with these conflicts, collaborative mapping can also develop such procedures. The outcome is still likely to be a more egalitarian knowledge base, and we now know that many hands and many heads not only make light work, but also create better epistemology, and better knowledge.
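
To make the concurrency problem concrete, here is a minimal sketch of how a collaborative mapping tool might detect conflicting edits to the same map feature. The `Edit` structure, the version-check logic, and the “queued for review” fallback are all illustrative assumptions, not any particular platform’s API:

```python
from dataclasses import dataclass

@dataclass
class Edit:
    feature_id: str   # which map feature (a road, a building, etc.)
    version: int      # version of the feature the editor started from
    value: dict       # proposed attributes, e.g. {"name": "Main St"}
    author: str

def apply_edit(store: dict, edit: Edit) -> str:
    """Apply an edit, flagging it for review if it was based on a
    stale version of the feature (i.e., someone edited concurrently)."""
    current_version, _ = store.get(edit.feature_id, (0, {}))
    if edit.version < current_version:
        # Concurrent edit detected: defer to the community review process
        return "conflict: queued for review"
    store[edit.feature_id] = (current_version + 1, edit.value)
    return "applied"

store = {}
a = Edit("way/42", version=0, value={"name": "Main St"}, author="ana")
b = Edit("way/42", version=0, value={"name": "Main Street"}, author="ben")
print(apply_edit(store, a))  # applied
print(apply_edit(store, b))  # conflict: queued for review
```

Real projects layer human review and discussion on top of a mechanism like this, which is exactly the Wikipedia-style procedure the paragraph above describes.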

Based on the work of Christopher Parker and colleagues, as well as many other researchers, we are learning that collaborative mapping can do a number of other socially positive things. The public-oriented consciousness that one might expect from collaborative mapping work is also found in many of its practical applications. For example, the potential of collaborative mapping extends to creating ease-of-access maps and guides for people with limited mobility. A crowd-sourced “mashup” of mapping data can be created to provide that information on an accessible and free platform. Researchers say that it’s a challenge to collect that “subjective” data in the first place, but they are coming up with many potential solutions, from pre-populating local regions and then expanding coverage from there, to creating mashups without the crowd-sourced data and then creating ways for users to easily add their own data to the collective data pool.
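
The “mashup” idea above can be sketched in a few lines: start from a pre-populated base map of places, then layer crowd-sourced accessibility reports on top by majority vote. All of the place names, identifiers, and data here are invented for illustration:

```python
# Hypothetical base map, pre-populated with places whose accessibility
# is initially unknown (None).
base_places = {
    "lib-1": {"name": "Central Library", "accessible": None},
    "cafe-7": {"name": "Corner Cafe", "accessible": None},
}

# Hypothetical crowd-sourced reports contributed by users.
crowd_reports = [
    {"place": "lib-1", "accessible": True},
    {"place": "lib-1", "accessible": True},
    {"place": "cafe-7", "accessible": False},
]

def merge_reports(places, reports, min_reports=1):
    """Fill in accessibility by majority vote of crowd reports,
    leaving places with no reports marked unknown."""
    votes = {}
    for r in reports:
        votes.setdefault(r["place"], []).append(r["accessible"])
    for pid, vs in votes.items():
        if pid in places and len(vs) >= min_reports:
            places[pid]["accessible"] = vs.count(True) > len(vs) / 2
    return places

merged = merge_reports(base_places, crowd_reports)
print(merged["lib-1"]["accessible"])   # True
print(merged["cafe-7"]["accessible"])  # False
```

Raising `min_reports` is one simple way to handle the data-quality worry the researchers raise: the map stays honest about places where the crowd hasn’t weighed in yet.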

For people who study the history of mapping and cartography, the idea of egalitarian, collectively-created maps is pretty revolutionary. Immediately following the end of World War One, there emerged a deep dissatisfaction with territorial thinking. Progressive-minded activists and scholars began searching for alternative ways of “viewing the world” both in the paradigmatic sense (worldview) and the cartographic sense (criticizing the construction of borders and the representations of the world that show up on maps; think of the conversation about why the “north” is “on top” and that kind of thing). It’s worth considering whether activists can forge links between cooperative map-making and cooperative international and other political relationships.

The building of that political power and those relationships will also be continually shaped by how we utilize data, from geo-targeted ads on social media that help turn out voters, to working with partners like our client Accurate Append, a phone, email, and data vendor, to get the contact info campaigns need to text and call voters. Data and tech are shaping every aspect of our future, from how we navigate our world to how we change it.

Big Data Makes for Big Sci-Fi Plots

We’re fans of science fiction, and its conversion into science fact, around here. Ray Bradbury wrote that “science fiction is the most important literature in the history of the world because it’s the history of ideas, the history of our civilization birthing itself . . . Science fiction is central to everything we’ve ever done, and people who make fun of science fiction writers don’t know what they’re talking about.” It’s in the notion of “civilization birthing itself” that we find plotlines that deal with mass information, and the various ways humans use that information for better or worse. If you look at the concepts and plotlines of many science fiction stories where societies crunch mountains of data, you end up with stories that last thousands, millions, or billions of years; stories that deal with juggling simultaneous alternate realities; stories of unprecedented intersystem contact, and more. Big data accompanies big ideas, big leaps in intergalactic evolution.

This post talks about the role of big data in science fiction plots, but big data as the plot is relatively rare, and the exceptions listed here are noteworthy. Almost four years ago, James Bridle posted a story called “The End of Big Data” on Vice, with art by Gustavo Torres. “It’s the world after personal data,” reads the synopsis. Most entities are forbidden from having any identifying information. “No servers, no search records, no social, no surveillance.” This is enforced by satellites constantly monitoring the planet to “make sure the data centers are turned off—and stay off.” Bridle’s story imagines a different kind of fictional future, and it stands out because we typically expect science fiction to push limits forward, not back.

But this is sci-fi about policy and law, not just tech. The story features scenes of data cops busting data pirates, because naturally if you make it illegal, you create a black market for it. These pirates move the data, which they collect like rainwater catchment in “receiver horns” and ship it in physical containers: “Sealed and tagged, these containerized data stores could be shipped anonymously to brokers in India and Malaysia, just a few more boxes among millions, packed with transistors instead of scrap metal and plastic toys.” Bridle describes the mundane and sometimes challenging life of a data cop, surveilling the whole planet through satellite monitoring and, sure, the collection of data; the sovereign must be able to be outside of the law to enforce it, of course. 

The black market forms because “Data is power,” the main character observes. Of course, as Kelly Faircloth points out in another post, the science fiction world is already here—dating sites that can predict when a candidate is lying, social networks already knowing who you know, extremely fast and efficient Orwellian surveillance—it’s all already here. And it has both positive and nefarious uses. It can help you sleep better by collecting and processing sleep data from REM patterns to body positions to ambient noises. It can facilitate the early detection of natural disasters, epidemics, and acts of violence. 

Here’s a spontaneous debate you can make your friends or students have: Resolved: The benefits of big data’s early epidemiological or medical detection of deadly diseases outweigh the detrimental effects of big data on privacy and civic life. Because in science fiction, like in public policy, the question is who is using the data and what their intentions are. 

Paul Bricman blogs about three novels with data-driven plots and in doing so raises an interesting point. We hear a lot about the dystopian or tragic use of big data in sci-fi (and in real life). Bricman’s analysis includes (as it should) The Minority Report by Philip K. Dick, where cops bust people for crimes they haven’t committed (and arguably won’t commit with 100% certainty). Minority Report is a classic example of data dystopianism, but Bricman also includes two works that offer a more optimistic or at least hopeful view: the Foundation series, by Isaac Asimov, where mathematician Hari Seldon develops a “mathematical sociology” to save his civilization from ruin (playing a very long, data-driven game to do so); and The Three-Body Problem by Cixin Liu, which turns on the efforts by a cooperative organization composed of two civilizations—Earth and Trisolaris—to solve the problems created by the latter’s having three suns. Their methods include “genetic algorithms applied to movement equations” and the development of “an in-game computer based on millions of medieval soldiers which emulate transistors and other electrical components” in order to predict the behavior of the Trisolaris system’s three suns. If you look hard enough, you’ll find stories about what good people who are data crunchers and philosophers of data can do.

All of these works were written after the “golden age” of science fiction, which ran from the late 1930s to the late 1940s; most emerged in the 1960s and 1970s. But there’s a special place for Olaf Stapledon (who published just prior to the golden age) in any analysis of metadata-based sci-fi, because Stapledon’s work was all about making the biggest abstractions and generalizations possible from virtually infinite fields of data across the universe. Stapledon wrote Last and First Men, a “future history” novel, in 1930, and followed it with Last Men in London and Star Maker—the latter being Stapledon’s history of the entire universe. Last and First Men encompasses the history of humanity for the next two billion years, detailing the evolution of eighteen iterations of the human species beginning with our own. Stapledon anticipated both genetic engineering and the “hivemind” or “supermind.” The data processing involved in mapping and describing this evolution may not fit neatly into any categories like those we use when appending lead data with our client Accurate Append (an email and phone contact data vendor), but Stapledon’s work beautifully, sometimes almost poetically, captures such wide perspectives. It’s impossible to do the work of collecting and processing information on thousands or millions of humans and not feel that some kind of collective consciousness, elusive but real, is at work.

Huxley, Big Data, and the Artistic Mind

Imagine you’re a famous painter, but in an effort to get with the times, you crowd-source your newest painting. You make a nice big show of it, being creative in your soliciting of ideas, bringing people to your studio (and sharing the encounters on social media), maybe even hosting some focus groups to discuss themes, content, and forms. You finish a painting that is the result of a process of dialogical exchange with your audience. 

Most thinking art scholars and critics would call that a pretty creative artistic endeavor, and because you facilitated it, you’d get credit as the artist, even though the intent of the “performative” and creative aspect of the art was to de-center yourself as the artist. 

What if you were a musician and you did something similar? Maybe you’d invite audience members to hum into a recording device, and then you’d mix the sounds into what you believe to be an optimal, if somewhat discordant, combination. Obviously, this would be considered a creative act by you as well. The inclusion of audience suggestions is part of the aesthetic experience. It calls into question artistic individuality, making an important philosophical point that you, the individual artist (see what I did there), get credit for developing and illustrating.

These innovative gestures that blur the line between artist and audience are not problematic in the same way as the relationship between big data and artistic expression, about which real concerns have been raised. In the forthcoming book Beyond the Valley: How Innovators around the World are Overcoming Inequality and Creating the Technologies of Tomorrow, Ramesh Srinivasan explores more concerning technological questions: fashion designers whose creative work is based on algorithms developed by consolidating millions of users’ preferences, for example, or art and music crafted specifically to appeal to the semiconscious desires of listeners, where the data that goes into making the music is mass data rather than individual aesthetic experiences or individual expressions of desire.

Srinivasan seems mostly concerned that big data produces “undemocratic” outcomes, but I don’t think he means this in a strictly political sense. I think he means that democracy carries a certain expectation of self-consciousness. Even the experimental collaborative art and music I imagined earlier is self-consciously participatory. The participants know they are helping create something, and are intentionally contributing. This isn’t the case when the list of acceptable preferences distilled and given to artists is based on millions of pieces of data. 

This is not to say that big data can’t or shouldn’t play a role in developing products with an aesthetic value like furniture or clothing. This seems like a legitimate form of product development and there’s nothing inherently wrong with it. But it would be a mistake to call it “artistic” without a radical redefinition of the word—because the “art” in it is not conscious or intentional in the same sense that an individual artist’s painting or even an ensemble collectively-written piece of theater is. 

This depersonalization of aesthetics through big data was anticipated by Aldous Huxley (whom Srinivasan cites in his book) in books like Brave New World. Huxley was concerned about industries, politics, and other endeavors lacking transparency: stakeholders should not only be able to see the decisions being made but participate in them too. “Whatever passes for transparency today seems one-directional,” Srinivasan writes. “Tech companies know everything about us, but we know almost nothing about them. So how can we be sure that the information they feed us hasn’t been skewed and framed based on funding models and private news industry politics?”

Huxley, who died 56 years ago on the same day as JFK, November 22, 1963, has developed a reputation as a scathing critic of industrial society, but he was more than that. His thoughtfulness about ethics wasn’t abstract: when he became relatively wealthy working as a Hollywood screenwriter in the 1930s and 40s (he’d immigrated to the U.S. and settled in Southern California), Huxley used a great deal of that money to transport Jews and left-wing writers and artists from Europe to the United States, where they would be safe from fascism. Huxley saw the threat of depersonalized and depersonalizing technology as “an ordered universe in a world of planless incoherence.” That’s not far from how big data skeptics describe the data industry. 

In a recent Washington Post piece, Catherine Rampell echoes these concerns. The “vast troves of data on consumer preferences” owned by large firms are largely collected surreptitiously. “There are philosophical questions,” she writes, “about who should get credit for an artistic work if it was conjured not solely through human imagination but rather by reflecting and remixing customer data.” 

If the fashion or informational choices of big data are alienated from the conscious preferences of audiences or consumers, there is at least one theory of art that holds that artists themselves should be consciously removed from the preferences of audiences who otherwise appreciate the art. It’s the “theory of obscurity,” made (somewhat) famous by San Francisco avant-garde band The Residents, who borrowed it from N. Senada, a musician who may not have existed as such. The theory holds that “an artist can only produce pure art when the expectations and influences of the outside world are not taken into consideration.” In other words, a true artist can’t worry about what the audience thinks. N. Senada had a corollary theory, the “theory of phonetic organization,” holding that “the musician should put the sounds first, building the music up from [them] rather than developing the music, then working down to the sounds that make it up.” Big data aggregation could not play any meaningful role in such work. Perhaps listening to avant-garde music is the best way to avoid being assimilated into a giant cybernetic vat of data goo. But I doubt such a solution can be implemented universally since very few people enjoy that music.

Big Data, Cops, and Criminality

In Philip K. Dick’s short story “The Minority Report” and its film adaptation, police in the future are authorized to arrest, and prosecutors to convict, people for the crimes they will (according to predictive technology) commit in the future. Although the thought of arresting, trying, and convicting individuals for what they have not yet done seems far-fetched, the basic structural and technological prerequisites for such a world are already in place. What does this mean?

Over the past few years, crime prevention through big data has undergone a transition from merely interfacing with data to using it predictively. Up until now, law enforcement could check individual samples against DNA and fingerprint databases, arrest records, that kind of thing. But now there’s an emerging norm of using “predictive analytics algorithms to identify broader trends.” And yes, this is a step closer to a “Minority Report” world, where police predict where crime is likely to occur based on what we might call “moving” or trending data rather than snapshots of particular markers and records like fingerprints and DNA.

Areas where predictive software is used have experienced remarkable results: a “33% reduction in burglaries, 21% reduction in violent crimes and a 12% reduction in property crime.” But there’s even more at stake here. Because such policing is preventative rather than reactive, less crime also means less confrontation between police and suspects, including innocent suspects erroneously targeted by police in the old world where cops wait to see suspicious behavior and then act on it in the midst of uncertainty. Of course, there are other implications: for detective work, for work dealing with serial offenders, for assessing the need for future resources. Big data won’t necessarily improve relations between residents and police outright, but smarter policing may translate into less confrontational policing and an increase in public perception of police effectiveness.

And so a lot of police forces see the biggest challenge now as teaching police how to use the technology the right way. A recent British report on big data’s use in policing, published by the Royal United Services Institute for Defence and Security Studies (RUSI), said British forces already have access to huge amounts of data but lack the capability to use it. This is unfortunate because, at least as researchers and developers see it, in the words of Alexander Babuta, who did the British research, “The software itself is actually quite simple – using crime type, crime location and date and time – and then based on past crime data it generates a hotspot map identifying areas where crime is most likely to happen.”
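
A toy illustration of the kind of hotspot mapping Babuta describes: bin past incidents (type, location, time) into a coarse grid and rank cells by incident count. Real predictive-policing software is far more sophisticated; the data, grid size, and function names here are invented for illustration:

```python
from collections import Counter

# Invented sample of past incidents: (crime_type, latitude, longitude, hour)
incidents = [
    ("burglary", 51.501, -0.141, 2),
    ("burglary", 51.502, -0.142, 3),
    ("theft",    51.515, -0.090, 14),
    ("burglary", 51.503, -0.140, 1),
]

def hotspot_cells(records):
    """Count incidents per grid cell (~1 km squares via 2-decimal
    rounding); higher counts suggest likelier future hotspots."""
    counts = Counter()
    for _, lat, lon, _ in records:
        cell = (round(lat, 2), round(lon, 2))
        counts[cell] += 1
    return counts.most_common()

ranking = hotspot_cells(incidents)
print(ranking[0])  # the busiest grid cell with its incident count
```

Even in this crude form, the critiques that follow apply: the output simply mirrors whatever is in the historical data, biases and all.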

So we might be led to think that the only challenge left is training police forces to use big data predictively, and that doing so will decrease crime without resorting to aggressive policing or the regressive and socially negative “broken windows” policing of Giuliani-era New York, where policymakers attempted to harness individual acts of police intimidation in the service of an overall social perception of a crime-free city. Data accuracy is also critical: in our work with client Accurate Append, we find that demographic, email, and other contact data are missing or incomplete in data files across industries.

The jury is not unanimous on the use of algorithmic big data as a crime prevention tool. To begin with, predictive policing can be perceived as just as oppressive as reactive policing. Predicting that certain areas are prone to future crime almost certainly means putting up video cameras, possibly with controversial facial recognition technology, in these “risky” areas. But the construct of the “high-risk area,” produced through data interpretation, risks being just as laden with racist or other assumptions as policing itself can often be.

After all, we know that big data is not immune to the racism and other stereotyping of its human keepers. And what if, in an effort to politically manipulate the landscape of city policing, politicians and appointees manipulate or misinterpret the conclusions of long-term trends, or short-term spikes in crime, to continue the over-policing of oppressed communities? This is an emerging concern among civil liberties advocates in the UK and the U.S.

Another concern, expressed by the editorial staff at the British paper The Guardian, is that in addition to predicting trends in particular areas, police are also using this interpretive technology “on individuals in order to predict their likelihood of reoffending” — which gets us even closer to “Minority Report” status. At the very least, “it is easy to see that the use of such software can perpetuate and entrench patterns of unjust discrimination.” Or worse, many fear. And, to make perhaps an obvious but necessary point, “the idea that algorithms could substitute for probation officers or the traditional human intelligence of police officers is absurd and wrong. Of course such human judgments are fallible and sometimes biased. But training an algorithm on the results of previous mistakes merely means they can be made without human intervention in the future . . . Machines can make human misjudgments very much worse.”

Perhaps the most interesting take on the social dangers of big data use in policing comes from Janet Chan, professor of law at the University of New South Wales, in an academic paper for Criminology and Criminal Justice. Chan writes that “data visualization, by removing the visibility of real people or events and aestheticizing the representations in the race for attention, can achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.” In other words, seeing only data instead of the faces of victims and offenders, or the social and neighborhood contexts of property crimes, or the living dynamics of domestic violence cases, risks making law enforcement, and perhaps policymakers, less engaged with and empathetic towards the public. Chan cites other scholars’ work to suggest that visual representation and face-to-face encounters, however imperfect, are necessary forms of social engagement.

Constituent Communication Research: A Snapshot from Long Past

Political culture has changed a great deal, and this is not a “get off my lawn” post. In fact, as alienating and uncivil as much current political discourse seems, there’s a level of directness and candidness that earlier eras lacked, giving them a feel of artificiality and stuffy elitism. 

Take, for example, a research article published back in a 1969 issue of the Journal of Politics. Titled “The Missing Links in Legislative Politics: Attentive Constituents,” the article by G. R. Boynton, Samuel C. Patterson, and Ronald D. Hedlund sought to describe a kind of constituent that was a cut above the rest, part of the “thin stratum” between the masses and “the upper layer of the political elite,” and seen as a critical sub-elite maintaining democratic dialogue. Curiously beginning in what they admitted was “a particular intra-elite context,” the scholars observed that both “attentive constituents and legislators differed markedly from the general adult population in terms of occupational status . . .” and that these special constituents were in constant communication with legislators and even recruited people to run for office.

Today, even if we acknowledge that some citizens are more engaged than others, that people benefiting from education and stable material lives can share their privileges by proactively participating in political and civic life, we are rightly hesitant to paint such citizens as part of superior substrata. We know that poor and working class people engage too when they can, that community engagement is often (though admittedly not often enough) facilitated by civic, religious and political interest groups across a wider range of economics and demographics than was supposed fifty years ago. 

We also know that high-level involvement doesn’t automatically correlate with helpfulness or the strengthening of democracy. We know that elite groups often engineer a great deal of spin, and that both privileged and disadvantaged populations are vulnerable to misinformation. Involvement and access are more complicated than Boynton et al.’s worldview reflected.

Political science and communication scholars carried different assumptions back then — and even began with different questions. Today, much of the research is geared towards identifying bad hierarchies, undesirable ways in which constituent access is blocked or limitations are set on how communication may occur between people and the leaders they elect. This may include how letters and emails are processed, such as in Matthew J. Geras and Michael H. Crespin’s study, published this year, concluding that high-ranking staffers answer socially powerful constituents, while “[l]etters from women and families . . . are more likely to be answered by lower-ranked staffers. These results are important,” the authors conclude, “because they reveal that even something as simple as constituent correspondence enters a type of power hierarchy within the legislative branch where some individuals are advantaged over others.” Mia Costa’s dissertation, published last year, gives an interesting corollary conclusion: Not only are female constituents devalued, but female legislators are held to an unfairly high standard by their own supporters, including supporters who believe more women in elected office would be desirable. “In fact,” Costa argues, “it is individuals that hold the most positive views of women that then penalize them when they do not provide quality responsiveness to constituents” — a fascinating conclusion that invites further study.

Current research also suggests that elected officials have a sore spot when dealing with constituents who engage in what James N. Druckman and Julia Valdes call “private politics,” or what others would call “direct action” or attempts to influence change outside of the legislative process — things like boycotts, strikes, other direct or demonstrative tactics. Druckman and Valdes report finding that “a constituent communication that references private politics vitiates legislative responsiveness . . . reference to private politics decreases the likelihood of constituent engagement among both Republican and Democratic legislators.” The authors think that these findings call for collective, foundational “conversations about how democracies work” since elected officials ought to appreciate, rather than be intimidated or irritated by, extra-electoral constituent action. 

And despite all of this data, much of Congress still uses very old communication management technology, as the OpenGov Foundation study suggests. One researcher says it’s like “entering a time machine.” Beyond ignoring the power hierarchies of gender and class, the 1969 study also didn’t look at the challenges of staffing in a world of scarce resources. “When advocacy groups target thousands of calls or emails at a single member of Congress, it’s these low-level and in some cases unpaid interns and junior staffers they inundate.” Simply put, it is a nightmare to handle these communications without CRM software built specifically for government.

The questions today’s constituent communication researchers ask are thus very different from the old ones: not whether some elite civic group exists to influence political leadership, or how educated and well-connected such constituents are, but how material and cultural power imbalances shape who gets heard. Until those imbalances are corrected, we need scholars and advocates to continue asking tough questions about practical democracy.

Meeting Constituents’ Real Needs Means Innovating Constituent Tech

I’m thinking about technology, political culture, and constituent accessibility. What often appears to be a problem of political culture (staff blowing off certain constituents, young hotshot staffers not being able to relate to the challenges of elderly constituents, that kind of thing) might actually be a problem of technology (not having the right tools to organize constituent inquiries or comments at point of reception).

It helps to remember that elected officials are often expected to do much more than the normal list of legislative or executive functions. A Wired story tells of a military veteran in Massachusetts who misplaced an important piece of mail from the Department of Veterans Affairs. The letter explained how the veteran could access their benefits, but that information wasn’t packaged well: it appeared at the end of the six-page (!) letter. The veteran contacted the staff of Rep. Seth Moulton, whose Deputy Chief of Staff didn’t just respond by investigating legislative options to address the culture that created such confusion; the staffer began by making sure the veteran was able to access their benefits, an administrative or service function that falls well outside the strict boundaries of a legislator’s job.

This was not a constituent complaining about politics, taking a side in a debate, or sounding off on some ideological divide. It was a constituent who needed their basic services and benefits. How many contacts might staff receive about accessing services? Who knows? But at a time when the average House member gets 123,000 emails a year (almost triple the average in 2001), many of these will certainly be about non-legislative, apolitical matters, and this makes it even clearer that institutions have to adapt to new methods of communication in a world where there’s one member of Congress for every 747,184 people.

One such adaptation is an efficient method of separating those who want to weigh in on or deliberate about law and policy from those who have encountered problems with public services. Non-responsiveness is bad enough in general (and it happens far too often), but it’s especially bad when the inquiry came from someone with a serious unmet need.
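As a toy illustration of that separation (not any office’s actual system), a first pass could be as simple as keyword routing. The keyword lists, the `Message` type, and the queue names below are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical keyword lists; a real office would use far richer
# classification, plus a human review step for anything ambiguous.
SERVICE_KEYWORDS = {"benefits", "claim", "passport", "medicare",
                    "social security", "application", "lost my"}
POLICY_KEYWORDS = {"vote", "bill", "oppose", "support", "legislation"}

@dataclass
class Message:
    sender: str
    body: str

def triage(msg: Message) -> str:
    """Route a constituent message to a casework or policy queue."""
    text = msg.body.lower()
    if any(k in text for k in SERVICE_KEYWORDS):
        return "casework"   # possible unmet service need: prioritize
    if any(k in text for k in POLICY_KEYWORDS):
        return "policy"     # opinion on law or policy
    return "review"         # unclear: a staffer reads it first

print(triage(Message("vet@example.com", "I lost my benefits letter from the VA")))
# casework
```

The point is not the crude keyword matching but the routing itself: a service request lands in a prioritized casework queue instead of the same inbox as thousands of policy opinions.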

If constituents are sometimes ignored in normal circumstances (and is being on a Congressional staff ever normal?), it’s easy to see how large sections of the constituency can slip through the cracks when public issues get heated—as they have been every single day of the current presidency, for example. In a recent New Yorker article—definitely worth a read—on the effectiveness of phone calls, letters, and emails to elected officials, we find an astute observation about how crises jam up staff phone times, which crowds out less politically charged concerns. “In normal times, then—which is to say, in the times we don’t currently live in—calling your members of Congress is not an intrinsically superior way to get them to listen. But what makes a particular type of message effective depends largely on what you are trying to achieve. For mass protests, such as those that have been happening recently, phone calls are a better way of contacting lawmakers, not because they get taken more seriously but because they take up more time—thereby occupying staff, obstructing business as usual, and attracting media attention.” While the point of the article was to assess effectiveness, I am fixated on the “occupying staff” phrase, because I picture staff sitting on phones all day taking calls from energized people, but unable to process a veteran’s urgent question about benefits.

The same article traces the history of political advocates’ and lobbyists’ manipulation of communication technology to influence political outcomes, beginning with a 1928 campaign by an oil and gas company to get people to make telephone calls in opposition to a gas tax. Nearly a century later, “constituent communications account for twenty to thirty per cent of the budget for every congressional office on Capitol Hill.” And the evidence and anecdotes about overworked staff indicate that this is still not enough.

Of course, increasing the budget isn’t the answer if you want to remain popular in your district. One potential solution is philanthropic grants, such as those available from the Democracy Fund and its affiliate, Democracy Fund Voice. Intended “to address the disparity between the tools available to Congressional staff and the technological innovations of the digital advocacy industry,” the grants put the tech in the hands of elected officials’ staff and incentivize further innovation and development. They include apps that can process data in addition to enabling constituent communication, giving those offices “a clearer picture of district sentiment in the aggregate.” And for good measure, they let members of Congress demonstrate a commitment to moving away from Facebook, so that constituents need not fear having their personal data mined on official Congressional pages.
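To illustrate what “district sentiment in the aggregate” might mean in practice, here is a minimal sketch. The issue tags and stances are invented, and the tagging itself is assumed to happen upstream, by staff or by whatever tool an office adopts:

```python
from collections import Counter

# Hypothetical messages already tagged as (issue, stance) pairs.
tagged = [
    ("healthcare", "support"),
    ("healthcare", "support"),
    ("healthcare", "oppose"),
    ("gas_tax", "oppose"),
]

# Tally each (issue, stance) combination to see the district in aggregate.
sentiment = Counter(tagged)
for (issue, stance), count in sorted(sentiment.items()):
    print(f"{issue}: {stance} = {count}")
# gas_tax: oppose = 1
# healthcare: oppose = 1
# healthcare: support = 2
```

Even this trivial tally shows the shift in perspective: from answering one message at a time to seeing where the district’s attention and opinion actually sit.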

So, in the end, the management of technology, political culture, and constituent accessibility hinges on resource questions. Members of Congress can’t be perceived as shirking their duty to help constituents navigate bureaucracies and meet needs, but they’re constantly managing political disputes and crises. We need to find more ways to get appropriate new technology in the hands of their staffers.

Is Antitrust Law Appropriate to Regulate Google and Facebook?

Recently Makan Delrahim, Assistant Attorney General for the Antitrust Division of the U.S. Department of Justice, publicly criticized Google and Amazon for their ruthlessness and size, provocatively citing past government breakups of big private monopolies like Standard Oil. If you’re surprised that someone in Trump’s DOJ would poke big business and invoke the progressive antitrust regulation of the past, some explanation is in order.

The government has always regulated the market, so “free markets” have always been a kind of mythical creature. What governments typically do is regulate “competition,” but that doesn’t just mean keeping firms from becoming too big or stopping them from establishing monopolies through sheer size alone. It also means regulating certain behaviors, if those behaviors limit the choices or distort the autonomy of consumers. This is what’s behind the recent drive to use antitrust regulation to control big infotech. 

For example, digital advertising has both crowded out local and independent news, and intruded on consumer privacy. Both of those effects could be justification for antitrust regulation because that private data can be used in ways that disadvantage competitors who do not mine such data. Digital advertising is also a ripe target for regulation because this year, it’s expected to “exceed TV and print advertising for the first time ever,” and the Trump reelection team “is doubling down on its digital ad strategy,” according to CNN.

Just three companies, Google, Facebook, and Amazon, account for 70 percent of spending on digital ads, so that short list is another antitrust red flag.

There is increasing public pressure to apply antitrust law to these big data giants. The public doesn’t like the Faustian bargain that has been made—search the internet and connect with others all you want, largely for free, but the companies get to surveil you “across the whole web” and use that data in any way they want. It’s like liberty is being traded for the right to communicate and gather knowledge. And in the meantime, competitors like Yelp have complained that the access power of Google has been used to crowd the smaller players out. Traditional data append and email vendors now represent just a sliver of the consumer data market.

“Outside of Google, Facebook, and a few others, the rest of the market, which includes thousands and thousands of independent news publishers (that depend on digital advertising as their primary source of revenue), will shrink by 11 percent” this year.

Sometimes size prevents competitors from developing. This can be true even if the size doesn’t translate into higher costs for the consumer. In fact, the largest tech companies today allow most of their services to be used for free, so we can’t really rely on price to test the assumptions of antitrust law. The real problem is the data those companies gain in the process: data other companies have no access to, precisely because the giants attract so many users by giving their services away for free. That data lets them “identify untapped and under-served markets, spot potential competitors and prevent them from developing – the kind of edge that antitrust law is meant to thwart.” Antitrust regulation may also be warranted because those large companies create “natural monopolies.” In the past, some natural monopolies were seen as acceptable as long as they were subject to additional antitrust scrutiny, like “price controls and oversight boards.” One could argue that Facebook, which is very close to being a natural monopoly, ought to be subject to a special government oversight board of its own.

The European Union hasn’t wasted any time or parsed out any nuances here. Since 2010, the EU has investigated Google for antitrust violations three times and charged Google with violating EU competition law with Google Shopping, AdSense, and the Android system. The Google Shopping and Android charges stuck, and the company has handed over €8 billion to the European body. 

Despite this scrutiny (or maybe because of it), Google is acting impetuously and boldly. Just a few days ago the company, fully aware that it’s under antitrust scrutiny, announced that it was buying the data analytics firm Looker for $2.6 billion. It’s hard not to surmise that Google is testing the government to see who blinks.

Big Data, Business, and Politics

Some candidates in the 2020 Democratic presidential primary are marketing themselves like products, running solely on familiarity, presumably like the familiarity of your favorite neighborhood restaurant or a carmaker whose models you keep buying over the years. Others are taking a radically different approach: they are building movements while they campaign, because they see themselves as activist leaders, not products. These campaigns are “thoughtfully and deliberately designed to create an unprecedented grassroots movement driven by hundreds of thousands of volunteers.”

We hear and read these comparisons all the time: shallow parallels between running businesses and running political campaigns, particularly where the use of data, social media, and other infotech is concerned.

The comparison isn’t entirely unfounded. After all, big political firms do make many of the same mistakes that businesses make where tech and data are concerned. For example, while I might not put it in the stark terms that he does, there’s something to Igor Lys’s comments in a recent post that many of the promises of big data in politics, just as in business, are false, resting on the assumption that “big data allows reliable prediction.” I think it’s futile to “predict” outcomes; it’s better to use data in combination with other forms of information gathering. Lys agrees, writing that “the real use these people make of massive data collection and analysis concerns less the prediction and the manipulation of the future result, than the better analysis of the already existing ones.”

Political campaigns might also parallel business practices in efforts to capture email addresses of visitors that don’t end up donating money (or in business parlance, buy the product). Such sites might use pop-up windows to offer free newsletter subscriptions in exchange for an email address and use email append and verification services to build powerful databases of supporter contacts and preferences.

But while both political campaigns and businesses analyze data and collect contact info, I am cautious about drawing too many parallels between an endeavor whose primary goal is to make profits and one whose primary goal is to engage people in voting for, financially supporting, and working for political candidates or issues.

Here’s why political engagement, even through data use, is different from profit-seeking: Profits are extracted from workers’ labor and paid to owners or shareholders. These are very exclusive dividends. But political support grows as relationships among people. Political support, and political solidarity, are not finite and can’t be exclusively owned or claimed. A group of volunteers for a campaign may feel that political energy growing inside of them and when they share it with others, that energy grows rather than thins out. I don’t have less of it when I give it to you.  

This is why it’s important to use data in combination with direct political participation, such as social media engagement, canvassing, and campaign communications. For example, you can use your data to plan solid social media messaging strategies, to nuance the language on a candidate or issue website, or to pick the appropriate language for your campaign emails. Those messages invite different kinds of interaction, from campaign volunteers reaching out to vocal supporters of a campaign on social media, to a web form offering many different options for a supporter’s participation. Businesses sell products and services, and the entire process is rather binary: either you buy the thing or you don’t, and if you do, profit flows to owners and shareholders. There’s not much else a loyal customer can do beyond buying more products and referring others to do the same.

None of this is to say that pouring massive amounts of money into big data operations has no effect when it doesn’t help build an organic movement. Michael Bloomberg’s plan to use big data to help defeat Trump is a notable example of an effort that will probably have an effect even if it doesn’t empower people politically beyond voting. But an approach that includes participation and lots of interaction produces organizations, like Bernie Sanders’ campaign, that have strength beyond their numbers.

And, although I recently wrote that small donor acquisition efforts rely on creating a sense of urgency (which some may see as similar to creating consumer desire), these efforts are ultimately about creating political communities. Sure, we can tailor social media ad campaigns based on appeals to different interests and demographics, but the end goal is to invite these people’s participation through small donations as an alternative to courting large sums of corporate or millionaire money. And such courtship of small donors almost always includes inviting them into the interactive and participatory aspects of a campaign.

This isn’t some hypothetical or abstract philosophical assumption. Key voting blocs for the Democrats want big ideas and morally sound positions from candidates going into 2020. Knowing the difference in ethos between a political campaign and a business selling products or services is critical in appealing to the values of those voters who will create a new majority in the coming decades.

Who is Mike Gravel?

Adriel has been volunteering with the Mike Gravel 2020 campaign as it seeks to qualify this anti-war candidate for the Democratic Primary Election debates. This is a guest post from Duncan Gammie. 

Mike Gravel is running for President and needs your help. But who is he? Gravel, the 88-year-old former Senator from Alaska, made history in 1971 when he read the Pentagon Papers into the Congressional Record, and he is attempting to make history again in 2020 with his campaign for President. Gravel has a strong record as an anti-war, anti-imperialist politician, and he has entered the race for the Democratic Party Presidential nomination in order to force certain issues in foreign and domestic policy. The campaign needs sixty-five thousand individual donors to make it to the debate stage, and so far it has more donations than many of the other so-called ‘mainstream’ campaigns. In 2008, the last time Gravel ran for President, he made it onto the Democratic debate stage and delivered a strong performance that continues to resonate today. Many of the issues Gravel forced back in 2008, and for which he was roundly disparaged by the pundit class, have since entered mainstream Democratic discourse.


“Anti-war.” – photo by Eric Kelly

This time, the rules are different. The use of individual donations as a benchmark for who gets into the debates came about in part because of the paradigm shift in campaign funding brought on by the success of the Bernie Sanders campaign. Today, small-dollar donations (the Gravel campaign is asking folks to donate as little as one dollar, if only to meet the debate requirement) are a significant indicator of grassroots support. By that measure, Mike Gravel’s 2020 campaign, judged by how many people have decided to donate already, has far more support than some campaigns thought to be front runners. It is also important to note that the overall number of donors is a better indicator of grassroots support than total fundraising, since the latter can be achieved with a few high-dollar donors. Donating to the Mike Gravel 2020 campaign is easy, and you can do it online. Gravel has a long history of bucking the party establishment and forgoing the easy route many politicians take of catering to their wealthy donors’ every whim; he has instead chosen to fight for the rights of all people. His anti-war record has been vindicated again and again by the judgment of history, and it is time to bring Mike Gravel onto the Democratic debate stage once more. Now that it is so easy to donate, there truly is no excuse.
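The arithmetic behind that point is easy to sketch. Assuming two made-up donation ledgers of (donor, amount) pairs, a grassroots campaign and a big-money one can raise very different totals from very different numbers of people:

```python
# Hypothetical ledgers of (donor_id, amount_in_dollars) pairs.
grassroots = [("d1", 1), ("d2", 1), ("d3", 5), ("d4", 2)]
big_money = [("w1", 2800), ("w2", 2800)]

def summarize(donations):
    """Return (unique donor count, total dollars raised)."""
    donors = {donor for donor, _ in donations}
    total = sum(amount for _, amount in donations)
    return len(donors), total

print(summarize(grassroots))  # (4, 9): many donors, few dollars
print(summarize(big_money))   # (2, 5600): few donors, many dollars
```

By total dollars the big-money ledger dwarfs the grassroots one, but by a debate benchmark that counts unique donors, the grassroots ledger is twice as strong.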