YouGov Chat: An Experience in Interactive Polling

Trying to reach constituents, voters, and potential supporters via electronic communication technology is daunting for the inexperienced and frustrating for the experienced campaigner. People are so close and yet so far: close enough that you can establish an electronic dialogue with them, but too far away to screen for unspoken sentiments or to seize on that one pause that lets in everything the candidate is trying to sell. Every aspect of public opinion engineering is difficult, whether it’s gathering data (which may be done through phone append by companies like Accurate Append) or distilling that data into neat little packages. Accordingly, there’s a special place in my heart for those who brave the work out of a love for people or a love of the game.

I bring this up because I’ve finally been chat-botted for a reason other than customer service, and frankly, it was not terrible. That’s right: I recently consented to participate in a YouGov chat, a simple interactive survey conducted via an online chatbot. YouGov surveys internet users on a variety of topics; mine was about the Trump administration and how vulnerable it has become to criticism. Although I was preoccupied, I readily took the survey or, rather, co-participated in a conversation between a bot and a political junkie.

I have a long history of qualified respect for public opinion polls. I have worked for pollsters; I have answered telephone, in-person, and internet surveys; and I have crunched the numbers that turn other people’s surveys into conclusions.

A lot of the time, I think polls create, rather than reflect or record, public opinion. The question of the credibility of opinion poll results is often relegated to the folk-science section of our collective mind. Richard Seymour’s Guardian piece from last year states more eloquently what many people suspect: that pollsters hedge against the risk of an unrepresentative sample through “their selection of who to interview, and how to weight the results . . . in conditions of political uncertainty, these assumptions start to look like what they are: guesswork and ideology.” The same piece captures my biggest concern with polling: “By relying on past outcomes to guide their assumptions, pollsters made no room for political upsets,” he writes. “Since poll numbers are often used as a kind of democratic currency – a measure of “electability” – the effect of these methodological assumptions was to ratify the status quo. They reinforced the message: ‘There is no alternative.’ Now that there are alternatives, polling firms are scrabbling to update their models.”
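
To make Seymour’s point concrete, here is a minimal sketch, in Python with invented numbers, of the weighting step he describes. The sample counts and population shares below are hypothetical, not any pollster’s actual model; the point is that the headline number depends on the assumed population table.

```python
# A minimal sketch of post-stratification weighting, assuming invented
# sample counts and population shares. The "guesswork" Seymour worries
# about lives in the population table itself.

sample = {"18-34": 120, "35-54": 300, "55+": 580}         # who answered
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed shares

total = sum(sample.values())

# Each respondent in a group is weighted so the group matches its
# assumed share of the population.
weights = {g: population[g] * total / n for g, n in sample.items()}

for group, w in weights.items():
    print(f"{group}: each response counts as {w:.2f} responses")
```

Change the assumed shares and the same raw answers yield a different “public opinion.”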

A few years ago, Jill Lepore wrote a good history of public opinion polling in America. She pointed out that such polling began during the Great Depression, when the response rate (in an era when, presumably, opinion workers put far more time into each individual respondent) was over 90 percent. Lepore noted that “the lower the response rate the harder and more expensive it becomes to realize” the promise of representation. By the 1980s, the response rate had fallen to 60 percent, and pollsters worried about it falling further. The rate now is “in the single digits,” Lepore wrote, yet pollsters still find polls worth conducting.

The source of that value, even in a single-digit world, is the precarious, close-call nature of modern political campaigns. Even a tiny movement of the needle, a few votes here or there, can swing an election. Even a minimally successful PR campaign can bring in one or two extra donors, and that can swing a close race. And even minimal data can be folded into other collections of data in the service of big-data analytics.

Traditionally, that data was collected through call centers, whose operators would phone a representative sample of people and ask them questions on rating scales, in binary true/false formats, and the like. But YouGov’s surveys are different. They are built on the interactivity of chatbots, which are usually viewed as marketing utilities rather than instruments of public deliberation.
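
What separates a chatbot survey from a fixed questionnaire is that the next prompt can depend on the last answer. Here is a minimal sketch of that branching, with invented questions and flow; it is not YouGov’s implementation.

```python
# A minimal sketch of a branching chat survey. Unlike a fixed form,
# the bot routes respondents based on their prior answers.

QUESTIONS = {
    "start": ("Do you follow national politics closely? (yes/no)",
              {"yes": "opinion", "no": "end"}),
    "opinion": ("Do you approve of the administration's record? (yes/no)",
                {"yes": "end", "no": "end"}),
}

def run_survey() -> dict:
    answers = {}
    node = "start"
    while node != "end":
        prompt, branches = QUESTIONS[node]
        reply = input(prompt + " ").strip().lower()
        if reply not in branches:
            print("Sorry, I didn't catch that.")  # re-prompt instead of failing
            continue
        answers[node] = reply
        node = branches[reply]
    print("Thanks for chatting!")
    return answers

if __name__ == "__main__":
    print(run_survey())
```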

The dynamic of the exchange was interesting. In these bot interactions, if they’re done well, one forgets, but never completely forgets, that one is dealing with a computer program possessing no “real” personality or consciousness. The right programming and wording can create such a personality, at least in limited ways, and this exchange was no exception. The main issue in the survey I took was Donald Trump’s commutation of Roger Stone’s sentence. YouGov would periodically inform me that “most Americans” were either in agreement with my position or at odds with it, as a way of putting my own opinions into perspective.
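
That perspective-giving step is simple to imagine mechanically: compare one respondent’s answer to running totals and reflect the majority back. A minimal sketch, using invented tallies rather than any real YouGov data:

```python
# A minimal sketch of the "most Americans" feedback step, with
# hypothetical running tallies of prior responses.

tallies = {"agree": 5400, "disagree": 3100}

def perspective(my_answer: str) -> str:
    majority = max(tallies, key=tallies.get)
    share = tallies[majority] / sum(tallies.values())
    relation = "agrees" if my_answer == majority else "disagrees"
    return (f"About {share:.0%} of respondents chose '{majority}', "
            f"so the majority {relation} with you.")

print(perspective("disagree"))
```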

Moreover, after each cluster of answers, YouGov would ask whether I wanted to keep going. It seemed friendly and appreciative of my participation, so much so that part of me was genuinely convinced I was talking to a program with some degree of autonomy and personality.

Not everything about the survey, or the experience of taking it, was perfect. The only political parties I could choose from were Republican, Democrat, Independent, or Other. Similarly, the question concerning political orientation had no position to the left of “very liberal.” This raises the question of how useful such political-affiliation categories really are. It’s easy to argue they’re sufficient for survey purposes: third parties poll very low nationally even when their candidates receive a lot of media exposure, credible or not. And what’s the practical difference between labeling someone “very liberal” and labeling them a socialist? So I get it. But at a time when a few hundred votes can decide a major election, and when groups like the Democratic Socialists of America can influence races of national importance, it may be time to expand those categories. Few pollsters seem eager to do so.

One last complaint: at one point I made an error, typing some incomplete gibberish and accidentally sending it. There was no way to backtrack in the poll, even though it was asking for fairly detailed answers.
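
The fix seems simple enough. Here is a minimal sketch of how a chat survey could support stepping back one question, keeping answers on a stack; the questions and the “back” command are my own invention, not a feature of YouGov’s bot.

```python
# A minimal sketch of a "back" command: pop the last answer off a
# stack and re-ask that question. Questions are invented.

questions = [
    "What issue matters most to you?",
    "In a sentence, why?",
]

answers = []
i = 0
while i < len(questions):
    reply = input(questions[i] + " ").strip()
    if reply.lower() == "back":
        if answers:           # nothing to undo at the first question
            answers.pop()
            i -= 1
        continue
    answers.append(reply)
    i += 1

print(answers)
```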

But did I feel like my voice had been heard? Yes, more so than with one-off binary or Likert-scale answers. Perhaps with more interactivity, and even more data-driven proactivity on the bot’s side, chatbot-based political opinion-gathering could become genuinely dialogical, allowing the kind of relational interaction found between spaceship crews and their sentient computers. Well, perhaps not quite that much, but an impressive amount compared to what we were capable of during the Great Depression.