Is the Era of Cheap Internet Surveys Over?

A surprising amount of modern life depends on internet surveys.

Such surveys are the bedrock of polling, market research and much of social science. They enable pollsters to forecast elections and parties to tweak their rhetoric as public opinion shifts. In the private sector, they permit companies to gauge consumers’ interest in new products. And in academia, they allow researchers to run survey experiments and gather comparative data across countries.

The key benefit of internet surveys is that they’re cheap. YouGov states on its website that it will run a “short, nationally representative survey to 200 respondents” for $300. Running a survey the traditional way — by conducting face-to-face interviews — would cost five or ten times as much. You’d need interviewers in every part of the country, and they’d have to travel out to individual respondents’ houses.

However, the era of the cheap internet survey might be over. Why? Artificial intelligence.

As a new paper by Sean Westwood explains, AIs can now be “trivially programmed” to answer online surveys in ways that are essentially indistinguishable from humans — and not just humans in general but specific categories of respondent. You can tell an AI to pose as a ‘middle-aged white woman from California’ or as a ‘young black man from Alabama’ or anything else. And the AI will give answers that are consistent with the typical answers of those types of respondents.
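To make concrete how “trivially” this can be programmed, here is a minimal sketch of building such a persona instruction. The function name, the persona fields and the prompt wording are illustrative assumptions, not Westwood’s actual setup; a real attacker would pass a string like this to an LLM API.

```python
# Illustrative sketch: composing a persona prompt for a synthetic respondent.
# All names and wording here are hypothetical, for illustration only.

def build_persona_prompt(age_group: str, ethnicity: str, gender: str,
                         state: str, question: str) -> str:
    """Instruct a model to answer a survey item as a specific demographic."""
    persona = f"a {age_group} {ethnicity} {gender} from {state}"
    return (
        f"You are {persona} taking an online opinion survey. "
        f"Answer the following question the way such a respondent "
        f"typically would, in one short sentence.\n\n"
        f"Question: {question}"
    )

prompt = build_persona_prompt("middle-aged", "white", "woman", "California",
                              "Do you support stricter emissions rules?")
print(prompt)
```

The point is not the particular wording but how little effort is involved: a one-line persona swap produces a different “respondent”.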

Of course, online platforms have long been aware of the risk of bots or inattentive humans ‘contaminating’ their surveys. However, these risks were easily detectable. Traditional bots respond at random or just click the first option for every question. They also tend to complete surveys extremely fast, since they’re designed to fill out as many as possible and collect the small fee users receive for taking part.  
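The traditional red flags described above amount to a couple of simple filters, which is why classic bots were easy to catch. A minimal sketch, with an illustrative (not industry-standard) time threshold:

```python
# Sketch of classic bot screening: flag respondents who answer uniformly
# (e.g. always the first option) or finish implausibly fast.
# The 30-second threshold is an illustrative assumption.

def looks_like_classic_bot(answers: list, seconds_taken: float,
                           min_plausible_seconds: float = 30.0) -> bool:
    all_same = len(set(answers)) == 1   # e.g. clicked option 0 every time
    too_fast = seconds_taken < min_plausible_seconds
    return all_same or too_fast

print(looks_like_classic_bot([0, 0, 0, 0, 0], 12.0))   # uniform and fast
print(looks_like_classic_bot([2, 0, 3, 1, 2], 240.0))  # varied, human-paced
```

A synthetic respondent that varies its answers plausibly and paces itself sails straight through both checks, which is exactly the problem.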

New ‘synthetic respondents’ are far more sophisticated. As Westwood demonstrates in the paper, they can bypass standard ‘attention checks’ over 99% of the time. For example, they can easily pass tests of image recognition or reading comprehension that were designed to filter out bots. And it’s unclear what other tests could be used in their place since AIs are already better at test-taking than most humans.

Why would anyone bother to program an AI to fill out online surveys? One obvious reason is money. As Westwood notes, it would only cost about $0.05 to fill out a standard survey with AI, as compared to a payout of around $1.50. So you’re looking at a profit margin of 97%.
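Westwood’s economics can be restated in two lines of arithmetic, using the figures above:

```python
# ~$0.05 of model cost against a ~$1.50 payout per survey (figures from Westwood).
cost_per_survey = 0.05
payout_per_survey = 1.50

profit = payout_per_survey - cost_per_survey
margin = profit / payout_per_survey

print(f"profit per survey: ${profit:.2f}")
print(f"profit margin: {margin:.0%}")
```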

Another potential motive is to manipulate public opinion. Foreign adversaries could program AIs to give answers more favourable to their national interests, with the aim of dissuading lawmakers in the target country from taking actions that might harm them. For example, China could create synthetic respondents that always answer ‘no’ to items like, ‘Should the US adopt a more hostile stance towards the People’s Republic?’

The only way of addressing this problem (aside from reverting to face-to-face interviews) is much more rigorous vetting of respondents: requiring proof of identity, using multi-factor authentication, and so on. And even those obstacles might not prove insurmountable for sufficiently motivated actors.

ChatGPT has already been publicly available for three years. And Westwood is surely not the first person to whom this idea has occurred. How much of the survey data produced since 2022 was contaminated with AI? We simply don’t know.

10 Comments
transmissionofflame
4 months ago

Interesting, thanks

Jon Garvey
4 months ago

How do I know you’re not just an AI bot saying thanks??

transmissionofflame
4 months ago
Reply to  Jon Garvey

Indeed
However, I would have thought that you could tell an LLM was an LLM by asking questions designed to trigger its safety protocols

GroundhogDayAgain
4 months ago

Cunning response. Just what an AI might say.

Mogwai
4 months ago

🤣😂 I’ll be honest, there’s a few, erm..mono-emotional people on here I’ve got my suspicions about.🥸🤫

transmissionofflame
4 months ago

Lol

The major LLMs won’t impersonate humans, I believe – I think they are programmed not to. You’d have to train your own, which is expensive.

EppingBlogger
4 months ago

Perhaps they will use AI to mimic the voting behaviour of the public and decide we all want Starmer again.

Gezza England
4 months ago
Reply to  EppingBlogger

Ah…maybe that explains why Starmer is under the delusion that he will be PM until 2034.

DontPanic
4 months ago

In my experience YouGov surveys are biased, giving little choice of answers in order to get the results they want.

Whomakesthisstuffup
4 months ago

Starmer is a bot – the 2TK bot. However, I think someone forgot to program in the feedback loop for his mistakes, so he keeps making them. Either that or Millibrain won’t pay the energy bill to do it!