How to spot a bot

Online bots have become increasingly prevalent over the past few years. They can be a slight nuisance or create complete havoc. Built to automate repetitive tasks and simulate human behaviour, bots do a range of things: from posting rogue comments on social media and writing fake reviews for businesses, to sending spam emails. Online bots are thought to account for almost half of internet traffic today.

So, how does this affect the world of market research?

While bots have a variety of uses, bots and fraudulent respondents within the market research industry can be very damaging and pose a serious threat to the production of valid and reliable data. Where there is financial gain to be had, bots will follow, whether that means fraudulent respondents using bots to take part in surveys multiple times or fabricating application answers to get into an incentivised focus group. With technology and AI getting smarter, bots are becoming increasingly difficult to detect.

As researchers, how do we overcome this?

We might never be able to get rid of them completely, and with their methods ever-changing, protecting research data is a persistent challenge. However, there are a few techniques we can use to mitigate the damage.

By having multiple ways of validating responses and potential respondents, we can detect and remove bots step by step.

One way to curb this is by using cross-checking validation techniques, an example being cross-checking emails. In a recent study here at Progressive, we were looking for individuals to take part in a focus group. We needed some hard-to-reach respondents for this study, so took to social media to find our participants. We were amazed the following day to find about 70 applications to take part! However, upon further inspection, not all responses made sense: answers were contradictory, and the email address layouts were all very similar. Alarm bells rang. It transpired that all these applications had come from one person sending out multiple ‘bot’ responses in the hope of being accepted to attend the focus group for an incentive.
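
If applications are collected electronically, a quick pattern check can help surface clusters of suspiciously similar email addresses before anyone is invited along. The sketch below is purely illustrative rather than a description of how this particular study was handled; the field names and the normalisation rule are assumptions.

```python
import re
from collections import defaultdict

def email_pattern(email: str) -> str:
    """Reduce an email address to a rough 'layout', e.g.
    'jane.doe84@mail.com' -> 'a.a0@mail.com', so that addresses
    built from the same template group together."""
    local, _, domain = email.lower().partition("@")
    local = re.sub(r"[a-z]+", "a", local)   # collapse runs of letters
    local = re.sub(r"\d+", "0", local)      # collapse runs of digits
    return f"{local}@{domain}"

def flag_similar_emails(applications, threshold=3):
    """Group applications whose email addresses share a layout.

    `applications` is assumed to be a list of dicts with an 'email' key;
    any group of `threshold` or more identical layouts is worth a manual look.
    """
    groups = defaultdict(list)
    for app in applications:
        groups[email_pattern(app["email"])].append(app)
    return {pattern: apps for pattern, apps in groups.items() if len(apps) >= threshold}
```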

When participants fill out an application form to take part in a focus group, for example, sending a follow-up email that cross-checks their responses is a great way to see whether the potential recruit is simply looking to earn a quick penny. Re-validating demographic details such as age and job title, or confirming their phone number, are also good, conclusive checks to include. Sending the follow-up email a few days later builds in a buffer period in which fraudsters are likely to forget the details of their original answers. Checking that the validation responses match the original application is key to separating genuine applications from false ones.
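
Where both the application and the follow-up answers are captured electronically, the comparison itself can be semi-automated. Here is a rough sketch assuming hypothetical field names; the final judgement on any mismatch should still sit with a human recruiter.

```python
def cross_check(original: dict, follow_up: dict, fields=("age", "job_title", "phone")):
    """Compare follow-up validation answers against the original application.

    Both arguments are assumed to be dicts of answers keyed by question name;
    the function returns the fields that no longer match, so a recruiter can
    review only the mismatches rather than every application by hand.
    """
    mismatches = {}
    for field in fields:
        a = str(original.get(field, "")).strip().lower()
        b = str(follow_up.get(field, "")).strip().lower()
        if a != b:
            mismatches[field] = (original.get(field), follow_up.get(field))
    return mismatches

# Example: a changed age and job title would be flagged for review.
# cross_check({"age": "34", "job_title": "Nurse", "phone": "07700 900000"},
#             {"age": "29", "job_title": "Teacher", "phone": "07700 900000"})
```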

For quantitative methods, including open-ended text questions in online surveys is another good way to spot dodgy responses. While this isn’t a fail-safe way to catch every bot, it’s a great way to check for duplicate open-ended answers, as well as ‘non-human’ sounding responses. Other key signs of typical ‘bot behaviour’ to look out for include faster-than-normal completion times, inconsistent answering and outlying data values, for example, implausible figures when someone is asked about financial spend or the number of times they do something.
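
For larger online surveys, these checks lend themselves to a simple data-quality pass over the export. The sketch below assumes a pandas DataFrame with hypothetical column names ('open_end', 'duration_secs', 'spend') and illustrative thresholds; real cut-offs should be set against each survey’s own benchmarks.

```python
import pandas as pd

def flag_suspect_responses(df: pd.DataFrame,
                           min_duration_secs: float = 120,
                           spend_z_cutoff: float = 3.0) -> pd.DataFrame:
    """Add simple quality flags to a survey dataset.

    Assumed columns (hypothetical names):
      open_end       - free-text answer
      duration_secs  - time taken to complete the survey
      spend          - a numeric answer, e.g. reported financial spend
    """
    out = df.copy()

    # Duplicate open-ended answers (after light normalisation).
    normalised = out["open_end"].str.lower().str.strip()
    out["dup_open_end"] = normalised.duplicated(keep=False)

    # Faster-than-plausible completion times.
    out["too_fast"] = out["duration_secs"] < min_duration_secs

    # Outlying numeric values, flagged by z-score.
    z = (out["spend"] - out["spend"].mean()) / out["spend"].std(ddof=0)
    out["spend_outlier"] = z.abs() > spend_z_cutoff

    out["suspect"] = out[["dup_open_end", "too_fast", "spend_outlier"]].any(axis=1)
    return out
```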

All in all…

Bots and bogus respondents might always crop up in market research data, especially when financial gain is to be had, and we may never be able to stop them from submitting responses to online research entirely. But knowing what to look out for and applying rigorous quality assurance techniques will ensure they do not impact our final data.
