For this week's issue of our It's Not True newsletter, we spoke with Dr Rakoen Maertens of the University of Cambridge's Department of Psychology, lead author of the Misinformation Susceptibility Test (MIST), a quick 20-item quiz that indicates how vulnerable a person is to misinformation. Participants are asked to rate 20 news headlines – 10 real and 10 fake – presented in a randomized order. In our interview, Dr Maertens also discusses the challenges that AI poses to the fight against misinformation and shares his views on the effectiveness of psychological inoculation and debunking.
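For readers curious about the mechanics, the sketch below shows one way such a test could be administered and scored in Python. It is an illustration only: the headlines are invented placeholders, not actual MIST items, and the real MIST uses a validated item set and scoring procedure.

```python
# A minimal sketch of how a MIST-style test could be administered and
# scored. The placeholder headlines below are invented for this example;
# they are NOT actual MIST items, and the real test uses a validated
# set of 20 headlines (10 real, 10 fake).
import random

ITEMS = [
    ("Example of an invented, false-sounding headline", False),
    ("Example of a genuine news headline", True),
    # ... 18 more (headline, is_real) pairs in a full 20-item test
]

def administer(items):
    """Show each headline in randomized order; count correct real/fake ratings."""
    random.shuffle(items)  # randomized presentation order, as in the MIST
    correct = 0
    for headline, is_real in items:
        answer = input(f"Real or fake? {headline!r} [r/f]: ").strip().lower()
        if (answer == "r") == is_real:
            correct += 1
    return correct  # 0-20 on the full test; higher means less susceptible

if __name__ == "__main__":
    score = administer(list(ITEMS))
    print(f"{score}/{len(ITEMS)} headlines rated correctly")
```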
A test that measures misinformation susceptibility
How did the idea to develop the Misinformation Susceptibility Test (MIST) come about?
When I started doing research on the psychology of misinformation in 2018, I was surprised to see that researchers and practitioners kept creating new tests and measures of misinformation susceptibility with little or no validation.
This means that researchers could not really be sure what exactly they were measuring, and they might have been measuring different things with each test. When I discussed this with Dr Friedrich Götz (Assistant Professor at the University of British Columbia), a good friend and colleague with expertise in test development, we decided to develop a short, validated, reliable, and easy-to-use psychological test to solve this problem.
What are some potential applications of the MIST in the real world, and how can this tool help fight misinformation?
The MIST could help identify in which places, regions, networks, or schools people are on average more susceptible than elsewhere.
This could help inform policymakers and organisations on where to invest in better media literacy skills or where dangerous misinformation could spread more quickly, and help us learn from the groups that are particularly resilient. Similarly, it can help evaluate whether an intervention had an effect on general susceptibility. The MIST can also be used as a predictor of other outcomes: we looked at vaccine uptake, for example, and found that the MIST was a strong predictor even on top of other predictors. Finally, the MIST is a useful tool for educating people and testing your own skills.
The first survey to use the MIST, published at the end of June by YouGov, showed that, contrary to common belief, younger adults are worse than older adults at identifying false headlines. How do you interpret this data?
In research on the psychology of misinformation, we often find conflicting results regarding age and susceptibility to misinformation.
There are various explanations for these conflicting findings. One is that it depends on the type of information: older people may be worse at detecting a manipulated image, for example, but less impressed by polarising language. We also need to take into account a potential confound: is it younger people who are more susceptible (an age effect), or is it the new generation that is more susceptible (a generational shift)? The susceptibility could come, for example, from a shift towards consuming news via social media. If the majority of your news comes from Snapchat and TikTok, your view of the world and of what counts as a nuanced perspective can be distorted. We also know that many social media platforms use algorithms that promote engaging, emotional, and entertaining content, which is typically negatively correlated with accuracy and nuance.
That being said, I don't think the final word on this has been spoken, and we need to investigate it more systematically. For me, this survey opens more questions than it answers, and that's a good thing.
New challenges posed by AI chatbots
Your team turned to generative AI to create the convincing false headlines used in the test. How do you think the misinformation landscape changes with these new AI chatbots?
The misinformation landscape is changing significantly. The false headlines generated for the MIST were based on GPT-2, released in 2019. With GPT-2, we managed to generate thousands of credible but false headlines in seconds. The current version of ChatGPT, released this year, is based on GPT-4.
The capabilities of generative AI go well beyond headlines: it can generate not only credible photos but entire fake (news) websites, fake scientific articles, and fake sources. In other words, it is likely that we will be increasingly flooded with AI-generated misinformation, and this could pose significant new challenges.
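To give a sense of how easily such text can be produced, here is a minimal sketch using the openly available GPT-2 model through the Hugging Face transformers library. The prompt and sampling settings are illustrative assumptions, not the research team's actual pipeline.

```python
# A minimal sketch of headline-style text generation with GPT-2 via the
# Hugging Face transformers library. This illustrates the general
# technique only; the prompt and parameters are assumptions, not the
# MIST team's actual setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Scientists reveal:",      # hypothetical headline-style prompt
    max_new_tokens=15,         # keep completions headline-length
    num_return_sequences=5,    # sample several candidate headlines at once
    do_sample=True,            # sampling produces varied, plausible text
)

for out in outputs:
    print(out["generated_text"])
```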
Psychological inoculation and debunking
Much of the effort to combat misinformation today is focused on debunking false narratives. Do you think the idea of psychological inoculation against misinformation, using techniques such as prebunking, can be more effective?
Misinformation is a complex issue that requires a multi-layered defence system. Prebunking is excellent because it protects people against misinformation they have not yet seen, thereby helping them avoid falling into the trap.
Once the damage is done, it can often be hard to undo. We have therefore developed a variety of inoculation interventions that teach people to detect the underlying techniques common in misinformation, via videos, games, and messages. However, debunking, if done well (see the Debunking Handbook 2020 for guidance), can be very effective too. So rather than asking whether one is better than the other, it is a question of where we are on the timeline: we protect using inoculation, but if we fail, we debunk. In addition, a good debunking message should also inoculate against future misinformation.