- There is a lot of vaccine-related misinformation out there, and bots are often blamed
- But rather than eliminating bots, other tactics are needed to curb the spread of misinformation
- This study is the first to compare users' interactions with bot and human-operated accounts
There is a lot of medical misinformation (especially regarding vaccines) out there, and a common belief is that much of it is driven by bots.
False information is misleading and can be downright harmful, especially during a pandemic.
Even before Covid-19, the spread of medical misinformation was a significant problem. A study from 2018 published in the journal Science suggested that false news spreads even faster than facts via social media platforms such as Twitter. The top 1% of false news can quickly spread to between 1 000 and 100 000 people, whereas facts and scientifically based information rarely reach more than 1 000 people, the study states.
But the latest research shows that bots aren't the biggest source of such misinformation, particularly when it comes to vaccines: humans are.
Influence of Twitter bots much smaller
A research team from the University of Sydney looked at more than 53 000 randomly selected active Twitter users in the United States and monitored their interactions with more than 20 million vaccine-related posts, from both human-operated and bot accounts, between 2017 and 2019.
The study was led by public health informatics expert Associate Professor Adam Dunn, head of Biomedical Informatics and Digital Health in the School of Medical Sciences, Faculty of Medicine and Health, and published in the American Journal of Public Health.
While other studies have examined the reach of vaccine-related content on social media, this is the first to distinguish users' interactions with bots from their interactions with human-operated accounts.
They found that the majority of false vaccine-related content, identified using specific keyword searches and seen mostly by users in the United States, is generated by human-operated accounts.
And while more than a third of active Twitter users in the study posted or retweeted about vaccines, only 4.5% ever retweeted vaccine-critical information.
The final results showed that bots play little to no role in shaping vaccine discourse among Twitter users in the United States.
The researchers also found that Twitter users were less likely to share vaccine-related content from bots; instead, they formed connections with other like-minded users, whose retweets helped spread misinformation.
"The study shows that bots play little to no role in shaping vaccine discourse among Twitter users in the United States," says Associate Professor Dunn.
Study results have implications for public health
While the tools used in the study had some limitations, the researchers say their analyses could help public health bodies shape their approaches to addressing vaccine misinformation, rather than allocating resources to eliminating bots.
"Vaccine confidence is unevenly distributed within and across countries, which can lead to increased risk of outbreaks in places where too many people decide not to vaccinate," Dunn said.
"I think the best tools that social media platforms have for stopping misinformation are those that can empower their users to spot it and add friction to passing it along. For public health organisations and researchers, the tools we need are those that can prioritise resources by signalling when the benefits of tackling misinformation outweigh the risks of unintentionally amplifying it by engaging with it."