IT World Canada

Student wins national competition for social media tool aimed at reducing misinformation


Image courtesy of Shutterstock.com

Arvin Jagayat, a psychology student at Toronto Metropolitan University (formerly Ryerson University), has received an award for his efforts to reduce the spread of misinformation online.

Jagayat is one of five winners of the Social Sciences and Humanities Research Council’s (SSHRC) national 2022 Storytellers Challenge, and received a C$4,000 cash prize.

The competition challenges Canadian post-secondary students to tell the story of how social sciences and humanities research is impacting our lives, our world, and our future for the better.

The five winners were announced during the Congress of the Humanities and Social Sciences (Congress 2022), Canada’s largest academic gathering, taking place virtually this year from May 12 to 20.

How it works


Jagayat, a fifth-year PhD student in Toronto Met’s psychology program, created an open-source Mock Social Media Website Tool that can generate simulated social media websites and collect detailed behavioural data on how participants interact with them.

In numerous experiments, his tool was used to examine what motivates people to maintain positive social identities and to refrain from spreading or interacting with misinformation.

“It’s really hard to measure social media behaviour in a controlled manner,” he said. 

Jagayat said that prior studies that collect tweets online and run analyses on them offer little insight into the context behind an individual’s decision to engage with something on social media.

“So what this website allows researchers to do is take complete control over the social media environment. Which platform do you want to simulate? What posts do you want? What attachment do you want on those posts?” he explained. “When participants actually go to interact with it, not only do you have incredibly detailed behavioural data on what they liked, which links they clicked and so on, but you can be very certain, because you created this controlled environment, that it’s only because you presented them with this set of posts that they interact in that manner.”

The mock social media tool shows a user a series of posts, such as news articles, photos, and videos. Users can react by liking, responding with emojis, or commenting. At the end of the demonstration, the tool reveals the list of misinformation posts the user reacted to, if any.
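The flow described above, where a participant sees a fixed feed of posts, each reaction is logged, and the tool finally reveals which misinformation posts they engaged with, can be sketched roughly as follows. This is a minimal illustrative model in Python; the class and field names are assumptions for the sake of the example, not the actual code of the Mock Social Media Website Tool:

```python
# Illustrative sketch only: names and structure are assumptions,
# not the Mock Social Media Website Tool's real API.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: int
    text: str
    is_misinformation: bool = False  # researcher-assigned label


@dataclass
class Session:
    """Logs one participant's interactions with a fixed, controlled feed."""
    feed: list
    reactions: dict = field(default_factory=dict)  # post_id -> reaction type

    def react(self, post_id: int, reaction: str) -> None:
        # Record a like, emoji, or comment against a specific post.
        self.reactions[post_id] = reaction

    def misinformation_engaged(self) -> list:
        """Posts flagged as misinformation that the participant reacted to."""
        return [p for p in self.feed
                if p.is_misinformation and p.post_id in self.reactions]


# A researcher-defined feed mixing real and flagged content.
feed = [
    Post(1, "Local election results announced"),
    Post(2, "Miracle cure discovered!", is_misinformation=True),
    Post(3, "New park opens downtown"),
]

session = Session(feed)
session.react(2, "like")
session.react(3, "heart")

# End-of-demonstration reveal: which misinformation drew a reaction.
flagged = session.misinformation_engaged()
```

Because the researcher constructs the feed, any difference in reactions can be attributed to the posts shown, which is the controlled-environment advantage Jagayat describes.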

Jagayat said that by controlling what type of information he and his team present to people, they can examine different types of misinformation individually and experimentally. He added that in some studies, the team was able to compare the same piece of misinformation across different languages and presentation formats.

“We have an idea on the sort of broad scale that maybe some content induces strong negative emotions in people, like fear or anger. But what are the specifics of that content? Those are the questions that we can answer using the tool that are not as easily done with other existing methods, or come with different caveats that make it hard to generalize some cases,” he said.

Creating the tool

Jagayat said he became interested in finding ways to help address misinformation in 2016, just after the U.S. presidential election, when a great deal of discourse emerged on social media.

He started to look into the designs of different social media platforms to see whether there is any correlation between a platform’s visual design and how misinformation spreads on it.

For example, he wanted to look into why misinformation spreads so fast in certain formats, such as on the messaging app WhatsApp. 

“I thought if we had some sort of open source tool that in the future many different people could contribute to… It could help facilitate so much research, not just on misinformation, but stuff like hate speech, racism, or even positive psychology: different things that make people happy. It’s not necessarily negative behaviour or misinformation that the tool is designed to assess, it’s any social media content. So at that point, I was like, it can be something powerful. I know it didn’t exist because I’ve tried to look for it,” he said.

Jagayat said he is constantly working on ways to update the tool. He said he often receives requests from different researchers asking him to add features. 

Right now, the group is working to make the platform simulate Facebook, with plans to add a Twitter simulation as well. Longer term, the goal is to have the platform mimic Instagram, Reddit, and TikTok.
