The role of online platforms in spreading fake news and extreme ideologies is an ongoing issue with serious real-world consequences, one that has called into question the responsibility of those who run the platforms. However, new research gives an insight into how individuals make decisions in these settings.

University of Washington research has discovered that in large groups of anonymous members, such as online forums, people make choices based on a model of the “mind of the group”.

Using a mathematical framework based on artificial intelligence and robotics, researchers were able to learn more about how people make choices in groups, and could use this to predict what choice a person would make.

Researchers based their study on something called the theory of mind. This is when a person predicts what another person will do by making a model of the other’s mind, which is much harder to do in a large group.

Subjects took part in a game while inside an MRI scanner so their brain responses could be studied. During the game, individuals chose whether to contribute a dollar to a communal pot of money or to contribute nothing. If the overall sum of money in the pot was higher than a particular amount, each individual would receive two dollars back. Unbeknownst to the subjects, the other players were actually simulated by a computer mimicking previous human players.
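The payoff structure described above is a threshold public-goods game. The sketch below illustrates the mechanics of one round; the specific threshold and reward values are illustrative assumptions, not parameters reported from the study.

```python
def play_round(contributions, threshold, reward=2):
    """One round of the public-goods game as described: each player
    contributes a dollar (1) or nothing (0); if the pot meets the
    threshold, every player receives the reward back."""
    pot = sum(contributions)
    payout = reward if pot >= threshold else 0
    # Net outcome per player: payout minus what they contributed
    return [payout - c for c in contributions]

# Example: five players, three contribute, illustrative threshold of 3
outcomes = play_round([1, 1, 1, 0, 0], threshold=3)
print(outcomes)  # contributors net $1 each, non-contributors net $2
```

The example makes the social dilemma visible: when the threshold is met, non-contributors come out ahead of contributors, yet if too many withhold, everyone gets nothing.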

Researchers then used mathematical variables to create computer models for predicting what decisions a person might make during the game.


Collective decisions in online forums: Getting a glimpse into the human mind

Lead author Koosha Khalvati, a doctoral student in the Allen School, explains what the study revealed:

“We can almost get a glimpse into a human mind and analyse its underlying computational mechanism for making collective decisions. When interacting with a large number of people, we found that humans try to predict future group interactions based on a model of an average group member’s intention. Importantly, they also know that their own actions can influence the group. For example, they are aware that even though they are anonymous to others, their selfish behaviour would decrease collaboration in the group in future interactions and possibly bring undesired outcomes.”

The new research suggests that humans create an average model of a “mind” representative of the group even when the identities of the others are not known.
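One simple way to picture such an "average mind" model is as a running estimate of how likely a typical group member is to contribute, updated after each round. The sketch below is a hypothetical illustration of that idea, not the study's actual model; the smoothing weight and function names are assumptions.

```python
def update_group_model(prior_rate, num_contributed, group_size, weight=0.8):
    """Illustrative update of an 'average group member' model: blend
    the prior estimate of the contribution rate with the fraction of
    the group that contributed in the latest round. The weight is an
    arbitrary smoothing choice for demonstration."""
    observed_rate = num_contributed / group_size
    return weight * prior_rate + (1 - weight) * observed_rate

# Start neutral at 0.5; observe 3 of 5 players contributing
estimate = update_group_model(0.5, num_contributed=3, group_size=5)
print(round(estimate, 2))  # 0.52
```

A player holding such an estimate could then weigh how their own contribution might nudge the group's future behavior, consistent with the study's finding that subjects knew their actions influenced the group.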

Senior author Rajesh Rao, the CJ and Elizabeth Hwang professor in the UW's Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology, believes that this sheds light on interactions that occur within online forums and on social media:

“Our results are particularly interesting in light of the increasing role of social media in dictating how humans behave as members of particular groups.

“In online forums and social media groups, the combined actions of anonymous group members can influence your next action, and conversely, your own action can change the future behavior of the entire group.”

As well as this, Rao believes that the research could be used in the development of more “human-friendly AI”:

“In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI,” he said. “A machine that simulates the ‘mind of a group’ and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans.”
