FreeFeed: Combating Degenerate Feedback Loops With Linguistic Inferences From Human Interactions On Social Media
by Aashika Jagadeesh
Abstract – Social network recommendation systems are frequently linked to encouraging polarization and widening ideological division, but this effect has rarely been examined in detail. Pernicious feedback loops are often created when these systems are trained on data that originates from users already exposed to algorithmic recommendations. This study analyzes the influence that feedback loops have on user mental health and assesses the ability of a Bayesian choice model (FreeFeed) to prevent the harmful reinforcement of views. First, tweets were collected through the Twitter API and filtered by four factors: drugs, relationships, academics, and physical appearance. After 120,000 tweets were collected and preprocessed, they were used to train and test a generalized logistic regression model and a multi-layer perceptron neural network. The models were compared on metrics including F1 score (max 0.963), AUC (max 0.990), and accuracy (max 93.7%). The algorithm was then implemented in an online simulation and tested on a set of social media users (n = 102) in New Jersey to identify the impact of both the revised model and the standard recommendation system model on self-esteem. Over the course of 3 weeks, participants completed a survey before and after use, with responses scored on the Rosenberg Self-esteem Scale. A statistically significant difference was found between the revised model and the standard recommendation system model in the online simulations, suggesting that policy makers and platform users should take these effects into consideration when governing the use of feed algorithms.
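The abstract compares the two classifiers on F1 score, AUC, and accuracy. As a minimal sketch of how those three metrics are defined, the snippet below implements each from scratch for binary labels; the label and score values shown are toy inputs, not the study's data, and the paper's actual evaluation pipeline is not specified here.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def auc(y_true, y_score):
    """ROC AUC via the rank-based (Mann-Whitney U) formulation:
    the probability a random positive outscores a random negative."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice a library such as scikit-learn (`sklearn.metrics.f1_score`, `roc_auc_score`, `accuracy_score`) would compute these; the hand-rolled versions are shown only to make the definitions behind the reported maxima (0.963, 0.990, 93.7%) concrete.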