In this talk, we aim to find out whether polarization is induced in a neural network by feeding it newspaper articles whose sentiments are manufactured according to the AllSides Media Bias Chart, which charts the level of faith that people on different sides of the political spectrum place in news outlets. The project consists of a set of experiments on comparable datasets from news agencies across the categories of the media-bias chart. Perceived news-media bias is common among consumers of all political affiliations. While anecdotal evidence of this exists, along with annotated datasets that capture the "spin" a news agency puts on certain events and entities, whether the problem is widespread, and whether a neural network can detect it topically or temporally, remains to be explored.

The bias analysis is modelled as a natural language processing sentiment-analysis task and a fake-news binary classification task: headlines from news publications across the political spectrum are embedded using pre-trained sentiment models and fed to a neural network, and the level of polarization induced in that network is measured. Only a hand-annotated dataset, one that did not rely on the publishing agency's affiliation to label every article, managed to polarize the neural networks used. This suggests that the perceived polarization in news media could be specifically topical, or could be an interpolation of the other sources from which the public gets its news.

On fake-news vulnerability, news from outlets of every perceived political affiliation held up well against a fake-news dataset, with very good accuracy: none of the accuracies dropped below 95%. This is a significant result that goes some way toward debunking the AllSides Media categorization, at least if it is taken as simplistically as it is presented. These experiments can be extended in the future to include entity-based topical studies and to educate the public about their perceived biases.
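To make the framing concrete, below is a minimal Python sketch of headline-level fake-news classification. The toy headlines, their labels, and the choice of TF-IDF features with a logistic-regression classifier are illustrative assumptions made for brevity; the talk itself describes embedding headlines with pre-trained sentiment models and feeding them to a neural network, with the same framing applied per outlet category so that accuracies can be compared across the AllSides groupings.

# A minimal sketch, assuming scikit-learn is available; the headlines, labels,
# and TF-IDF + logistic-regression pipeline are stand-ins for the pre-trained
# sentiment embeddings and neural network described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy (headline, label) pairs: 1 = fake, 0 = real.
headlines = [
    "Senate passes budget bill after late-night session",
    "Scientists confirm miracle fruit cures all known diseases",
    "Local council approves new public transit funding",
    "Celebrity secretly replaced by clone, insiders claim",
]
labels = [0, 1, 0, 1]

# Turn headlines into sparse word/bigram features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(headlines)

# Binary classifier standing in for the neural network in the talk.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels)

# Score an unseen headline; a prediction of 1 means the model deems it fake.
new = vectorizer.transform(["Aliens endorse candidate in leaked memo, sources say"])
print("predicted label (1 = fake):", clf.predict(new)[0])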
Speaker: Ms. Aroma Rodrigues / USA / UMass Amherst - GitHub, LinkedIn
Language: English
Date and Time: October 9, 2021 / 10:30-11:00 (UTC+8)
Speaker Introduction
Aroma Rodrigues is a master's student at UMass Amherst. As a techno-activist she has been part of many projects that promote diversity and inclusion, and she believes that Automation is the path to Inclusion. In 2016, a teammate on her "Shoes for the Visually Impaired" project presented it at FOSSASIA. She reads, writes, and enjoys walking to explore places. She presently works at a financial services firm and believes that solving problems she herself has would solve problems for a large chunk of the world. An ML enthusiast, she holds 20+ Coursera certifications with the corresponding project work to support her learning in the field. She presented a talk on "De-mystifying Terms and Conditions using NLP" at PyCon 2018 and a talk called "Propaganda Detection in Fake News using Natural Language Processing" at PyCon ZA 2019 in Johannesburg. She spoke on detecting gender-role-based biases in school textbooks at PyOhio 2020.