Studies Show Tweets By Black Folks One and a Half Times More Likely To Be Flagged as “Offensive” by Algorithms Used to Detect Hate Speech (Video)

Artificial intelligence technology used to identify racist and violent speech on social media may actually be amplifying racist bias.

A recent report by University of Washington researchers examined hate speech detection algorithms and found that leading AI models were one and a half times more likely to flag tweets authored by Black people as offensive or hateful, according to Recode. Moreover, tweets written in African-American Vernacular English (AAVE) were more than twice as likely to be flagged.

A second study, out of Cornell University, uncovered similar patterns of racial bias against Black speech after researchers combed through five academic data sets, containing some 155,000 online posts, that are used to study hate speech.

Researchers say the reason for the bias is that humans teach these algorithms what is offensive and what is not. Many of the annotators come from different cultural backgrounds, so they often have no idea of the context in which these words are being used, and their mislabeled judgments get baked into the models.
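To make that mechanism concrete, here is a minimal sketch of how a classifier inherits its annotators' judgments. This is not the model from either study; the pipeline, the toy tweets, and the labels are all illustrative assumptions. The point is simply that if annotators mark harmless AAVE phrasing as offensive, the trained model reproduces that labeling on similar text.

```python
# Minimal illustrative sketch (not from either study): a text classifier
# learns whatever its human annotators labeled, so biased labels become
# biased predictions. All tweets and labels below are toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated training set. Suppose annotators unfamiliar
# with AAVE mark its phrasing "offensive" (1) while similar
# standard-English phrasing is marked "not offensive" (0).
tweets = [
    "I hate everyone like you",       # genuinely hostile
    "wishing harm on that whole group",
    "hope you have a great day",
    "lovely weather this afternoon",
    "y'all wildin out here fr",       # harmless AAVE, mislabeled 1
    "bruh that fit go hard",          # harmless AAVE, mislabeled 1
]
labels = [1, 1, 0, 0, 1, 1]  # the last two 1s encode annotator bias

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# The trained model now tends to flag unseen AAVE text, reproducing the
# annotators' judgments rather than any objective notion of offense.
print(model.predict(["bruh this party go hard fr"]))  # likely [1]
print(model.predict(["this party is really fun"]))    # likely [0]
```

Nothing in the model itself targets anyone; it simply optimizes against the labels it was given, which is why both studies point back to the human annotation step.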

Video of my commentary:
